http://arxiv.org/abs/2307.02100v2 | published 2023-07-05 | cs.CV

MDViT: Multi-domain Vision Transformer for Small Medical Image Segmentation Datasets

Siyi Du^1 (0000-0002-9961-4533), Nourhan Bayasi^1 (0000-0003-4653-6081), Ghassan Hamarneh^2 (0000-0001-5040-7448), Rafeef Garbi^1 (0000-0001-6224-0876)

^1 University of British Columbia, Vancouver, British Columbia, CA ({siyi,nourhanb,rafeef}@ece.ubc.ca)
^2 Simon Fraser University, Burnaby, British Columbia, CA (hamarneh@sfu.ca)

August 1, 2023
Despite its clinical utility, medical image segmentation (MIS) remains a daunting task due to images' inherent complexity and variability. Vision transformers (ViTs) have recently emerged as a promising solution to improve MIS; however, they require larger training datasets than convolutional neural networks. To overcome this obstacle, data-efficient ViTs were proposed, but they are typically trained using a single source of data, which overlooks the valuable knowledge that could be leveraged from other available datasets. Naïvely combining datasets from different domains can result in negative knowledge transfer (NKT), i.e., a decrease in model performance on some domains with non-negligible inter-domain heterogeneity. In this paper, we propose MDViT, the first multi-domain ViT that includes domain adapters to mitigate data-hunger and combat NKT by adaptively exploiting knowledge in multiple small data resources (domains).
Further, to enhance representation learning across domains, we integrate a mutual knowledge distillation paradigm that transfers knowledge between a universal network (spanning all the domains) and auxiliary domain-specific network branches. Experiments on 4 skin lesion segmentation datasets show that MDViT outperforms state-of-the-art algorithms, with superior segmentation performance and a fixed model size at inference time even as more domains are added. Our code is available at <https://github.com/siyi-wind/MDViT>.
§ INTRODUCTION
Medical image segmentation (MIS) is a crucial component in medical image analysis, which aims to partition an image into distinct regions (or segments) that are semantically related and/or visually similar. This process is essential for clinicians to, among other tasks, perform qualitative and quantitative assessments of various anatomical structures or pathological conditions and perform image-guided treatments or treatment planning <cit.>. Vision transformers (ViTs), with their inherent ability to model long-range dependencies, have recently been considered a promising technique to tackle MIS. They process images as sequences of patches, with each patch having a global view of the entire image. This enables a ViT to achieve improved segmentation performance compared to traditional convolutional neural networks (CNNs) on many segmentation tasks <cit.>. However, due to the lack of inductive biases, such as weight sharing and locality, ViTs are more data-hungry than CNNs, i.e., require more data to train <cit.>. Meanwhile, it is common to have access to multiple, diverse, yet small-sized datasets (100s to 1000s of images per dataset) for the same MIS task, e.g., PH2 <cit.> and ISIC 2018 <cit.> in dermatology, LiTS <cit.> and CHAOS <cit.> in liver CT, or OASIS <cit.> and ADNI <cit.> in brain MRI. As each dataset alone is too small to properly train a ViT, the challenge becomes how to effectively leverage the different datasets.
Related works on mitigating ViTs' data-hunger or multi-domain adaptive learning. U (universal) implies a model spans multiple domains. F means the model's size at inference time remains fixed even when more domains are added.

Method | ViT | Mitigates ViTs' data-hunger | U | F
<cit.> | √ | √, by adding inductive bias | × | -
<cit.> | √ | √, by knowledge sharing | × | -
<cit.> | √ | √, by increasing dataset size | × | -
<cit.> | √ | √, by unsupervised pretraining | × | -
<cit.> | × | × | √ | √
<cit.> | × | × | √ | ×
<cit.> | √ | × | √ | ×
MDViT (ours) | √ | √, by multi-domain learning | √ | √
Various strategies have been proposed to address ViTs' data-hunger (Table <ref>), mainly: adding inductive bias by constructing a hybrid network that fuses a CNN with a ViT <cit.>, imitating CNNs' shifted filters and convolutional operations <cit.>, or enhancing spatial information learning <cit.>; sharing knowledge by transferring knowledge from a CNN <cit.> or pretraining ViTs on multiple related tasks and then fine-tuning on a down-stream task <cit.>; increasing data via augmentation <cit.>; and unsupervised pre-training <cit.>. Nevertheless, one notable limitation in these approaches is that they are not universal, i.e., they rely on separate training for each dataset rather than incorporating valuable knowledge from related domains. As a result, they can incur additional training, inference, and memory costs, which is especially challenging when dealing with multiple small datasets in the context of MIS tasks. Multi-domain learning, which trains a single universal model to tackle all the datasets simultaneously, has been found promising for reducing computational demands while still leveraging information from multiple domains <cit.>. To the best of our knowledge, multi-domain universal models have not yet been investigated for alleviating ViTs' data-hunger.
Given the inter-domain heterogeneity resulting from variations in imaging protocols, scanner manufacturers, etc. <cit.>, directly mixing all the datasets for training, i.e., joint training, may improve a model's performance on one dataset while degrading performance on other datasets with non-negligible unrelated domain-specific information, a phenomenon referred to as negative knowledge transfer (NKT) <cit.>. A common strategy to mitigate NKT in computer vision is to introduce adapters that aid the model in adapting to different domains, i.e., multi-domain adaptive training (MAT), such as domain-specific mechanisms <cit.> and squeeze-excitation layers <cit.> (Table <ref>). However, those MAT techniques are either built on CNNs rather than ViTs, or are not fixed-size, i.e., the model's size at inference time increases linearly with the number of domains.
To address ViTs' data-hunger, in this work, we propose MDViT, a novel fixed-size multi-domain ViT trained to adaptively aggregate valuable knowledge from multiple datasets (domains) for improved segmentation. In particular, we introduce a domain adapter that adapts the model to different domains to mitigate negative knowledge transfer caused by inter-domain heterogeneity. Besides, for better representation learning across domains, we propose a novel mutual knowledge distillation approach that transfers knowledge between a universal network (spanning all the domains) and additional domain-specific network branches.
We summarize our contributions as follows: (1) To the best of our knowledge, we are the first to introduce multi-domain learning to alleviate ViTs' data-hunger when facing limited samples per dataset. (2) We propose a multi-domain ViT, MDViT, for medical image segmentation with a novel domain adapter to counteract negative knowledge transfer and with mutual knowledge distillation to enhance representation learning. (3) The experiments on 4 skin lesion segmentation datasets show that our multi-domain adaptive training outperforms separate and joint training (ST and JT), notably with a 10.16% improvement in IOU on the skin cancer detection dataset compared to ST, and that MDViT outperforms state-of-the-art data-efficient ViTs and multi-domain learning strategies.
§ METHODOLOGY
Let X∈ℝ^H × W × 3 be an input RGB image and Y∈{0,1}^H × W be its ground-truth segmentation mask. Training samples {(X,Y)} come from M datasets, each representing a domain. We aim to build and train a single ViT that performs well on all domain data and addresses the insufficiency of samples in any of the datasets. We first introduce our baseline (BASE), a ViT with hierarchical transformer blocks (Fig. <ref>-a). Our proposed MDViT extends BASE with 1) a domain adapter (DA) module inside the factorized multi-head self-attention (MHSA) to adapt the model to different domains (Fig. <ref>-b,c), and 2) a mutual knowledge distillation (MKD) strategy to extract more robust representations across domains (Fig. <ref>-d). We present the details of MDViT in Section <ref>.
BASE is a U-shaped ViT based on the architecture of U-Net <cit.> and pyramid ViTs <cit.>. It contains encoding (the first four) and decoding (the last four) transformer blocks, a two-layer CNN bridge, and skip connections. As described in <cit.>, the i-th transformer block involves a convolutional patch embedding layer with a patch size of 3×3 and L_i transformer layers with factorized MHSA of linear complexity; the former converts a feature map X_{i-1} into a sequence of patch embeddings z_i ∈ ℝ^{N_i × C_i}, where N_i = (H/2^{i+1}) × (W/2^{i+1}), 1 ≤ i ≤ 4, is the number of patches and C_i is the channel dimension. We use the same position embedding as <cit.> and skip connections as <cit.>. To reduce computational complexity, following <cit.>, we add two CNN layers before and one after the transformer blocks, enabling the 1st transformer block to process features starting from a lower resolution: H/4 × W/4. We do not employ integrated and hierarchical CNN backbones, e.g., ResNet, in BASE as data-efficient hybrid ViTs <cit.> do, in order to clearly evaluate the efficacy of multi-domain learning in mitigating ViTs' data-hunger.
§.§ MDViT
MDViT consists of a universal network (spanning M domains) and M auxiliary network branches, i.e., peers, each associated with one of the M domains. The universal network is the same as BASE, except we insert a domain adapter (DA) in each factorized MHSA to tackle negative knowledge transfer. Further, we employ a mutual knowledge distillation (MKD) strategy to transfer domain-specific and shared knowledge between peers and the universal network to enhance representation learning. Next, we will introduce DA and MKD in detail.
Domain Adapter (DA): In multi-domain adaptive training, some methods build domain-specific layers in parallel with the main network <cit.>. Without adding domain-specific layers, we utilize the existing parallel structure in ViTs, i.e., MHSA, for domain adaptation. The H parallel heads of MHSA mimic how humans examine the same object from different perspectives <cit.>. Similarly, our intuition in inserting the DA into MHSA is to enable the different heads to have varied perspectives across domains. Rather than manually designating each head to one of the domains, MDViT, guided by a domain label, learns to focus on the corresponding features from different heads when encountering a domain. DA contains two steps: Attention Generation and Information Selection (Fig. <ref>-c).
Attention Generation generates attention for each head. We first pass a domain label vector m (we adopt one-hot encoding m ∈ ℝ^M, but other encodings are possible) through one linear layer with a ReLU activation function to acquire a domain-aware vector d ∈ ℝ^{K/r}, where K is the channel dimension of features from the heads. We set the reduction ratio r to 2. After that, similar to <cit.>, we calculate attention for each head: a^h = ψ(W^h d) ∈ ℝ^K, h = 1, 2, ..., H, where ψ is a softmax operation across heads and W^h ∈ ℝ^{K × K/r}.
Information Selection adaptively selects information from different heads. After getting the feature U^h = [u_1^h, u_2^h, ..., u_K^h] ∈ ℝ^{N × K} from the h-th head, we utilize a^h to calibrate the information along the channel dimension: ũ_k^h = a^h_k · u_k^h.
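To make the two DA steps concrete, the following is a minimal PyTorch sketch; the module and tensor names, shapes, and batch handling are our illustrative assumptions rather than the released implementation.

```python
# Minimal sketch of the Domain Adapter (DA), assuming PyTorch.
# H heads of dimension K, M domains, reduction ratio r = 2, N patches.
import torch
import torch.nn as nn

class DomainAdapter(nn.Module):
    def __init__(self, num_domains: int, num_heads: int, head_dim: int, r: int = 2):
        super().__init__()
        # Attention Generation: domain label -> domain-aware vector d in R^{K/r}
        self.fc = nn.Sequential(nn.Linear(num_domains, head_dim // r), nn.ReLU())
        # One projection W^h in R^{K x K/r} per head
        self.head_proj = nn.ModuleList(
            [nn.Linear(head_dim // r, head_dim, bias=False) for _ in range(num_heads)]
        )

    def forward(self, heads: torch.Tensor, domain_onehot: torch.Tensor) -> torch.Tensor:
        # heads: (B, H, N, K) features U^h from the H attention heads
        # domain_onehot: (B, M) one-hot domain label m
        d = self.fc(domain_onehot)                                    # (B, K/r)
        a = torch.stack([proj(d) for proj in self.head_proj], dim=1)  # (B, H, K)
        a = torch.softmax(a, dim=1)                                   # psi: softmax across heads
        # Information Selection: channel-wise calibration u~_k^h = a_k^h * u_k^h
        return heads * a.unsqueeze(2)

# Usage: 8 heads of dim 64, 4 domains, batch of 2 with 196 patches.
da = DomainAdapter(num_domains=4, num_heads=8, head_dim=64)
u = torch.randn(2, 8, 196, 64)
m = torch.eye(4)[torch.tensor([0, 2])]            # domain labels of the two samples
print(da(u, m).shape)                             # torch.Size([2, 8, 196, 64])
```

The softmax across heads realizes ψ, and the broadcast multiplication performs the channel-wise calibration of each head's features.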
Mutual Knowledge Distillation (MKD): Distilling knowledge from domain-specific networks has been found beneficial for universal networks to learn more robust representations <cit.>. Moreover, mutual learning that transfers knowledge between teachers and students enables both to be optimized simultaneously <cit.>. To realize these benefits, we propose MKD that mutually transfers knowledge between auxiliary peers and the universal network. In Fig. <ref>-d, the mth auxiliary peer is only trained on the mth domain, producing output Ŷ^m, whereas the universal network's output is Ŷ. Similar to <cit.>, we utilize a symmetric Dice loss L_mkd^a_m=Dice(Ŷ, Ŷ^m) as the knowledge distillation loss. Each peer is an expert in a certain domain, guiding the universal network to learn domain-specific information. The universal network experiences all the domains and grasps the domain-shared knowledge, which is beneficial for peer learning.
Each Auxiliary Peer is trained on a small, individual dataset specific to that peer (Fig. <ref>-d). To achieve a rapid training process and prevent overfitting, particularly when working with numerous training datasets, we adapt a lightweight multilayer perceptron (MLP) decoder designed for ViT encoders <cit.> to our peers' architecture. Specifically, multi-level features from the encoding transformer blocks (Fig. <ref>-a) go through an MLP layer and an up-sample operation to unify the channel dimension and resolution to H/4×W/4, which are then concatenated with the feature involving domain-shared information from the universal network's last transformer block. Finally, we pass the fused feature to an MLP layer and do an up-sample to obtain a segmentation map.
§.§ Objective Function
Similar to Combo loss <cit.>, BASE's segmentation loss combines Dice and binary cross entropy loss: L_seg = L_Dice + L_bce. In MDViT, we use the same segmentation loss for the universal network and auxiliary peers, denoted as L_seg^u and L_seg^a, respectively. The overall loss is calculated as follows.
L_total = L_seg^u(Y, Ŷ) + α ∑_{m=1}^M L_seg^{a_m}(Y, Ŷ^m) + β ∑_{m=1}^M L_mkd^{a_m}(Ŷ, Ŷ^m).
We set both α and β to 0.5. L_seg^a_m does not optimize DA to avoid interfering with the domain adaptation learning. After training, we discard the auxiliary peers and only utilize the universal network for inference.
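As a hedged illustration of the objective, the sketch below assembles L_total from Dice, BCE, and the symmetric Dice MKD term; the smoothing constant and the omission of the stop-gradient detail (the paper prevents L_seg^{a_m} from optimizing DA) are our simplifications.

```python
# Sketch of the overall objective with alpha = beta = 0.5, assuming PyTorch.
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps: float = 1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def seg_loss(pred, target):
    # Combo-style segmentation loss: L_seg = L_Dice + L_bce
    pred = pred.clamp(1e-6, 1 - 1e-6)
    return dice_loss(pred, target) + F.binary_cross_entropy(pred, target)

def total_loss(y, y_hat, peer_preds, alpha=0.5, beta=0.5):
    # y_hat: universal network output; peer_preds: one output Y^m per auxiliary peer.
    # Note: the paper stops L_seg^{a_m} from optimizing DA; that detail is omitted here.
    loss = seg_loss(y_hat, y)
    for y_hat_m in peer_preds:
        loss = loss + alpha * seg_loss(y_hat_m, y)
        loss = loss + beta * dice_loss(y_hat, y_hat_m)   # symmetric Dice MKD term
    return loss

y = torch.randint(0, 2, (1, 1, 64, 64)).float()
y_hat = torch.rand(1, 1, 64, 64)
peers = [torch.rand(1, 1, 64, 64) for _ in range(4)]
print(total_loss(y, y_hat, peers).item())
```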
§ EXPERIMENTS
Datasets and Evaluation Metrics: We study 4 skin lesion segmentation datasets collected from varied sources: ISIC 2018 (ISIC) <cit.>, Dermofit Image Library (DMF) <cit.>, Skin Cancer Detection (SCD) <cit.>, and PH2 <cit.>, which contain 2594, 1300, 206, and 200 samples, respectively. To facilitate a fairer performance comparison across datasets, as in <cit.>, we only use the 1212 images from DMF that exhibit similar lesion conditions to those in the other datasets. We perform 5-fold cross-validation and utilize Dice and IOU metrics for evaluation as in <cit.>.
Implementation Details: We adopt 3 training paradigms: separate (ST), joint (JT), and multi-domain adaptive training (MAT), described in Section <ref>, to train all the models from scratch on the skin datasets. Images are resized to 256 × 256 and then augmented through random scaling, shifting, rotation, flipping, Gaussian noise, and brightness and contrast changes. The encoding transformer blocks' channel dimensions are [64, 128, 320, 512] (Fig. <ref>-a). We use two transformer layers in each transformer block and set the number of heads in MHSA to 8. The hidden dimensions of the CNN bridge and auxiliary peers are 1024 and 512, respectively. We deploy models on a single TITAN V GPU and train them for 200 epochs with the AdamW <cit.> optimizer, a batch size of 16 (ensuring 4 samples from each dataset), and an initial learning rate of 1 × 10^-4, which is adjusted by a linear decay scheduler with step size 50 and decay factor γ = 0.5.
Comparing Against BASE: In Table <ref>-a,b, compared with BASE in ST, BASE in JT improves the segmentation performance on small datasets (PH2 and SCD) but at the expense of diminished performance on larger datasets (ISIC and DMF). This is expected given the non-negligible inter-domain heterogeneity between skin lesion datasets, as found by Bayasi et al. <cit.>. The above results demonstrate that shared knowledge in related domains facilitates training a ViT on small datasets while, without a well-designed multi-domain algorithm, causing negative knowledge transfer (NKT) due to inter-domain heterogeneity, i.e., the model's performance decreases on other datasets. Meanwhile, MDViT fits all the domains without NKT and outperforms BASE in ST by a large margin, significantly increasing Dice and IOU on SCD by 6.4% and 10.16%, showing that MDViT smartly selects valuable knowledge when given data from a certain domain. Additionally, MDViT outperforms BASE in JT across all the domains, with average improvements of 0.82% on Dice and 1.4% on IOU.
Comparing Against State-of-the-Art (SOTA) Methods: We conduct experiments on SOTA data-efficient MIS ViTs and multi-domain learning methods. Previous MIS ViTs mitigated the data-hunger in one dataset by adding inductive bias, e.g., SwinUnet <cit.>, UTNet <cit.>, BAT <cit.>, TransFuse <cit.>, and Swin UNETR <cit.>. We implement ResNet-34 as the backbone of BAT for a fair comparison (similar model size). As illustrated in Table <ref>-a,b,c, these SOTA models are superior to BASE in ST. This is expected since they are designed to reduce data requirements. Nevertheless, in JT, these models also suffer from NKT: they perform better than models in ST on some datasets, like SCD, and worse on others, like ISIC. Finally, MDViT achieves the best segmentation performance in average Dice and IOU without NKT and has the best results on SCD and PH2. Fig. <ref> shows MDViT's excellent performance on ISIC and DMF and that it achieves the closest results to ground truth on SCD and PH2. More segmentation results are presented in the supplementary material. Though BAT and TransFuse in ST have better results on some datasets like ISIC, they require extra compute resources to train M models as well as an M-fold increase in memory requirements. The above results indicate that domain-shared knowledge is especially beneficial for training on relatively small datasets such as SCD.
We employ the two fixed-size (i.e., independent of M) multi-domain algorithms proposed by Rundo et al. <cit.> and Wang et al. <cit.> on BASE. We set the number of parallel SE adapters in <cit.> to 4. In Table <ref>-b,d, MDViT outperforms both of them on all the domains, showing the efficacy of MDViT and that multi-domain methods built on ViTs might not perform as well as on CNNs. We also apply the domain-specific normalization <cit.> to BASE and MDViT to get BASE^† and MDViT^†, respectively. In Table <ref>-d, BASE^† confronts NKT, which lowers the performance on DMF compared with BASE in ST, whereas MDViT^† not only addresses NKT but also outperforms BASE^† on average Dice and IOU.
Ablation Studies and Plug-in Capability of DA: We conduct ablation studies to demonstrate the efficacy of DA, MKD, and auxiliary peers. Table <ref>-b reveals that using one-direction knowledge distillation (KD) or either of the critical components in MDViT, i.e., DA or MKD, but not together, could not achieve the best results. Table <ref>-c exemplifies that, for building the auxiliary peers, our proposed MLP architecture is more effective and has fewer parameters (1.6M) than DeepLabv3's decoder <cit.> (4.7M) or BASE's decoding layers (10.8M). Finally, we incorporate DA into two ViTs: TransFuse and DosViT (the latter includes the earliest ViT encoder <cit.> and a DeepLabv3 decoder). As shown in Table <ref>-a,b, DA can be used in various ViTs but is more advantageous in MDViT, which has more transformer blocks in the encoding and decoding process.
§ CONCLUSION
We propose a new algorithm to alleviate vision transformers (ViTs)' data-hunger on small datasets by aggregating valuable knowledge from multiple related domains. We constructed MDViT, a robust multi-domain ViT leveraging novel domain adapters (DAs) for negative knowledge transfer mitigation and mutual knowledge distillation (MKD) for better representation learning. MDViT is fixed-size, i.e., its model size at inference time remains constant even as more domains are added. The experiments on 4 skin lesion segmentation datasets show that MDViT outperformed SOTA data-efficient medical image segmentation ViTs and multi-domain learning methods. Our ablation studies and the application of DA to other ViTs show the effectiveness of DA and MKD and DA's plug-in capability.
http://arxiv.org/abs/2307.03090v1 | published 2023-07-06 | q-fin.RM

A cohort-based Partial Internal Model for demographic risk

Francesco Della Corte (francesco.dellacorte1@unicatt.it), Gian Paolo Clemente (gianpaolo.clemente@unicatt.it), Nino Savelli (nino.savelli@unicatt.it)
Department of Mathematics for Economic, Financial and Actuarial Sciences, Università Cattolica del Sacro Cuore, Milano
We investigate the quantification of demographic risk in a framework consistent with the market-consistent valuation imposed by Solvency II. We provide compact formulas for evaluating inflows and outflows of a portfolio of insurance policies based on a cohort approach. In this context, we maintain the highest level of generality in order to consider both traditional policies and equity-linked policies, and we therefore propose a market-consistent valuation of the liabilities. In the second step we evaluate the Solvency Capital Requirement for the idiosyncratic risk, linked to accidental mortality, and for the systematic risk, also known as trend risk, proposing a formal closed formula for the former and an algorithm for the latter. We show that accidental volatility depends on the intrinsic characteristics of the policies of the cohort (Sums-at-Risk), on the age of the policyholders and on the variability of the sums insured; trend risk depends both on accidental volatility and on the longevity forecasting model used.
§ INTRODUCTION
In recent years, there has been a trend towards market-consistent valuation of assets and liabilities in the insurance field. Both the Solvency II Directive <cit.> and the IFRS accounting standards <cit.> define a market-consistent valuation of technical liabilities. Market-consistent valuation indeed plays a crucial role in regulatory compliance and financial solvency assessment for insurance companies. Regulators require insurers to value their liabilities using market-consistent methods to ensure accurate and reliable financial reporting.
In particular, assessing demographic risk is an important aspect of market-consistent valuation for life insurance companies. Demographic risk refers to the uncertainty and variability associated with demographic factors such as mortality rates, longevity, and morbidity. It is a crucial component in valuing insurance liabilities accurately, as these factors have a significant impact on the timing and amount of future policy benefits and claims obligations. The assessment of demographic risk involves incorporating appropriate demographic assumptions and models into the valuation process. These assumptions are typically based on historical data, expert judgment, and relevant population statistics. The aim is to capture the inherent variability and uncertainty in demographic factors to produce more realistic and reliable valuation outcomes.
In the literature, a common approach to assess capital requirement for demographic risk is through stochastic modelling and several contributions have been provided (see, e.g., <cit.>).
Several papers have also focused on market valuation in life insurance. In particular, Dhaene et al. (see <cit.>) introduce a fair valuation of liabilities related to a portfolio in a single-period framework, such that it is both mark-to-market for any hedgeable part of a claim and mark-to-model for any claim that is independent of financial market evolutions.
<cit.> and <cit.> focus instead on market-consistent valuation in continuous time. A valuation of some life insurance contracts in a stochastic interest rate environment, taking into account the default risk of the underlying insurance company, is given in <cit.>. <cit.> consider a typical participating policy, showing that it can be decomposed into a risk-free bond element, a bonus option, and a surrender option. A dynamic model is then constructed in which these elements can be valued separately using contingent claims analysis. <cit.> propose a valuation framework based on replication over multiple 1-year time periods by a periodically updated portfolio of assets.
The mathematical framework that leads to market-consistent values for insurance liabilities is presented in detail in <cit.> and aspects related to Solvency topics are treated in <cit.>. Approaches based on two and three steps procedure have been also provided for market-consistent valuation of liabilities. <cit.> combine quadratic hedging with application of a risk measure on the residual liability, to obtain a cost-of-capital margin. A three-step hedge-based valuation for the valuation of hybrid claims is given in <cit.>.
However, little attention has been paid to combining market-consistent valuation with the assessment of the capital requirement for demographic risk. By assessing demographic risk within the framework of market-consistent valuation, insurance companies can better understand and manage their exposure to longevity risk, mortality risk, and other demographic uncertainties. This enables insurers to make informed decisions regarding pricing, reserving, and risk management strategies.
Therefore, we propose here a novel methodology for assessing the capital requirement for mortality and longevity risk of linked policies. To this end, starting from the framework defined in <cit.> and <cit.>, we consider a portfolio of homogeneous policies and we provide a method for the evaluation of idiosyncratic and trend risks that also considers financial effects. We show that accidental volatility is mainly related to the sum-at-risk of the considered contracts, the age of the policyholders and the variability of the insured sums. Trend risk is instead related to the accidental volatility and to the volatility of the model used for forecasting future longevity behaviour. Furthermore, we use a matrix approach that allows us to simulate a large number of scenarios in an extremely short time, in order to obtain a precise and robust estimate of the risk-based capital.
A numerical application is developed in order to provide additional insights about main drivers that affect capital requirement.
The paper is organized as follows. In Section 2 we introduce some preliminary aspects of the model. In Section 3, we outline the mathematical framework of the model and in Section 4, we present the Cohort Valuation Portfolio using a matrix approach. Moving on to Section 5, we introduce the stochastic model designed to identify and quantify idiosyncratic and trend risks. Lastly, in Section 6, we provide a numerical example with a specific emphasis on the assessment of the Solvency Capital Requirement in line with Solvency II rules.
§ PRELIMINARIES
We indicate with Ω a given set, then with ℱ a family of subsets of Ω with usual desired properties (see e.g., <cit.>), hence ℱ is a σ-algebra and (Ω, ℱ) is a measurable space.
Considering a generic fixed positive number n, we assume that for each t∈[0,n] there is a σ-algebra; the collection of σ-algebras ℱ_t is a filtration, denoted with 𝔽. We define with P the probability measure that, to every set A∈ℱ, assigns a number in [0,1] (i.e., P(A)) coherently with real-world evidence.
The starting point coincides with the definition of the vector of the cash flows, generally indicated with X (random variables are indicated with capital letters, while deterministic values are indicated with small letters):

X = (X_0, X_1, ..., X_n) ∈ L^2_{n+1}(P,𝔽)

where L^2_{n+1}(P,𝔽) is a Hilbert space with scalar product ⟨·,·⟩ such that:

E[∑_{t=0}^n X^2_t] < ∞ for all X ∈ L^2_{n+1}(P,𝔽)

E[⟨X,Y⟩] = E[∑_{t=0}^n X_t · Y_t] < ∞ for all X, Y ∈ L^2_{n+1}(P,𝔽)

||X|| = ⟨X,X⟩^{1/2} < ∞ for all X ∈ L^2_{n+1}(P,𝔽)
For several purposes, contracts are grouped in cohorts in the actuarial practice (see, e.g., market consistent embedded value, IFRS, solvency capital requirement). Therefore, we consider here the following definition of cohort.
We define a cohort as a set of policyholders who have underwritten the same contract and who share the same characteristics. The only difference within a single cohort regards the sums insured. It is therefore assumed that the survival (and death) of individual policyholders are independent of each other and that all those who belong to the same cohort have the same survival probability.
At the inception date (i.e., t = 0), we consider a number of l_0 policyholders in the cohort and we denote with s_0=[s_k,0] (k=1,2,...,l_0) the vector of the sums insured of the policyholders. At the inception, each policyholder buys s_k,0 shares of the portfolio by paying the necessary premium to build the replicating portfolio.
At each time period t (with t>0), the information available is represented by ℱ_t. Therefore, the stochastic vector of the sums insured at time t is defined as follows:
S_t = [s_{1,0}, s_{2,0}, ..., s_{l_0,0}]^⊺ ∘ [𝕀_{1,0}^L, 𝕀_{2,0}^L, ..., 𝕀_{l_0,0}^L]^⊺ ∘ ... ∘ [𝕀_{1,t-1}^L, 𝕀_{2,t-1}^L, ..., 𝕀_{l_0,t-1}^L]^⊺

where with "∘" we indicate the Hadamard product (also known as element-wise product) and the generic 𝕀_{k,t-1}^L is a dichotomic ℱ_t-measurable random variable. It assumes the value 1 if the k-th policyholder survived in (t-1,t], 0 otherwise.
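A minimal NumPy sketch of this recursion, under an assumed constant annual survival probability, reads:

```python
# Minimal sketch of S_t: element-wise (Hadamard) products of the initial sums
# insured with the survival indicators of each past year. The cohort size and
# survival probability are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
l0, t, p = 10, 3, 0.98              # cohort size, valuation time, annual survival prob.
s0 = rng.uniform(50, 150, size=l0)  # sums insured at inception

S_t = s0.copy()
for _ in range(t):                  # one Bernoulli indicator per policyholder per year
    survived = rng.binomial(1, p, size=l0)
    S_t = S_t * survived            # Hadamard product: zeroes out deceased policyholders

print(S_t)  # entries are s_k,0 if policyholder k survived up to t, else 0
```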
[Valuation functional and deflators]
We assume that 𝒬 : L^2_n+1(P,𝔽) →ℝ is a linear, positive, continuous and normalized functional on L^2_n+1(P,𝔽) (hence, 𝒬 is a valuation functional).
For more information on the valuation functional 𝒬, we refer to <cit.> and <cit.>. However, we point out that 𝒬_t(X) is a monetary value, ℱ_t-measurable and therefore stochastic at time 0.
Taking advantage of the Riesz–Fréchet representation theorem (see Theorem 2.5 in <cit.>), there exists a φ∈L^2_n+1(P,𝔽) such that for all X∈L^2_n+1(P,𝔽) we have
𝒬_0(X_(0)) = (1/φ_0) E[∑_{t=0}^n φ_t X_t] = E[⟨(φ_(0))^⊺, X_(0)⟩]
Notice that we use the notation X_(t) to indicate a generic vector X whose elements are (X_s)_{s=t,t+1,...,n} (this notation is taken from <cit.>).
The vector φ is the state price deflator, with the usual properties of being 𝔽-adapted and a (P,𝔽)-martingale; it has square-integrable components, it is normalized (φ_0 = 1), its components are strictly greater than 0 and, where the market is complete, it is unique.
Since the deflators assign market-consistent values to the cashflows X_h, with h∈[1,n], it is possible to represent the reserves at time t, with t∈[0,h-1] for outstanding cashflows as
ℛ_t^(h) = 𝒬_t(X_(h)) = (1/φ_t) E[∑_{s=h}^n φ_s X_s | ℱ_t]
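For illustration, here is a Monte Carlo sketch of 𝒬_0 with a deterministic deflator φ_t = (1+i)^{-t}, a special case used only to fix ideas; the cash-flow distribution and all parameters are arbitrary assumptions.

```python
# Monte Carlo sketch of Q_0(X) = E[<phi, X>] with a flat-rate deflator phi_t = (1+i)^-t.
import numpy as np

rng = np.random.default_rng(1)
n, n_sims, i = 5, 100_000, 0.02
phi = (1 + i) ** -np.arange(n + 1)           # deterministic deflator, phi_0 = 1

X = rng.normal(loc=10.0, scale=2.0, size=(n_sims, n + 1))  # simulated cash flows
Q0 = (X * phi).sum(axis=1).mean()            # estimate of E[sum_t phi_t X_t]
print(round(Q0, 3))                          # ~ 10 * sum_t (1+i)^-t ~ 57.13
```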
Article 76 of Directive 2009/138/EC (usually called Solvency II, see <cit.>) requires that insurance and reinsurance undertakings hold technical provisions measured at fair value, i.e., "the current amount insurance and reinsurance undertakings would have to pay if they were to transfer their insurance and reinsurance obligations immediately to another insurance or reinsurance undertaking".
In the literature, many authors have dealt with this topic in life insurance. Ante litteram, Brennan & Schwartz (see <cit.>) evaluated the equilibrium pricing of equity-linked life insurance policies with an asset value guarantee. Dhaene et al. (see <cit.>) investigated the fair value of liabilities intended as both a market-consistent valuation for hedgeable claims (or parts of them) and an actuarial valuation for non-hedgeable claims. Delong et al. (see <cit.> and <cit.>) extended market-consistent and actuarial valuation to continuous time. We assume here that 𝒬 is a valuation functional (for further details see Definition <ref> or <cit.>); hence, for a given state price deflator φ, the process {𝒬_t(X)}_{t=0,...,n} is consistent with an arbitrage-free valuation scheme.
In conclusion, we highlight that φ-consistency implies that (φ_t𝒬_t(X_h))_t=0,...,n is a (P,𝔽)-martingale.
[Independent split of filtrations]
As in <cit.> and <cit.>, we factorize the filtered probability space (Ω, ℱ, P, 𝔽) into a product space to obtain an independent decoupling:
𝕋=(𝒯_t)_t=0,...,n
𝔾=(𝒢_t)_t=0,...,n
where formula (<ref>) refers to the filtration of insurance technical events and formula (<ref>) refers to the filtration of financial events. We assume that 𝕋 and 𝔾 are independent w.r.t the probability measure P and that, for every t, ℱ_t is generated by 𝒯_t and 𝒢_t, hence 𝒯_t, 𝒢_t ⊂ℱ_t for every t.
Therefore, the state price deflator φ has a product structure
φ_t=φ_t^𝕋·φ_t^𝔾
§ THE MODEL FRAMEWORK
[Model assumptions]
We consider now an equity-linked endowment policy with constant annual premiums that guarantees the beneficiary a stochastic amount if the insured dies and a capital if the policyholder survives up to maturity t=n. For the sake of simplicity, we assume the same structure of guarantee for both benefits, but the model can be easily extended to different cases.
We build a model framework coherent with the market-consistent valuation of Solvency II and we assume:
* In every t, with t∈[0,n-1], the undertaking collects the premiums at the beginning of the year according to the cohort who survived at time t and to the sums insured;
* Claims that occur in the period (t,t+1] depend on the realizations of i.i.d. Bernoulli random variables and are paid at the end of the one-year period.
Considering a generic valuation instant t, we present a definition of the future incoming and outgoing cash flows that is consistent with the aforementioned model assumptions, in particular that premiums are paid at the beginning of each time span and benefits are paid at the end. We denote this quantity with X_(t) = [X_τ]_{τ=t,...,n}, where each element X_τ is defined as follows:
X_τ =
  -X_τ^in,            if τ = t,
  X_τ^out - X_τ^in,   if τ > t,
where
X_τ^out=⟨(S_τ-1)^⊺,Λ^B_τ-1,U^out, τ_τ⟩
and
X_τ^in=⟨(S_τ)^⊺,1,U^in, τ_τ⟩
In formula (<ref>), S_{τ-1} is an l_0×1 vector of insured sums as specified in Definition <ref>, and Λ_{τ-1}^B is an l_0×1 𝕋-adapted vector of insurance technical variables whose elements are 𝕀_{k,τ-1}^D with k∈[1,l_0]. Because of Definition <ref>, they are i.i.d. Bernoulli random variables with parameter equal to q_{x+τ-1}, i.e., the (realistic) death probability of the policyholders.
We assume that (U_τ^out, t)_t∈[0,n] is a 𝔾-adapted stochastic process and represents the process of the financial portfolio 𝒰^out_τ used to replicate the outflow at time τ; moreover, we specify that the superscript t denotes that it is the price of 𝒰^out_τ at time t and, lastly, that the assumption is completely identical for (U_τ^in, t)_t∈[0,n].
In case the elements of S_{τ-1} belong to {0,1}, the term ⟨(S_{τ-1})^⊺, Λ^B_{τ-1}⟩ represents the r.v. number of deaths in the period (τ-1,τ] and is consistent with the formulation provided in <cit.>.
As regards to formula (<ref>), 1 is the l_0×1 unitary vector and U_τ^in, τ is a 𝔾-adapted stochastic process related to the process of the financial portfolio 𝒰^in_τ used to replicate inflows collected at time τ.
As in <cit.> and <cit.>, we are assuming a product structure of insurance cash flows X, splitting them into a hedgeable 𝒢_t-measurable part and a non-hedgeable 𝒯_t-measurable part.
It is therefore possible to extend formula (<ref>) to the cohort context to calculate the reserve of outstanding premiums and liabilities at a generic time t.
First of all, let us denote with 𝕀_{k,t}^L a Bernoulli random variable with parameter p_{x+t}, equal to the second-order annual survival probability of a policyholder aged x at inception. The r.v. assumes the value 1 if policyholder k survives; moreover, we assume policyholders are i.i.d. Consequently, we have the r.v. 𝕀_{k,t}^D = 1 - 𝕀_{k,t}^L for describing the death of the policyholder.
Therefore, we define the mathematical reserves for outstanding premiums and liabilities at time t as

ℛ_t = 𝒬_t(X_(t))
= ⟨E(S_t|𝒯_t)^⊺, (1/φ_t^𝕋) E[(φ_(t+1)^𝕋 ∘ Λ_(t)^L ∘ Λ_(t)^B)|𝒯_t], (1/φ_t^𝔾) E[(φ_(t+1)^𝔾 ∘ U_(t+1)^out)|𝒢_t]⟩
- ⟨E(S_t|𝒯_t)^⊺, (1/φ_t^𝕋) E[(φ_(t)^𝕋 ∘ Λ_(t)^L)|𝒯_t], (1/φ_t^𝔾) E[(φ_(t)^𝔾 ∘ U_(t)^in)|𝒢_t]⟩
where Λ_(t)^L = [λ^L_{(t),k,τ}] and Λ_(t)^B = [λ^B_{(t),k,τ}], with k = 1,...,l_0 and τ = 1,...,n-t, are l_0 × (n-t) matrices that consider the survival of the policyholder and the unitary benefit in an endowment contract, respectively. (It is possible to rewrite Λ_(t)^B for other types of policies. For instance, for a Term Insurance the elements of the last column (i.e., τ = n-t) would be 𝕀_{k,n-1}^D. For a Pure Endowment, all the elements of the matrix would be equal to 0, except those of the last column, equal to 𝕀_{k,n-t}^L.)
We define indeed

λ^L_{(t),k,τ} =
  1,                            if τ = 1,
  ∏_{h=0}^{τ-2} 𝕀_{k,t+h}^L,    if τ > 1,

and

λ^B_{(t),k,τ} =
  𝕀_{k,t+τ-1}^D,   if τ < n-t,
  1,               if τ = n-t.
Therefore, notice that the Hadamard product (Λ_(t)^L ∘ Λ_(t)^B) results in an l_0 × (n-t) indicator matrix, whose (stochastic) elements assume the value 1 if the generic policyholder is entitled to the benefit at the time specified by the column of the matrix. With a slight abuse of notation, we specify that φ^𝕋_{(t+1)} is also an l_0 × (n-t) matrix, whose columns are composed of the same deflators.
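The following sketch builds Λ^L, Λ^B and their Hadamard product for an endowment with simulated survival indicators; cohort size, horizon and probabilities are illustrative assumptions.

```python
# Illustrative construction of Lambda^L and Lambda^B for an endowment, following
# the piecewise definitions above (valuation time t, maturity n).
import numpy as np

rng = np.random.default_rng(2)
l0, t, n, p = 4, 0, 5, 0.97
I_L = rng.binomial(1, p, size=(l0, n - t))        # I^L_{k, t+h}, h = 0..n-t-1
I_D = 1 - I_L

lam_L = np.ones((l0, n - t), dtype=int)
for tau in range(2, n - t + 1):                    # lam^L = product of past survivals
    lam_L[:, tau - 1] = I_L[:, : tau - 1].prod(axis=1)

lam_B = np.empty((l0, n - t), dtype=int)
lam_B[:, :-1] = I_D[:, : n - t - 1]                # death benefit before maturity
lam_B[:, -1] = 1                                   # benefit paid at maturity

# (lam_L * lam_B)[k, tau-1] == 1 iff policyholder k receives a benefit at time t+tau
print(lam_L * lam_B)
```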
§ THE COHORT VALUATION PORTFOLIO
The purpose of this section is to show the construction of a valuation portfolio (VaPo) and to obtain its value. Given a certain series of future cash flows, a Valuation Portfolio is a portfolio of financial instruments that replicates those specific cash flows. Therefore, we present the steps for building a VaPo whose final result is consistent with the cohort approach.
* Choice of financial instruments: The first step concerns the choice of the financial basis: with reference to formula (<ref>), we are interested in two groups of financial instruments, those replicating outflows and those replicating inflows. As regards the former, the choice depends exclusively on the value of the outflows: in the most generic way, we define with 𝒰^out_t the portfolio of financial instruments that replicates the outflow X_t^out. The portfolio 𝒰^out_t can be understood as a linear combination, with weights y^out_{i,t}, of financial instruments available on the market ℳ:
𝒰^out_t=∑_i∈ℳy_i,t^out·𝒰^out_i,t
In an analogous way, with reference to the inflows, we have:
𝒰^in_t=∑_i∈ℳy_i,t^in·𝒰^in_i,t
* Determination of the number of portfolio shares to be held to replicate the cash flows: After defining the financial instruments that replicate outflows and premiums, it is necessary to identify the best estimate of the number of such instruments that the insurance company must hold to replicate the general future cash flows X_(t). In formulas:
X_(t) ↣ VaPo_t(X_(t)) = ⟨E(S_t|𝒯_t)^⊺, E[(Λ_(t)^L ∘ Λ_(t)^B)|𝒯_t], 𝒰_(t+1)^out⟩ - ⟨E(S_t|𝒯_t)^⊺, E[(Λ_(t)^L)|𝒯_t], 𝒰_(t)^in⟩
* The accounting principle to evaluate financial instruments: This last step presents an accounting principle which associates a monetary value with each financial instrument. Of all the possible accounting principles, we choose a fair value valuation consistent with the Solvency II framework, where each financial instrument is valued at the traded market value. We indicate this accounting principle with ℰ_t, t∈[0,n], and we define the accounting value of the cash flows as follows:
X_(t) ↣ 𝒬_t(X_(t)) = ℰ_t(VaPo_t(X_(t)))
= ⟨E(S_t|𝒯_t)^⊺, E[(Λ_(t)^L ∘ Λ_(t)^B)|𝒯_t], ℰ_t(𝒰_(t+1)^out)⟩ - ⟨E(S_t|𝒯_t)^⊺, E[(Λ_(t)^L)|𝒯_t], ℰ_t(𝒰_(t)^in)⟩
Introducing the values of financial instruments, we have:
𝒬_t(X_(t))=⟨E(S_t|𝒯_t)^⊺,E[(Λ_(t)^L∘Λ_(t)^B )|𝒯_t] , U_(t+1)^out, t⟩
-⟨E(S_t|𝒯_t)^⊺, E[(Λ_(t)^L)|𝒯_t],U_(t)^in, t⟩
In insurance practice it is very common to use an ad hoc demographic table for pricing. This table, called the first-order demographic basis, leads to a distorted probability (far from any best estimate) whose use allows the creation of an expected demographic profit. Considering a generic indicator 𝕀_{k,0}^L of the matrix Λ_(0)^L, we define
φ̃_1^𝕋 = φ_1^𝕋 / φ_0^𝕋

and

φ̃_1^𝕋(1) = {}_1p^*_x / {}_1p_x
φ̃_1^𝕋(0) = (1 - {}_1p^*_x) / (1 - {}_1p_x)
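As a small worked example of these distorted probabilities (the first-order p* and realistic p below are illustrative assumptions, not values from the paper):

```python
# Worked example of the probability-distorting deflator phi~_1^T.
p_star, p = 0.985, 0.990              # first-order vs. realistic survival probability
phi_surv = p_star / p                 # phi~_1^T(1): applied on survival
phi_death = (1 - p_star) / (1 - p)    # phi~_1^T(0): applied on death
print(round(phi_surv, 5), round(phi_death, 2))  # 0.99495, 1.5
# The death state is overweighted (>1), i.e., mortality is loaded prudently,
# which generates the expected demographic profit for death benefits.
```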
As is known in the literature (see, e.g., <cit.>), premiums of a policy are calculated as the expected present value of the benefits, computed with prudent technical bases (also called first-order bases).
In this paper we assume that the insurance company prices the policies of the cohort using a safety loading only with reference to the technical risk, i.e., the demographic one. In other words, the values of financial instruments in the protected Cohort VaPo are calculated through the market-consistent accounting principle ℰ. Typically, in the traditional insurance market, insurance companies ensure that default is not certain (ruin probability theorem) by inserting implicit safety loadings in the pricing phase. For instance, considering a traditional without-profit policy, it is possible to use deflators of the type
φ_t^𝔾=(1+i^*)^-t
where i^* is a return rate (first-order financial basis) that the company is likely to be able to earn on the capital market through the investment of premiums and mathematical reserves. In this paper, we propose a premium calculation structure where the undertaking sells the collateral at exactly the market price. The purpose is to assess the Solvency Capital Requirement linked to demographic risk, and the use of implicit safety loadings influences the risk profile of the undertaking. Our goal is to evaluate only the demographic risk (and therefore the profit), eliminating any additional source of profit of a purely financial nature.
The cohort VaPo protected, obtained by replacing the insurance technical risk by their probability distorted conditional expectations at time 0, is defined as follows:
X_(0) ↣ VaPo_0^prot(X_(0)) = ⟨(s_0)^⊺, E[(φ_(1)^𝕋 ∘ Λ_(0)^L ∘ Λ_(0)^B)|𝒯_0], 𝒰_(1)^out⟩ - ⟨(s_0)^⊺, E[(φ_(0)^𝕋 ∘ Λ_(0)^L)|𝒯_0], 𝒰_(0)^in⟩
Hence, by formula (<ref>), weights y_i,t^in are calculated as a solution of the equation
ℰ_0(VaPo_0^prot(X_(0)))=0
or, analogously
⟨(s_0)^⊺,E[(φ_(1)^𝕋∘Λ_(0)^L∘Λ_(0)^B )|𝒯_0] , U_(1)^out, 0⟩ =⟨(s_0)^⊺, E[(φ_(0)^𝕋∘Λ_(0)^L)|𝒯_0],U_(0)^in, 0⟩
We now present a definition of best estimate liabilities that is consistent with both the market-consistent valuation of Solvency II and with Assumptions <ref>.
We define the best estimate of liabilities of a cohort of endowment contracts with an annual premium as follows:
ℛ_t=𝒬_t(X_(t))= ℰ_t(VaPo_t(X_(t)))
=⟨E(S_t|𝒯_t)^⊺,E[(Λ_(t)^L∘Λ_(t)^B )|𝒯_t] , ℰ_t(𝒰_(t+1)^out)⟩
-⟨E(S_t|𝒯_t)^⊺, E[(Λ_(t)^L)|𝒯_t],ℰ_t(𝒰_(t)^in)⟩
The case of a single premium can be obtained as a specific case of formula (<ref>), where the second term is neglected.
Comparing formulas (<ref>) and (<ref>), net of the difference relating to the valuation instant, the absence of φ_(0)^𝕋 in formula (<ref>) is noticeable. The best estimate reserve is indeed defined as the expected present value of the cash flows, using the most up-to-date demographic information and market prices (under the no-arbitrage assumption).
§ DEMOGRAPHIC RISK AND SOLVENCY
In order to introduce the proposal for the quantification of the capital requirement related to demographic risk, some distinctive features of the legislation (Solvency II) are presented below, highlighting the aspects that influence the structural characteristics of the proposed model.
In particular, within the several sources of risk considered by the Solvency II regulation, we provide here a stochastic model aimed at assessing the capital requirement related to both longevity risk and mortality risk. These risks are defined by evaluating the risk of loss, or of adverse change, in the value of insurance liabilities, resulting from changes in the level, trend or volatility of mortality rates (see Art. 105 of <cit.>). Consistently with the legislation, we therefore present a model that focuses on these two risks of losses and unfavourable variations of insurance liabilities and that considers the effects of both longevity and mortality.
§.§ The model framework
Examining a generic period (t,t+1], as shown in Figure <ref>, we assume that, at the beginning of the year, the best estimate ℛ_t (see (<ref>)) and the deterministic premiums X_t^in are available. These amounts, properly invested, should cover the r.v. claims X_{t+1}^out and the new reserve ℛ_{t+1}.
The starting point therefore coincides with the quantity V_{t+1} defined as
V_t+1= ⟨E(S_t|𝒯_t)^⊺,E[(Λ_(t)^L∘Λ_(t)^B )|𝒯_t] , U_(t+1)^out, t+1⟩
-⟨E(S_t|𝒯_t)^⊺, E[(Λ_(t+1)^L)|𝒯_t],U_(t+1)^in, t+1⟩
Therefore, we assume we buy VaPo_t(X_(t)) at time t at its market price ℛ_t. In addition to this quantity, we collect the inflow X_t^in from the policyholders. At the end of the time span, i.e., at t+1, the total market value is V_{t+1}.
We define the Claims Development Result (CDR) CDR_t+1 as
CDR_t+1=V_t+1-X_t+1^out-ℛ_t+1
We observe that in formula (<ref>) the available financial information is described by 𝒢_t+1, while the insurance technical one is represented by 𝒯_t+1. Moreover, within V_t+1 the expected value is calculated on the basis of 𝒯_t, while in ℛ_t+1 the expected value of the technical risk is conditioned on the information contained in 𝒯_t+1.
We then introduce the quantity ℛ̂_t+1
ℛ̂_t+1= ⟨E(S_t+1|𝒯_t+1)^⊺,E[(Λ_(t+1)^L∘Λ_(t+1)^B )|𝒯_t] , ℰ_t+1(𝒰_(t+2)^out)⟩
-⟨E(S_t+1|𝒯_t+1)^⊺, E[(Λ_(t+1)^L)|𝒯_t],ℰ_t+1(𝒰_(t+1)^in)⟩
that represents the best estimate in t+1 under the assumption that the technical risk is conditioned on the information contained in 𝒯_t.
Hence, we split the CDR in two components as follows:
CDR_t+1^Idios=V_t+1-X_t+1^out-ℛ̂_t+1
and
CDR_t+1^Trend=ℛ̂_t+1-ℛ_t+1
It could be noticed that two sources of uncertainty of a demographic nature are present. The first is given by the accidental mortality of the cohort, related to the volatility due to the mortality/longevity probabilities (i.e., the idiosyncratic risk). The second source is related to the fact that new information available during the year could lead to a modification of the best estimate at the end of the year. For instance, this aspect occurs when a trend in mortality or longevity rates is observed. This risk will be referred to as trend risk.
§.§ The idiosyncratic risk
We focus here on the idiosyncratic risk, defined as the risk of monetary loss resulting from changes in the level or volatility of mortality rates. It concerns both policies whose benefit is linked to the death of the policyholder and policies whose benefit is linked to the survival of the policyholder.
In a one-year time horizon, we define the idiosyncratic profit and loss extending formula (<ref>):
CDR_t+1^Idios= ⟨E(S_t|𝒯_t)^⊺,E[(Λ_(t)^L∘Λ_(t)^B )|𝒯_t] , U_(t+1)^out, t+1⟩
-⟨E(S_t|𝒯_t)^⊺, E[(Λ_(t+1)^L)|𝒯_t],U_(t+1)^in, t+1⟩
-⟨(S_t)^⊺,Λ^B_t,U^out, t+1_t+1⟩
-⟨E(S_t+1|𝒯_t+1)^⊺,E[(Λ_(t+1)^L∘Λ_(t+1)^B )|𝒯_t] , U_(t+2)^out, t+1⟩
+⟨E(S_t+1|𝒯_t+1)^⊺, E[(Λ_(t+1)^L)|𝒯_t],U_(t+1)^in, t+1⟩
As usual in a life insurance balance-sheet (see, e.g., <cit.> and <cit.>) the profit is here defined as the sufficiency of initial best estimate and premiums collected to cover the payment of benefits, due to the claims occurred during the year, and the assessment of the final best estimate. Notice that in the idiosyncratic component we want to catch only the risk due to the volatility of deaths in the year (t,t+1], that affects both the payments in t+1 and the sums insured used in the new best estimates calculation.
Therefore, we assume that the insurance company does not change its assumptions on future mortality, considering any change in mortality during the year to be accidental. Indeed, the effect on the final best estimate due to a revision of demographic assumptions is devoted to the trend risk.
We provide in the following Theorem a compact formulation of the r.v. idiosyncratic risk. The proof is reported in <ref>.
Considering a generic cohort of policyholders consistent with Definition <ref>, if the premiums are paid in advance, the claims are paid at the end of the occurrence year and the best estimates at the end of the year are calculated on the same demographic and financial bases used at the beginning of the year, it is possible to define the idiosyncratic risk as
CDR_t+1^Idios=⟨[ E(S_t∘Λ_t^B |𝒯_t)-E(S_t∘Λ_t^B|𝒯_t+1)]^⊺,1·η_t+1⟩
where η_{t+1} = (U_{t+1}^{out,t+1} - β_{t+1}), with β_{t+1} the best estimate rate of a policyholder of the cohort calculated in t+1 with the technical bases estimated in t; analogously, β_{t+1} is the generic element of β_{t+1} = ⟨E(Λ_(t+1)^L ∘ Λ_(t+1)^B |𝒯_t), U_(t+2)^{out,t+1}⟩ - ⟨E(Λ_(t+1)^L|𝒯_t), U_(t+1)^{in,t+1}⟩.
According to formula (<ref>), if the accidental (idiosyncratic) mortality of the year is higher than expected, the insurer will have to cope with a greater amount of claims U_t+1^out, t+1. In this scenario, the firm will release best estimate liabilities which unit rate for each policyholder is β_t+1.
It is noteworthy that we can interpret the term η_{t+1} as the expected value at time t of the Sum-at-Risk rate at time t+1 of a policyholder. Notice that this result generalizes the simplified case of non-equity-linked contracts, where U_{t+1}^{out,t+1} would be equal to one (see, e.g., <cit.>, <cit.> and <cit.>).
We can focus now on the main cumulants of CDR_t+1^Idios. In particular, using the tower property:
E(CDR_t+1^Idios|ℱ_t)=0
Therefore, on average, the best estimate at time t, added to the premiums, is sufficient to meet the payments of the expected benefits and the provision of new best estimates calculated on the same demographic basis. Figure <ref> shows that, if the accidental mortality of the year differs from the expected one (the sign of the variation in mortality that produces a loss depends on the type of contract), a profit or loss is created.
In this way, the idiosyncratic profit definition presented in formula (<ref>) is perfectly consistent with the concept of Claims Development Result, and the result is consistent with <cit.> and <cit.>.
Considering the random variable defined in formula (<ref>), we prove (see <ref> for details) that the variance of the idiosyncratic profit is equal to

Var(CDR_{t+1}^Idios | ℱ_t) = (l_t · q_{x+t} · (1-q_{x+t}) · S̄_t^2) · E(η^2_{t+1} | 𝒢_t)

where S̄_t^j denotes the j-th raw moment of the ℱ_t-adapted vector S_t.
We therefore observe that the volatility of the idiosyncratic risk increases with the ageing of the cohort (only for cohorts with q_{x+t} < 0.5, therefore all ages except the extreme ones), the size of the cohort l_t and a higher variability within the sums insured. Additionally, it depends on the path of the expected Sum-at-Risk rate, which, as is well known, is related to the type of policies considered. For instance, it has a decreasing behaviour for an endowment contract and a convex one for a term insurance with annual premiums.
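The closed formula can be checked numerically; in the sketch below η_{t+1} is taken as deterministic (i.e., no financial uncertainty), which is a simplifying assumption, and all inputs are illustrative.

```python
# Numeric check of Var(CDR^Idios) = l_t * q * (1-q) * mean(S^2) * eta^2 (eta deterministic).
import numpy as np

rng = np.random.default_rng(3)
l_t, q, eta = 2_000, 0.01, 0.6
S = rng.uniform(50, 150, size=l_t)                       # sums insured

sd_closed = np.sqrt(l_t * q * (1 - q) * np.mean(S**2) * eta**2)

# Direct simulation: CDR = eta-weighted gap between expected and actual deaths
sims = np.array([
    ((q - rng.binomial(1, q, size=l_t)) * S).sum() * eta
    for _ in range(10_000)
])
print(round(sd_closed, 2), round(sims.std(), 2))          # the two should be close
```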
We define the generic risk measure (see <cit.>) as a function ς
ς:L^2(P) →ℝ^+; X →ς(X)
Desirable properties for a risk measure are normalization (ς(0)=0), translation invariance (ς(c+X)=c+ς(X)), positive homogeneity (if c>0, then ς(c·X)=c·ς(X)), subadditivity (ς(X+Y)≤ς(X)+ς(Y)), monotonicity (if X≤Y P-almost surely, then ς(X)≤ς(Y)), and expectation boundedness (ς(X)>E^P(-X)).
Although Value at Risk does not satisfy all the properties in Definition <ref>, following the Solvency II regulation (see Art. 102), the Solvency Capital Requirement can be computed with a partial internal model as follows:
ς(CDR_t+1^Idios)=-VaR_0.5%(CDR_t+1^Idios)
where VaR_0.5% stands for Value at Risk calculated with a confidence level equal to 99.5%.
Our aim is also to provide a possible undertaking-specific approach for quantifying the capital requirement due to idiosyncratic risk. To this end, adapting the formula provided for non-life premium and reserve risk, we define a factor-based structure for capital assessment. Knowing the characteristics of the distribution of CDR_{t+1}^Idios, it is therefore possible to write

SCR_L,Idios = f[CDR_{t+1}^Idios] · σ(CDR_{t+1}^Idios)

where f[CDR_{t+1}^Idios] is a multiplier of the standard deviation, appropriately calibrated as a function of the distribution characteristics of the r.v. idiosyncratic profit.
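The quantile computation itself is straightforward; the sketch below uses a placeholder skewed sample, chosen only to illustrate how the SCR and the implied σ-multiplier f are obtained.

```python
# Sketch of SCR = -VaR_0.5%(CDR): the opposite of the 0.5% quantile of the CDR sample.
import numpy as np

rng = np.random.default_rng(4)
g = rng.gamma(shape=4.0, scale=150.0, size=100_000)
cdr = g.mean() - g                          # zero-mean placeholder with a heavy loss tail
scr = -np.quantile(cdr, 0.005)              # VaR at the 99.5% confidence level
print(round(scr, 1), round(scr / cdr.std(), 2))   # capital and implied multiplier f
```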
§.§ The trend risk
Comparing the definitions of the whole demographic profit (see Figure <ref>) and the idiosyncratic profit (see formula (<ref>)), we have to consider the possibility that the best estimate at time t+1, given by ℛ_{t+1}, could differ from the reserve accounted in the idiosyncratic profit.
We define the random variable demographic profit (or loss) due to trend risk CDR_t+1^Trend as
CDR_t+1^Trend = ⟨E(S_t+1|𝒯_t+1)^⊺, ⟨E(Λ_(t+1)^L∘Λ_(t+1)^B|𝒯_t+1)-E(Λ_(t+1)^L∘Λ_(t+1)^B|𝒯_t), U_(t+2)^out, t+1⟩
-⟨E(Λ_(t+1)^L|𝒯_t+1)-E(Λ_(t+1)^L|𝒯_t),U_(t+1)^in, t+1⟩⟩
It is noteworthy that the Claims Development Result linked to the trend risk depends on two sources of uncertainty. First, since in t+1 the technical information 𝒯_{t+1} differs from that of the previous instant 𝒯_t, the insurance company can update (albeit to a limited extent) its future expectations. Secondly, as in the idiosyncratic risk context, the adjustment of technical expectations implies purchases or sales of shares in replicating portfolios: for this reason, trend risk also incorporates a share of financial risk.
In this context, the distribution of CDR_t+1^Trend is therefore influenced by both the accidental mortality of the year (given by S_t+1) and the revision of the demographic basis.
In conclusion, we highlight that since ℱ_t⊂ℱ_t+1, the tower property implies that
E(CDR_t+1^Trend|ℱ_t)=0
(SCR for Trend risk)
We define the Solvency Capital requirement for Trend risk as
SCR_L,Trend=ς(CDR_t+1^Trend)
The quantile of CDR_{t+1}^Trend can be computed through different methodologies. We propose a simple approach based on Monte Carlo simulation, capable of capturing the aforementioned dependencies between accidental mortality and trend risk.
We propose the following algorithm:
* Fit a projection model (see, e.g, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, etc.) to forecast expected mortality rates using train data available at time t;
* Calculate U_(t+2)^out, t+1 and U_(t+1)^in, t+1 simulating the underlying financial variables Z_t+1;
* For each policyholder k, simulate H times the r.v. 𝕀_k,t^L from i.i.d. Bernoulli distributions with parameter p_x+t given by the expected mortality rates obtained at step 1;
* Compute for each simulation the stochastic vector S_t+1 of the sums insured at time t+1;
* For each simulation, fit the same mortality model used at step 1 on a train set enriched with the additional information simulated under real-world probabilities;
* Using values at steps 2, 4 and 5, compute for each simulation CDR_t+1^h,Trend with h∈[1,H];
* Calculate formula (<ref>) as
SCR_L,Trend=-inf[CDR_t+1^h,Trend: F_CDR_t+1^Trend(CDR_t+1^h,Trend)>0.5%]
Therefore, having obtained the distribution of CDR_{t+1}^Trend through the H simulations, we calculate the quantity -VaR_{0.5%}(CDR_{t+1}^{h,Trend}) as the opposite of the lower extreme of the obtained vector, weighted by the probabilities of both S_{t+1} and Z_{t+1}. A stylized sketch of this procedure is given below.
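The following is a deliberately simplified, runnable skeleton of the algorithm: the projection model is a log-linear trend in q_x fitted by least squares (a stand-in for Lee-Carter), financial variables are suppressed, and the reserve sensitivity to the revised forecast is an arbitrary constant; all inputs are synthetic assumptions.

```python
# Simplified skeleton of the trend-risk Monte Carlo algorithm.
import numpy as np

rng = np.random.default_rng(5)

def fit_loglinear(years, qx):
    """Least-squares fit of log q_x = a + b * year (a stand-in for Lee-Carter)."""
    b, a = np.polyfit(years, np.log(qx), 1)
    return a, b

years = np.arange(1990, 2020)                       # train data available at time t
qx_hist = 0.012 * np.exp(-0.015 * (years - 1990)) * np.exp(rng.normal(0, 0.03, years.size))

a0, b0 = fit_loglinear(years, qx_hist)              # step 1: central projection
q_next = np.exp(a0 + b0 * 2020)                     # realistic one-year death probability
q_target = np.exp(a0 + b0 * 2025)                   # forecast driving the reserve

exposure, H, sensitivity = 50_000, 2_000, 1e8       # illustrative constants
cdr_trend = np.empty(H)
for h in range(H):
    deaths = rng.binomial(exposure, q_next)         # steps 3-4: one simulated year
    a1, b1 = fit_loglinear(np.append(years, 2020),  # step 5: refit on enriched data
                           np.append(qx_hist, deaths / exposure))
    # step 6 (stylized): reserve revision proportional to the forecast change
    cdr_trend[h] = -(np.exp(a1 + b1 * 2025) - q_target) * sensitivity

scr_trend = -np.quantile(cdr_trend, 0.005)          # step 7: -VaR_0.5%
print(round(scr_trend, 1))
```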
§ THE CASE STUDY
We develop here a case study in order to investigate the behaviour of the model in describing both idiosyncratic and trend risk. The characteristics of the cohort and of the contracts and some preliminary aspects are outlined in Section <ref>. Main results regarding idiosyncratic and trend risks are provided in Sections <ref> and <ref>, respectively.
§.§ Preliminaries
We summarize in Table 1 the main characteristics of the cohort analysed. In particular, we consider a cohort of 10,000 policyholders who have subscribed equity-linked endowment policies, each with its own specific sum insured. If the generic k-th policyholder dies in the time span (t-1,t], the beneficiary is entitled to receive a benefit equal to S_{k,t} · max(U_t/U_0, (1+i_gar)^t), where U_t is the market value of the underlying asset 𝒰 (the equity) at time t and i_gar is the minimum guaranteed rate.
Assuming U_0 = 1, the general payoff max(U_t/U_0, (1+i_gar)^t) can be hedged through the ownership of the equity 𝒰 and a put option 𝒫(𝒰, (1+i_gar)^t) written on 𝒰, with strike price equal to (1+i_gar)^t and maturity t.
According to the number of equities that the company must hold to form the replicating portfolio, we can define the following relation:
∑_t=1^n-1E(D_x+t|𝒯_0)+E(L_x+n|𝒯_0)=l_0
where D_x+t denotes the r.v. number of deaths within the cohort between t-1 and t and L_x+n the r.v. number of policyholders survived at maturity.
Therefore, s_{k,0} indicates the number of replicating portfolio shares due to the k-th policyholder. Hence, by buying exactly s_0 = ∑_{k=1}^{l_0} s_{k,0} equities, the insurance company will certainly be able to meet part of the liabilities; the risk of incurring losses concerns only the part of the outflows covered by put options. We represent the whole Hedging Portfolio in Table 2.
By assuming a constant risk-free flat rate over time, the prices of Zero Coupon Bonds can be easily determined. However, it is important to note that while the model presented in this paper is applicable to a wide range of inflows, equity-linked policies in insurance practice are typically financed either through a Single Premium or Recurrent Premiums. The utilization of Regular Annual Premiums would necessitate spreading the purchase cost of equities throughout the entire premium duration. In such a scenario, the insurance company would incur a substantial initial outflow before recovering the costs, which is unsustainable given the negative best estimate of the mathematical reserve. This situation could create an incentive for the policyholder to terminate the contract.
Furthermore, we assume an efficient market context: all relevant information is reflected in the prices of the underlying assets, there are neither dividends nor transaction costs, and prices are described by Geometric Brownian Motions (GBM). In particular,
dU_t=μ U_t dt+σ U_t dW_t
where W_t is a Wiener process. We calibrated the parameters of the GBM in order to obtain
E(U_t)=(1+15%)^t
CoV(U_t)=1
It was therefore possible to calculate the prices of put options at time t with maturity m with the well-known Black & Scholes formula.
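A minimal sketch of this calibration and pricing step follows. Since the text does not state at which horizon CoV(U_t)=1 is imposed, the sketch assumes it holds at the contract maturity n; the values of i_gar, r, n and t are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def gbm_params(n, growth=0.15, cov=1.0):
    """Calibrate (mu, sigma) so that E(U_t) = (1+growth)^t and CoV(U_n) = cov."""
    mu = np.log(1.0 + growth)                  # E(U_t) = exp(mu t) for a GBM
    sigma = np.sqrt(np.log(1.0 + cov**2) / n)  # CoV(U_t)^2 = exp(sigma^2 t) - 1
    return mu, sigma

def bs_put(S, K, r, sigma, T):
    """Black & Scholes price of a European put option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

# put hedging the guarantee at year t: strike (1+i_gar)^t, maturity t
i_gar, r, n, t = 0.02, 0.015, 10, 5            # illustrative values
_, sigma = gbm_params(n)
price = bs_put(S=1.0, K=(1 + i_gar)**t, r=r, sigma=sigma, T=t)
```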
§.§ Idiosyncratic risk results
In Table 3 we present the results of the simulation model aimed at obtaining the distribution of CDR_t+1^Idios at different time instants (displayed in Figure 3). We performed 1 million simulations for each instant of time, obtaining consistent results; the number of simulations was validated by verifying that the simulated characteristics of the distribution converge to the exact ones.
As shown before, the average of the distributions is centered on zero. With reference to the standard deviation, we notice two contrasting effects: on the one hand, volatility increases because the death probability grows over time; on the other hand, the expected sum-at-risk rate decreases over time. The results show that the mortality effect prevails, leading to an increase of the standard deviation over time.
A relevant positive skewness, quite stable over time, is also observed. As a consequence, the resulting capital requirement is equal to more than 3 times the volatility.
It is noteworthy that formula (<ref>) shows some common features with the factor-based approach provided for the assessment of idiosyncratic component of longevity risk in Quantitative Impact Study n.2 (see <cit.>). The SCR for idiosyncratic mortality risk was indeed defined in QIS2 as follows:
SCR^Id,QIS2=2.58·√(q·(1-q)·l)· Sum-at-Risk.
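In code, the QIS2 factor formula is a one-liner; the multiplier is left as a parameter so that the Normal-based 2.58 can be swapped for the multiplier 3 discussed below.

```python
import numpy as np

def scr_idiosyncratic_qis2(q, l, sum_at_risk, k=2.58):
    """Factor-based SCR of QIS2: k * sqrt(q (1-q) l) * Sum-at-Risk."""
    return k * np.sqrt(q * (1.0 - q) * l) * sum_at_risk
```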
The simulation framework based on the cohort approach therefore highlights two innovative aspects:
* The variability of the vector of the insured sums can be taken into account;
* The fact that the distributions of the Claims Development Results for Idiosyncratic Risk have positive skewness and are all leptokurtic (see Figure <ref>) imposes the use of a multiplier greater than the 2.58 obtainable under the assumption of a Normal distribution. A better proxy could be the multiplier equal to 3 used for the Premium & Reserve Risk of the Non-Life Underwriting Risk macro-module.
§.§ Trend Risk results
Figure 4 and Table 4 show the results of the simulation model aimed at quantifying the Claims Development Result for Trend Risk. For the sake of simplicity, future mortality rates have been obtained using the Lee-Carter model. This choice is related to the fact that at each simulation it is necessary to re-fit the mortality projection model, and the computation times are therefore particularly demanding, despite the matrix representation of the components. However, the approach can be tested with alternative models for forecasting mortality.
The results of the model highlight two phenomena. On the one hand, the volatility linked to the new reserve estimates is significantly lower than the idiosyncratic volatility. This phenomenon was foreseeable, since the sample used to estimate the initial table was based on data from 1872 to 2019; one additional year of observations can only have a limited effect on updating expectations. On the other hand, we observe that the standard deviation of the CDR for Trend Risk decreases over time. Indeed, any revision of expectations has an increasingly limited effect as the expiry of the policies approaches, because a change of expectations affects only a few remaining cash flows.
In conclusion, we observe that also for the CDR Trend, 3 times the standard deviation is a good proxy of the Solvency Capital Requirement.
§.§ Conclusions
In conclusion, the proposed model incorporates the market-consistent actuarial valuation proposed by Solvency II and effectively computes the key characteristics of both components of demographic risk: idiosyncratic and trend. The model shows an excellent performance when applied to a single cohort of policyholders. Additionally, it can be extended to assess the riskiness of portfolios composed of multiple cohorts, serving as a valuable model point.
One notable advantage of the model is its efficient computational framework, leveraging matrix notation to significantly reduce computation times. By utilizing this approach, actuarial calculations can be performed more swiftly, facilitating timely analysis and decision-making processes.
Furthermore, we emphasize the importance of considering the volatility of the sums insured, an element that is neglected by approaches based on classical models for forecasting mortality. These fluctuations have a substantial impact on both the idiosyncratic and trend distributions, and the inclusion of this aspect in the model allows a more accurate assessment of the associated risks.
The numerical analyses show the applicability and effectiveness of the model within the context of Solvency II. It offers valuable insights and practical tools for actuarial professionals to enhance their risk assessment and management capabilities.
§ PROOFS
We report here the proof of relation (<ref>) in Theorem <ref>.
Recalling formula (<ref>), it is possible to rewrite it by extracting the first terms from the cross products
V_t+1= ⟨E(S_t|𝒯_t)^⊺, E(Λ_t^B|𝒯_t),U_t+1^out, t+1⟩
+⟨E(S_t|𝒯_t)^⊺,E[(Λ_(t+1)^L∘Λ_(t+1)^B )|𝒯_t] , U_(t+2)^out, t+1⟩
-⟨E(S_t|𝒯_t)^⊺, E[(Λ_t+1^L)|𝒯_t],U_t+1^in, t+1⟩
-⟨E(S_t|𝒯_t)^⊺, E[(Λ_(t+2)^L)|𝒯_t],U_(t+2)^in, t+1⟩
By definition of S_t+1 (see Definition <ref>) and exploiting cross product algebra, ℛ̂_t+1 can be re-written as
ℛ̂_t+1= ⟨E(S_t+1|𝒯_t+1)^⊺, ⟨E(Λ_(t+1)^L∘Λ_(t+1)^B|𝒯_t), U_(t+2)^out, t+1⟩
-⟨E(Λ_(t+1)^L|𝒯_t),U_(t+1)^in, t+1⟩⟩
= ⟨E(S_t∘(1-Λ_t^B)|𝒯_t+1)^⊺, ⟨E(Λ_(t+1)^L∘Λ_(t+1)^B|𝒯_t), U_(t+2)^out, t+1⟩
-⟨E(Λ_(t+1)^L|𝒯_t),U_(t+1)^in, t+1⟩⟩
Substituting formulas (<ref>) and (<ref>) into formula (<ref>), it is possible to write CDR_t+1^Idios as
CDR_t+1^Idios = ⟨(E(S_t∘Λ_t^B|𝒯_t)-E(S_t∘Λ_t^B|𝒯_t+1))^⊺,(1· U_t+1^out, t+1)⟩
+⟨(E(S_t∘(1-Λ_t^B)|𝒯_t)-E(S_t∘(1-Λ_t^B)|𝒯_t+1))^⊺,(1·β_t+1)⟩
where β_t+1 is the generic element of the vector β_t+1, defined as
β_t+1=⟨E(Λ_(t+1)^L∘Λ_(t+1)^B|𝒯_t), U_(t+2)^out, t+1⟩-⟨E(Λ_(t+1)^L|𝒯_t),U_(t+1)^in, t+1⟩
β_t+1 represents the best-estimate rate of policyholder k (we specify that all elements of the vector are equal due to the i.i.d. assumption on policyholders' survival).
With simple algebra we obtain:
CDR_t+1^Idios = ⟨(E(S_t∘Λ_t^B|𝒯_t)-E(S_t∘Λ_t^B|𝒯_t+1))^⊺,(1· U_t+1^out, t+1-1·β_t+1)⟩
Considering the linearity of the expected value, formula (<ref>) is proved.
Here we prove formula (<ref>).
We introduce the hedgeable filtration (see <cit.>) ℍ=(ℋ_t)_t=0,...,n-1 and
ℋ_t+1=σ{𝒯_t,𝒢_t+1}
We observe
Var(CDR_t+1^Idios|ℱ_t)
=E[Var(CDR_t+1^Idios|ℋ_t+1)|ℱ_t]+Var[E(CDR_t+1^Idios|ℋ_t+1)|ℱ_t]
=E[Var(CDR_t+1^Idios|ℋ_t+1)|ℱ_t]≥0
where the second equality holds since E(CDR_t+1^Idios|ℋ_t+1)=0: once the financial information 𝒢_t+1 is known, the idiosyncratic CDR is centred, so the second term of the decomposition vanishes.
Switching from a matrix representation to an extended one, formula (<ref>) can be represented as
CDR_t+1^Idios = ∑_k=1^l_0s_k,t·[E(Λ_k,t^B|𝒯_t)-E(Λ_k,t^B|𝒯_t+1)]·η_t+1
Let us define Z_t=⟨(S_t)^⊺, Λ_t^B⟩ as the total sum insured of the deaths occurred at time t, and Z_k,t as the sum insured of the occurred death of the single k-th policyholder.
Since the generic element of Λ_t^B is distributed as a Bernoulli r.v. with parameter q_x+t, the moment generating function (mgf) M_λ_k,t^B(m) is
M_λ_k,t^B(m)=q_x+t· e^m+(1-q_x+t)
Considering the policyholders' sums insured, we have:
M_Z_k,t(m)=q_x+t· e^m · s_k,t+(1-q_x+t)
Considering that the elements of Λ_t^B are identically distributed, we have:
M_Z_t(m)=(q_x+t· e^m · s_k,t+(1-q_x+t))^l_t
Therefore, the cumulant generating function is defined as:
Ψ_Z_t(m)=l_t· ln(q_x+t· e^m · s_k,t+(1-q_x+t))
Hence, the variance is defined as:
Var(Z_t)=l_t· q_x+t·(1-q_x+t)·S̅_t^2
where S̅_t^j denotes the j-raw moment of the ℱ_t-adaptable vector S_t.
|
http://arxiv.org/abs/2307.01817v1
|
20230704164521
|
Human Trajectory Forecasting with Explainable Behavioral Uncertainty
|
[
"Jiangbei Yue",
"Dinesh Manocha",
"He Wang"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Human Trajectory Forecasting with Explainable Behavioral Uncertainty
Jiangbei Yue^1, Dinesh Manocha^2, He Wang^1 (h.e.wang@leeds.ac.uk)
^1University of Leeds, Woodhouse Lane, Leeds, UK
^2University of Maryland at College Park, College Park, Maryland, US
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars, and therefore has been heavily investigated. Most existing methods can be divided into model-free and model-based methods. Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well. Combining both methodologies, we propose a new Bayesian Neural Stochastic Differential Equation model BNSP-SFM, where a behavior SDE model is combined with Bayesian neural networks (BNNs). While the NNs provide superior predictive power, the SDE offers strong explainability with quantifiable uncertainty in behavior and observation. We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods. BNSP-SFM also generalizes better to drastically different scenes with different environments and crowd densities (∼20 times higher than the testing data). Finally, BNSP-SFM can provide predictions with confidence to better explain potential causes of behaviors. The code will be released upon acceptance.
August 1, 2023
==================
§ INTRODUCTION
Accurate human trajectory forecasting benefits many applications, e.g., social robots, self-driving vehicles, etc. <cit.>, and therefore has been studied in areas from computer science, physics, and mathematics to robotics and transportation <cit.>. Existing research largely falls into model-free and model-based methods. Model-free methods enjoy the strong data-fitting capacity of data-driven models such as statistical machine learning models <cit.> and deep neural networks (DNNs) <cit.>. While they provide excellent prediction accuracy, their black-box nature makes it difficult for humans to interpret the learned underlying function. Comparatively, model-based methods are based on explicit systems parameterized as ordinary/partial/stochastic differential equations (O/P/SDEs) <cit.> or rule-based systems <cit.>. These models are explainable but less accurate in prediction <cit.>, as they do not benefit from training on data (or only on small amounts of data) and are therefore a better fit in the small-data regime. Overall, there seems to be a trade-off between the ability to explain (explainability) and the ability to predict (foresight).
A very recent effort has been focused on eliminating the explainability-foresight trade-off via a hybrid approach <cit.>. One can combine a PDE with DNNs for trajectory prediction, where the PDE can explain the behavior while the DNNs provide data-fitting capacity. However, these approaches either do not capture uncertainty or do not explore the structure of uncertainty, only reporting the average/best performance in prediction without giving the confidence of the prediction <cit.>. Generally, this is not ideal because uncertainty modeling is crucial in many tasks, especially in risk-related decision-making <cit.>. Specific to human trajectories, uncertainty can also help explain behaviors better, e.g., why a pedestrian suddenly steers.
To fill this gap while retaining the advantages of hybrid methods, we extend our previous work NSP-SFM <cit.> to propose a new Bayesian Neural Stochastic Differential Equation model for trajectory forecasting. Unlike NSP-SFM, which learns a deterministic steering behavior plus an unexplainable randomness, we further dissect the randomness into aleatoric uncertainty caused by behavioral randomness and epistemic uncertainty caused by unobserved factors <cit.>. We argue that the aleatoric uncertainty should be explainable and the epistemic uncertainty can be unexplainable, because the former is observed in data and can be explained, e.g., as due to collision avoidance, while the latter is unknown. To this end, we propose to model the steering behavior with the aleatoric uncertainty using a learnable stochastic differential equation and learn the epistemic uncertainty with neural nets, leading to a neural stochastic differential equation model. Furthermore, to quantify the level of the aleatoric uncertainty, we propose a Bayesian treatment of its attributed factors.
Specifically, we model the dynamics of a pedestrian trajectory as a second-order SDE, part of which modeling a learnable aleatoric uncertainty caused by stochastic social interactions among pedestrians and stochastic interactions between a pedestrian and the environment. In addition, to account for the epistemic uncertainty, we employ a conditional variational autoencoder (CVAE) that can predict the residual between the prediction from the SDE and the observation <cit.>. Furthermore, to quantify the randomness of the aleatoric uncertainty, we impose priors on the associated factors and learn both the motion dynamics and uncertainty via Bayesian neural networks. We call our model Bayesian Neural Social Physics (BNSP).
We show that our BNSP model can achieve the state-of-the-art performance in standard trajectory prediction tasks across several public datasets and metrics. In addition, the BNSP model can provide not only plausible explanations for predicted trajectories, but also a confidence analysis for the prediction and explanation. Moreover, we demonstrate the better generalizability to unseen scenarios of our BNSP model compared with existing methods. Formally, our contributions include:
* A new Bayesian neural stochastic differential equation model for trajectory forecasting and behavior analysis with fine-grained explainable uncertainties;
* A new way to combine SDE with neural networks for human trajectory prediction;
* A demonstration of the effectiveness of our model over a wide range of tasks and data: trajectory forecasting, behavior explanation with aleatoric uncertainty, generalization to unseen scenarios and high data efficiency.
§ RELATED WORK
Human trajectory forecasting predicts future human trajectories based on historical and environmental information. Existing methods can be divided into model-based and model-free methods depending on whether it explicitly models human behaviors or not.
Model-based approaches are based on first-principles and depend on fundamental assumptions of human motions. With these assumptions, motions can be summarized into rules and deterministic systems, described as ordinary/partial differential equations or optimization problems. In such methods, velocity, acceleration and turning can be assumed to be constant <cit.> to model the steering. Pedestrians can be treated as particles that are subject to forces caused by social interactions <cit.>. In addition to modeling people as entities, the influence of their affective states is also studied <cit.>. Later, data-driven model-based methods are introduced to improve data fitting <cit.>. In such a perspective, model parameters are optimized to fit noisy data <cit.> and simulate the behaviors. However, model-based methods always suffer from limited flexibility of data fitting or learning capacity, making them incapable of learning from large amounts of data and predicting accurately. In comparison, our model exploits the strong data-fitting ability of deep neural networks to leverage large data fully for accurate prediction.
More recently, model-free methods based on deep learning have dominated human trajectory forecasting by demonstrating their surprisingly high prediction accuracy. Social LSTM <cit.> uses Long Short-Term Memory (LSTM) networks to model social interactions and learn from temporal data. This work inspires other methods based on recurrent neural networks <cit.> and more recent applications of transformers <cit.>. In addition, other deep neural networks have also been explored. Deep generative models such as generative adversarial networks and variational autoencoders are exploited to handle uncertainties in the future trajectories and generate multiple acceptable predictions <cit.>. Graph and temporal convolutional neural networks provide a novel, efficient way to model interactions between people <cit.>. Besides high prediction accuracy, it is found that these models have limited explainability and cannot generalize to drastically different scenarios well <cit.>. Different from existing deep learning methods, our model not only provides higher accuracy in prediction but also possesses explainability and better generalizability, benefiting from our embedded explicit model.
Very recently, hybrid approaches combining model-based and model-free methods have emerged. Physical models are embedded into deep neural networks <cit.> to balance explainability and prediction accuracy. Our method falls into this category. Compared with these approaches, our model further explores the fine-grained structure of uncertainty, , aleatoric and epistemic uncertainty. As a result, our model can provide more accurate prediction, stronger explainability and better generalization in different scenarios.
At a high level, our method is part of a recent attempt in physics-based deep learning <cit.> to combine deep neural networks with differential equations. Existing applications include finite element mesh generation <cit.>, reduced-order modeling <cit.>, differentiable simulation <cit.>, etc. Compared with these methods, our research focuses on human trajectory forecasting.
§ METHODOLOGY
We first introduce the background of Bayesian neural stochastic differential equations in <ref>. Then we introduce a new stochastic social physics model in <ref>. Next, we introduce a Bayesian treatment of our stochastic social physics model to derive Bayesian Neural Social Physics (BNSP) as a general framework in <ref>. Further, we instantiate BNSP with a stochastic social force model by augmenting a previous deterministic social force method into a stochastic one in <ref>. Finally, we derive the inference method in <ref> and provide the implementation details in <ref>.
§.§ Background
The Bayesian neural stochastic differential equation <cit.> is an extension and combination of Bayesian neural networks <cit.> and stochastic differential equations (SDEs) <cit.> that allows for the integration of uncertainty quantification and a model-based representation of dynamical systems. In this approach, neural networks are used to approximate the solution of an SDE, and Bayesian inference is used to estimate the parameters of these neural networks. This results in a probabilistic model that captures both the dynamics of the system and the uncertainty associated with the model parameters. Bayesian neural stochastic differential equation models have applications in various fields, including finance, physics, and biology. They can be used for tasks such as prediction, parameter estimation, and control of dynamic systems. The ability to model uncertainty in the dynamics of these systems makes these models particularly useful for behavioral analysis under uncertainty.
An SDE is a differential equation that includes a stochastic component and can capture the randomness inherent to dynamical systems. A typical SDE describes a system evolving over time with random fluctuations:
dX_t = μ(t,X_t)dt + σ(t,X_t)dW_t,
where X_t is a stochastic process of the system state, μ, σ are two real-valued functions and W(t) is a Wiener process. From <ref>, we have its integral form:
X_t+s-X_t= ∫_t^t+sμ(u,X_u)du
+ ∫_t^t+sσ(u,X_u)dW_u.
In contrast to ODEs/PDEs, SDEs explicitly model randomness, and their solutions are stochastic processes. <ref> demonstrates that the change of the solution X_t is the sum of an ordinary Lebesgue integral of μ (the first term) and an Itô integral (the second term). In general, we refer to the functions μ and σ as the drift coefficient and the diffusion coefficient, respectively. The solution X_t is called a diffusion process and satisfies the Markov property. Therefore, we can model pedestrian motion with uncertainty from the viewpoint of SDEs, which naturally captures the uncertainty of future trajectories.
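As an illustration (not part of the model itself), a minimal Euler–Maruyama scheme for simulating such an SDE numerically could look as follows:

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, n_steps, rng):
    """Simulate dX_t = mu(t, X_t)dt + sigma(t, X_t)dW_t on [0, T]."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        t = i * dt
        dW = rng.normal(0.0, np.sqrt(dt))  # W(t+dt) - W(t) ~ N(0, dt)
        x[i + 1] = x[i] + mu(t, x[i]) * dt + sigma(t, x[i]) * dW
    return x
```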
§.§ Stochastic Neural Social Physics
Notation. Assuming p^t∈ℝ^2 is the 2D location of a pedestrian at time t, we represent the whole trajectory as a function p(t). In data, we assume such a trajectory is observed discretely in time {p^0, p^1, ⋯, p^T}. Therefore, P = {p_i^t }_i=1:N^t=1:T∈ℝ^N× T × 2 represents a set of trajectories of N pedestrians of length T. Given p_i^t of the ith person, we consider his/her neighborhood set Ω_i^t which includes the indices of other nearby pedestrians, p_j^t with j ∈Ω_i^t, all of whom influence the motion of pedestrian i. The neighborhood is also a function of time Ω(t).
In Neural Social Physics (NSP) <cit.>, the state of a pedestrian at t is q=[p, ṗ]^𝐓, and the trajectory can be formulated as:
dp(t) = ṗ(t)dt + α(q^t:t-M)
dṗ(t) = F_goal + F_col + F_env,
where F_goal, F_col, F_env are deterministic forces parameterized by neural networks to model social interactions. α(q^t:t-M) is another neural network to take into account any randomness that has not been captured in dṗ(t). Although NSP shows superior performance, it ignores the fine-grained structure of the randomness in the data and simply captures it in α(q^t:t-M).
We propose to divide the randomness into aleatoric and epistemic uncertainty and learn them separately in trajectory forecasting. This is because these two uncertainties come from different sources and bear different meanings. Aleatoric uncertainty arises from the steering behavior, which is random when avoiding other pedestrians or obstacles, while epistemic uncertainty is caused by unknown factors, e.g., affective state, sensor error, etc. <cit.>. Explicitly capturing the aleatoric uncertainty can help explain the behavior as well as give confidence about the prediction.
To this end, we model the aleatoric uncertainty as random forces rising from social interactions. We first formulate the dynamics of a person (agent) in a crowd as:
dp(t) = ṗ(t)dt
dṗ(t) = f_η, ϕ(t, p(t), ṗ(t), Ω(t), p^T, E)dt
+ σ_η, ϕ (t, p(t), ṗ(t), Ω(t), p^T, E)dW(t)
p(0) = p^0, ṗ(0) = ṗ^0, p(T) = p^T
given the initial position p^0, initial velocity ṗ^0 and the destination p^T. Here ṗ(t) and dṗ(t) denote the first-order and the second-order dynamics of positions p(t). f and σ are functions governing the dynamics. W(t) is a Wiener process such that for any t_2 > t_1, W(t_2) - W(t_1) is normally distributed with mean 0 and variance t_2 - t_1. η is the set of explainable parameters and ϕ is the set of unexplainable parameters such as neural network weights. Ω(t) represents the neighborhood set. E is the environment (e.g., obstacles). From <ref>:
p^T - p^0 = ∫_t=0^T ṗ(t)dt
=∫_t=0^T (∫fdt + ∫σdW(t)) dt.
where f is a deterministic steering behavior and σΔW(t) is the steering randomness. <ref> explicitly considers the aleatoric uncertainty, but it alone cannot fully describe the motion. Therefore, we add another term, ε(t, p^t:t-M), to model the epistemic uncertainty, which is time-dependent and depends on the brief history p^t:t-M, where M is the length of the history. The full model thus becomes:
dp(t) = ṗ(t)dt + ε(t, p^t:t-M)
dṗ(t) = f_η, ϕ(t, p(t), ṗ(t), Ω(t), p^T, E)dt
+ σ_η, ϕ (t, p(t), ṗ(t), Ω(t), p^T, E)dW(t)
where we omit the boundary conditions. To fit this model to discrete observations in time, we discretize <ref> into:
p^t+Δt - p^t = ṗ^t+ΔtΔt + ε^t, p^t:t-M
ṗ^t+Δt - ṗ^t = f_η, ϕ(t, p(t), ṗ(t), Ω(t), p^T, E)Δt
+ σ_η, ϕ (t, p(t), ṗ(t), Ω(t), p^T, E)ΔW(t).
Finally, the full forward model is:
p^t+Δt = p^t + ṗ^tΔt + fΔt^2 + σΔtΔW(t) + ε^t, p^t:t-M
<Ref> is the main (stochastic) prediction model employed in this paper. It is a general model and can ideally incorporate any f with a second-order differentiability. We instantiate it later.
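To show how the discretised model unrolls in practice, the sketch below performs one forward step of the equation above; f, sigma and eps stand in for the outputs of the learned terms (treated here as given 2D values, with sigma acting element-wise, which is a simplifying assumption).

```python
import numpy as np

def forward_step(p, p_dot, f, sigma, eps, dt, rng):
    """One step: p' = p + p_dot*dt + f*dt^2 + sigma*dt*dW + eps."""
    dW = rng.normal(0.0, np.sqrt(dt), size=2)   # 2D Wiener increment
    p_dot_next = p_dot + f * dt + sigma * dW    # second-order (force) update
    p_next = p + p_dot_next * dt + eps          # first-order update + residual
    return p_next, p_dot_next
```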
§.§ Bayesian Neural Social Physics
A key difference between our method and previous ones is that we aim to quantify the aleatoric uncertainty by estimating the explainable parameters η in <ref> during inference. To this end, we propose a Bayesian treatment of them. Note we temporarily ignore the unexplainable parameters ϕ as they are fixed once learned. Given a new brief history p̂^h={p̂^0, p̂^1, ⋯, p̂^t_h}, we predict the future trajectory p̂^f={p̂^t_h+1, p̂^t_h+2, ⋯, p̂^t_h+t_f} in the testing phase. A Bayesian predictor can be represented as:
p(p̂^f |p̂^h, P) = ∫ p(p̂^f, η|p̂^h, P)dη =
∫ p(p̂^f |p̂^h, P, η) p(η|p̂^h, P) dη =
∫ p(p̂^f |p̂^h, η)p(η|P)dη,
where η is a latent variable. p(p̂^f |p̂^h, P, η) = p(p̂^f |p̂^h, η) because, given η, the training set P carries no additional information for prediction; p(η|p̂^h, P) = p(η|P) as p̂^h is not used for estimating η. After learning p(η|P), the model predicts via p(p̂^f |p̂^h, η) averaged over all η. With Bayesian inference to learn p(η|P), we obtain our full framework BNSP by using <ref> for p(p̂^f |p̂^h, η). BNSP benefits from the quantification of uncertainty during prediction.
Remarks. Most existing deep learning methods can be represented as <ref>, where p(p̂^f |p̂^h, η) is realized as neural networks (NNs) parameterized by
learned parameters <cit.>. Deviating from these unexplainable models, recently some research <cit.> shows that p(p̂^f |p̂^h, η) can be realized as ODEs/PDEs with model parameters and motion randomness captured by NNs. However, while the ODEs/PDEs provide explainability to some extent, they are intrinsically deterministic so that the randomness is either not captured or still captured by additional neural network components hence unexplainable. Their learned randomness does not truly capture the posterior p(η|P). Explicitly capturing and explaining such uncertainty is crucial for analysis and prediction, which can be naturally captured by Bayesian inference, as proposed by us.
§.§ Instantiation of BNSP with Stochastic Social Forces
Similar to <cit.>, we instantiate f using social forces <cit.> but augment it with stochastic forces instead of deterministic ones. One key difference in the modeling choice between our model (<ref>) and NSP-SFM <cit.> (<ref>) is that NSP-SFM assumes all the randomness happens at the first order, while BNSP assumes the epistemic uncertainty is captured at the first order but the aleatoric uncertainty is captured at the second order, which is more sensible as this is where the stochastic forces caused by social interactions are modeled. We name our model BNSP-SFM.
Similar to <ref>, we consider three main factors for the behavior in <ref>: stochastic goal attraction ℱ_goal, stochastic collision avoidance ℱ_col and stochastic environment repulsion ℱ_env. To this end, we specify the second-order part of <ref> as:
f+σΔW(t)/Δt = ℱ_goal + ℱ_col + ℱ_env,
Substituting <ref> into <ref> gives:
p^t+Δt = p^t + ṗ^tΔt +
(F^t_goal + F^t_col + F^t_env)Δt^2 + ε^t,p^t:t-M
F^t_goal ∼ℱ^t_goal, F^t_col∼ℱ^t_col, F^t_env∼ℱ_env,
where ℱ_goal and ℱ_col are time-varying Gaussians. ℱ_env is a static Gaussian. The overview of our model is shown in model. After training, given an input trajectory and an endpoint, we predict distributions ℱ^t_goal, ℱ^t_col and ℱ_env by neural networks; then we can sample F^t_goal∼ℱ^t_goal, F^t_col∼ℱ^t_col, F^t_env∼ℱ_env and ε^t,p^t:t-M at time t. Iteratively, we can predict positions via solving <ref>. Note that, in <ref>, we assume that destinations p^T are given, although they are not available directly during prediction. Therefore, we employ the pre-trained Goal Sampling Network (GSN) <cit.> to sample p^T in advance for prediction during testing.
§.§.§ Individual Neural Networks
Stochastic Goal Attraction Humans are constantly attracted to their destinations. However, the magnitude and direction of such an attraction can vary over time and in different circumstances (e.g., a detour for collision avoidance). We model such goal attraction as a stochastic force:
F^t_goal = ((p^T-p^t)/((T-t)Δt) - ṗ^t)k_goal^t
k_goal^t ∼𝒩_ϕ_1(μ_goal^t, σ_goal^t 2),
where ((p^T-p^t)/((T-t)Δt) - ṗ^t) is the expected velocity correction on the current velocity ṗ^t towards p^T. μ_goal^t and σ_goal^t are the mean and standard deviation functions at t, realized by the Goal Network (GN) with parameters ϕ_1:
[μ_goal^t, σ_goal^t]_ϕ_1 = GN_ϕ_1(p^t, ṗ^t, p^T)
whose architecture is shown in <ref>. [p^t, ṗ^t] is encoded and fed into a Long Short-Term Memory (LSTM) network <cit.>. The output of the LSTM is transformed by a linear layer to get the feature f_goal^t. The first orange block (orange dashed lines), which consists of two fully connected blocks and one linear layer, is used to encode the destination p^T into f_goal^T. Then the concatenated feature [f_goal^t,f_goal^T] is fed into the second orange block to output the distribution parameters [μ_k_goal, logσ_k_goal]. Instead of learning σ_k_goal directly, we learn its logarithm to ensure that every output dimensions have the same range. The dimensions of each linear layer in the first and second orange block are [64, 256, 16] and [512, 256, 512, 2], respectively. In the LSTM block, the linear layers before and after the LSTM have dimensions 64 and 16, respectively. We use an LSTM with 256 dimensions.
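A minimal PyTorch sketch of GN following the dimensions quoted above is given below; the ReLU activations and the exact layout of the fully connected blocks are assumptions, as the text does not specify them.

```python
import torch
import torch.nn as nn

class GoalNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.pre = nn.Linear(4, 64)                # encode [p^t, pdot^t]
        self.lstm = nn.LSTM(64, 256, batch_first=True)
        self.post = nn.Linear(256, 16)             # -> f_goal^t
        self.goal_enc = nn.Sequential(             # first orange block
            nn.Linear(2, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, 16))                    # -> f_goal^T
        self.head = nn.Sequential(                 # second orange block
            nn.Linear(32, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 2))                     # -> [mu, log sigma]

    def forward(self, state_seq, goal):
        # state_seq: (B, T, 4) histories of [p^t, pdot^t]; goal: (B, 2) = p^T
        h, _ = self.lstm(self.pre(state_seq))
        f_t = self.post(h[:, -1])                  # feature at the last step
        f_T = self.goal_enc(goal)
        out = self.head(torch.cat([f_t, f_T], dim=-1))
        mu, log_sigma = out.chunk(2, dim=-1)
        return mu, log_sigma
```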
Stochastic Collision Avoidance Given the ith pedestrian at t, p_i^t, its neighborhood Ω_i^t and any pedestrian p_j^t, j∈Ω^t_i, we define the collision avoidance factor ℱ_col^t as follows:
ℱ_col^t = ∑_j ∈Ω_i^tℱ_col_ij^t, F^t_col_ij∼ℱ^t_col_ij
F^t_col_ij =-∇_r_ij^tr_cole^-‖r_ij^t‖/r_colk^t_col_ij,
r_ij = p_i^t - p_j^t, k^t_col_ij∼𝒩_ϕ_2(μ^t_col_ij, σ^t 2_col_ij),
where μ^t_col_ij and σ^t_col_ij are the mean and standard deviation functions. Similar to <cit.>, r_cole^-‖r_ij‖/r_col is a repulsive potential energy function with a radius r_col (a hyperparameter), and the negative gradient w.r.t. r_ij gives a repulsive force. However, such repulsion between pedestrians has randomness <cit.>. Therefore, our model considers a time-varying Gaussian with learnable mean and variance, so that the collision avoidance factor ℱ_col^t is a time-varying Gaussian mixture. We realize μ_col_ij and σ_col_ij by the Collision Network (CN) parameterized by ϕ_2:
[ μ_col_ij^t, σ_col_ij^t]_ϕ_2 = CN_ϕ_2( p^t_i, ṗ^t_i, p^t_j, ṗ^t_j)
whose architecture is shown in <ref>. For any neighbor p_j^t in Ω_i^t, the collision network encodes [p_i^t, ṗ_i^t] and [p_j^t, ṗ_j^t] to features f_col^t and f_j^t, respectively. The concatenated feature [f_col^t, f_j^t] is fed into a decoder to output the distribution parameters. The collision network has the same architecture and dimensions as the goal network except that the input dimension of the first orange block is 4.
Stochastic Environment Repulsion To model how people avoid obstacles in the environment, given the position p^t and an obstacle position p_obj, we define the repulsion as:
ℱ_env^t = ∑_obj∈Eℱ_obj^t, F^t_obj∼ℱ^t_obj
F_obj^t = ((p^t - p_obj)/‖p^t - p_obj‖^2_2)k_env, k_env∼𝒩(μ_obj, σ_obj^2)
where μ_obj and σ_obj are the mean and standard deviation. Different from <ref> and <ref>, k_env is assumed to be time-independent and agent-independent. This is because we observe similar influences of obstacles on different pedestrians. Therefore, unlike ℱ_goal and ℱ_col, we learn μ_obj and σ_obj directly and do not use neural networks.
Epistemic Uncertainty
Finally, we specify the epistemic uncertainty term ε^t,p^t:t-M. Here, we employ the same strategy as <cit.> and assume the observation noise has a well-behaved distribution in the latent space, rather than the data space. This is because the epistemic term captures all the residual randomness that is not captured by <ref> and its distribution is arbitrary. Therefore, we use a variational autoencoder to learn its distribution. The architecture of the Conditional Variational Autoencoder (CVAE) is shown in <ref>. We use the CVAE to reconstruct the residual r^t+1 = p^t+1 - p̅^t+1 between the ground truth p^t+1 and the prediction p̅^t+1 only considering the aleatoric uncertainty. The residual r^t+1 is encoded by E_ϕ_3, while the history condition (p^t:t-M,p̅^t+1) is encoded by another encoder E_ϕ_4. Their outputs are concatenated and then fed into the encoder E_ϕ_5 to output means and variances of the Gaussian distribution of the latent variable Z. Then we sample Z and concatenate it with the feature of the history condition to generate the input of the decoder D_ϕ_6. Finally, the decoder D_ϕ_6 computes the predicted residual.
The red connections in <ref> are only used in the training phase. The ground truth r^t+1 is unavailable during the testing phase. We sample the latent variable Z from a Gaussian distribution 𝒩(0, σ_latentI) with a hyper-parameter σ_latent. We extract the history feature from the history condition through the trained encoder E_ϕ_4. Then we concatenate sampled Z and the history feature to decode via the trained decoder D_ϕ_6 to obtain the estimated residual. The neural networks E_ϕ_3, E_ϕ_4, E_ϕ_5, and D_ϕ_6 are all multi-layer perceptrons (MLPs). The dimensions of neural networks and the values of hyper-parameters are shown in <ref>.
§.§ Loss Function and Bayesian Inference
Now we are ready to derive the loss function for our BNSP-SFM model to learn the posterior p(η|P)∝ p(P|η)p(η), where p_ϕ(P|η) is induced by the doubly stochastic differential equation (<ref>). Here, P denotes the training set and η = {k_goal, k_col_ij, k_env} is directly explainable. In addition, the distributions are parameterized by ϕ = ϕ_F∪ϕ_ε = {ϕ_1, ϕ_2, μ_obj, σ_obj}∪{ϕ_3, ϕ_4, ϕ_5, ϕ_6 }, which includes unexplainable network weights and distribution parameters that cannot be explained directly. Our learning scheme consists of two parts, with losses ℒ_Bayes and ℒ_cvae. We use variational Bayesian inference <cit.> to derive ℒ_Bayes because the integral over η is intractable, preventing us from learning p(η|P) directly. Specifically, we minimize the KL divergence between a variational posterior q_ϕ_F(η) and the true posterior p(η|P):
ϕ_F = _ϕ_F D_𝕂𝕃(q_ϕ_F(η) ‖ p(η|P))
= _ϕ_F 𝔼_q_ϕ_F(η)[log q_ϕ_F(η) - log( p(P|η)p(η)/p(P))]
= _ϕ_F{𝔼_q_ϕ_F(η) [log q_ϕ_F(η) -log p(P|η)p(η)] + log p(P)},
where log p(P) is constant w.r.t. ϕ_F and can be dropped, giving
ϕ_F = _ϕ_F 𝔼_q_ϕ_F(η) [log q_ϕ_F(η) -log p(P|η)p(η)] ≜ _ϕ_F ℒ_Bayes(ϕ_F|P,η),
where we assume that both q_ϕ_F(η) and p(η) are diagonal Gaussian distributions. Then we can get:
log q_ϕ_F(η) = ∑_k ∈η -(k-μ_ϕ_F)^2/2σ^2_ϕ_F - logσ_ϕ_F - log√(2π)
log p(η) = ∑_k ∈η -(k-μ_prior)^2/2σ^2_prior - logσ_prior - log√(2π),
where μ_ϕ_F and σ^2_ϕ_F are the means and variances predicted by our method with parameters ϕ_F for each k ∈η. We empirically choose the prior for training. We also model the likelihood p(P|η) as a diagonal Gaussian:
log p(P|η) = ∑_p^f ∈P -1/2‖p̅^f-p^f ‖_2^2 - t_f/2log 2π,
where p̅^f={p̅^t_h+1, p̅^t_h+2, ⋯, p̅^t_h+t_f} is calculated via <ref> given η and p^h={p^0, p^1, ⋯, p^t_h}, and p^f is the ground truth. We optimize ϕ_F using ℒ_Bayes. This learns the parameters that are not in the CVAE.
Then we train the CVAE by using:
ℒ_cvae = 1/Nt_f∑_i=1^N∑_t=t_h+1^t_h+t_f{‖r_i^t - r̅_i^t‖_2^2
+ λ D_𝕂𝕃(q_ϕ_5(z_i^t| E_ϕ_3,E_ϕ_4)𝒩(0, I))},
where N is the total number of data samples, λ is a tradeoff hyper-parameter, r_i^t=p_i^t - p̅_i^t is the residual and r̅_i^t is the predicted residual out of p_ϕ_6. Our overall loss function is ℒ = ℒ_Bayes+ℒ_cvae.
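A minimal sketch of ℒ_cvae, assuming the standard closed-form KL divergence between the diagonal Gaussian posterior and 𝒩(0, I):

```python
import torch

def cvae_loss(r, r_hat, mu_z, logvar_z, lam):
    """Residual reconstruction plus lam-weighted KL(q(z|.) || N(0, I))."""
    rec = ((r - r_hat) ** 2).sum(-1).mean()
    kl = -0.5 * (1 + logvar_z - mu_z.pow(2) - logvar_z.exp()).sum(-1).mean()
    return rec + lam * kl
```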
§.§ Implementation Details
We first pre-train the model without the epistemic uncertainty by using ℒ_Bayes to ensure that the aleatoric uncertainty captures most of the behavior, then train the CVAE via ℒ_cvae while fixing other parameters. Details are shown in <ref>.
We use ADAM for training. The learning rates for the attraction network, the evasion network, and the environment distribution are between 3 × 10^-6 and 3 × 10^-5. The learning rates for CVAE are between 1 × 10^-7 and 1 × 10^-6.
§ EXPERIMENTS
§.§ Datasets
We use two public datasets for evaluation: SDD <cit.> and ETH/UCY <cit.>, which are widely used in human trajectory forecasting. Stanford Drone Dataset (SDD): The dataset contains videos across 20 different scenes in bird's eye view, with more than 100,000 pedestrian-pedestrian interactions and pedestrian-environment interactions. Following <cit.>, we extract trajectories with a time step of 0.4 seconds and obtain 20-frame samples for an 8/12 setting, i.e., given the first 8 frames (3.2 seconds, t_h=7), we aim to predict the future 12-frame trajectories (4.8 seconds, t_f=12). ETH/UCY Datasets: There are five sub-datasets (ETH, Hotel, Univ, Zara1, and Zara2), including more than 1500 pedestrians with various behaviors such as collision avoidance. Following the standard leave-one-out evaluation protocol <cit.>, we train our model on four sub-datasets and test it on the remaining one in turn. The world coordinates used by the dataset do not match some parts of our model such as the goal sampling network and ℱ_env; our model generally works in the pixel space. Therefore, we convert the world coordinates into pixel coordinates using the homography matrices from Y-net <cit.>. We project the predictions back into the world space to calculate errors for fair comparisons with existing methods. We extract the trajectories in the same way as SDD and adopt the same 8/12 prediction strategy. For pedestrians that have fewer than 20 frames, we treat them as observed dynamic obstacles as part of the environment.
§.§ Trajectory Forecasting
We adopt the well-established Average Displacement Error (ADE) and Final Displacement Error (FDE) <cit.> to measure prediction accuracy. ADE is the ℓ_2 error between a predicted trajectory and its ground truth averaged over all positions, while FDE is the ℓ_2 error between the predicted destination and its ground truth. Following prior research, we report the best ADE and FDE among multiple sampled trajectories. There are two sampling strategies in existing work: standard-sampling and ultra-sampling. Standard-sampling employs 20 sampled trajectories and ultra-sampling employs 20 sampled positions at each step. The former is more widely employed and the latter is employed when the model is intrinsically stochastic, especially when multiple stochastic components exist <cit.>. We compare our BNSP-SFM under both standard-sampling and ultra-sampling with a wide range of baselines: Social GAN (S-GAN) <cit.>, Sophie <cit.>, NEXT <cit.>, P2TIRL <cit.>, SimAug <cit.>, PECNet <cit.>, Y-Net <cit.>, S-CSR <cit.>, SocialVAE <cit.>, V^2-Net <cit.>, and NSP-SFM <cit.>.
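For concreteness, a minimal implementation of the best-of-K evaluation could look as follows (assuming, as is common, that the best ADE and best FDE are selected independently over the K samples):

```python
import numpy as np

def ade_fde(pred, gt):
    """pred: (K, T_f, 2) sampled trajectories, gt: (T_f, 2); best-of-K."""
    d = np.linalg.norm(pred - gt, axis=-1)   # (K, T_f) per-step L2 errors
    return d.mean(axis=1).min(), d[:, -1].min()
```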
We first show the standard-sampling results in <ref> and <ref>. Our BNSP-SFM achieves state-of-the-art performance on both SDD and ETH/UCY. Overall, both NSP-SFM and BNSP-SFM outperform other methods, with BNSP-SFM providing slightly better results than NSP-SFM. On SDD, BNSP-SFM obtains 6.46/10.49 in ADE/FDE, improving on the past state-of-the-art model NSP-SFM (6.52/10.61) by 0.92%/1.13%. Compared with other previous methods, our BNSP-SFM has at least a 9.27%/7.90% improvement in ADE/FDE. On ETH/UCY, BNSP-SFM improves ADE by 5.88% on average and achieves a better average FDE, with a maximum improvement of 15.38% on Hotel. Since BNSP-SFM is based on NSP-SFM, it is understandable that they achieve similar numerical accuracy. However, the Bayesian components do not hurt prediction accuracy while providing additional explainability and confidence estimation, which are extra benefits.
Further, we show ultra-sampling results in <ref>. BNSP-SFM outperforms NSP-SFM by approximately 14.61%/60.17% in ADE/FDE on SDD. We also observe that BNSP-SFM gains the same performance in ADE and a 50% improvement in FDE on ETH/UCY compared with NSP-SFM. Stochastic models with ultra-sampling such as S-CSR and NSP-SFM have been shown to be more accurate in prediction <cit.>, but BNSP-SFM still outperforms them in general. For a fair comparison with S-CSR, both BNSP-SFM and NSP-SFM sample 20 destinations and only 15 positions at each step, so the total number of sampled trajectories is slightly smaller than for S-CSR, which does not need given goals and samples 20 positions at each step following the ultra-sampling strategy. Nevertheless, both BNSP-SFM and NSP-SFM outperform S-CSR on both datasets. The main difference is that S-CSR is a black-box neural network based on a variational autoencoder, while the cores of BNSP-SFM and NSP-SFM are explicit models, which clearly demonstrates their advantage. Further, BNSP-SFM improves the accuracy or is at least on par with NSP-SFM in all scenarios, indicating that the Bayesian components capture the uncertainty better and our new model retains the great prediction ability.
§.§ Generalization
§.§.§ Generalization on Cross-scene Testing
One way to test generalizability is cross-scene testing, i.e., training on one scene and testing on another, drastically different scene. Although the results on ETH/UCY and SDD are already based on cross-scene testing, we increase the challenge by using ETH/UCY and SDD as training and testing data respectively, where the scenes are more different in terms of the environment, space size and pedestrian dynamics. For comparison, we choose Y-net and NSP-SFM as baselines and show the results in <ref>. The performance of Y-net drops severely from 7.85/11.85 (when it is also trained on SDD) to 30.59/51.43. NSP-SFM and BNSP-SFM perform slightly worse than when they are also trained on SDD, but considerably better than Y-net: NSP-SFM changes from 6.52/10.61 to 6.65/10.60, while BNSP-SFM changes from 6.46/10.49 to 6.55/10.59. This shows that NSP-SFM and BNSP-SFM learn intrinsic behaviors of pedestrians that are universal across scenes.
§.§.§ Generalization to High-density Scenarios
Generalization is the ability to adapt to unseen data. Normally, we assume that the distribution of the training data is similar to that of the testing data. A model with a good generalizability should perform well not only on the testing data but also on drastically different data. For trajectory prediction methods, an efficient way to evaluate their generalizability is to use the trained model to simulate pedestrians with much higher densities, as it will lead to significantly different pedestrian dynamics <cit.>.
Following <cit.>, we adopt the collision rate as the metric to evaluate the generalizability. We regard each agent as a disc with a radius of r pixels. We count one collision if the minimum distance between two trajectories falls below 2r at any time. Given N agents in the scene, we calculate the collision rate as R_col = M/N(N-1)/2, where M is the number of collisions. Generally, the ground-truth r is hard to acquire because of the tracking error, the distorted images, etc. We observe unrealistically high collision rates in all cases when r is too large. In contrast, the collision rate will always be too low when r is too small, , r = 0 will give 0% collision rate all the time. Therefore, we need to search for a reasonable r that keeps the collision rate of the ground-truth data approximately zero.
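A direct transcription of this collision-rate definition into code:

```python
import numpy as np
from itertools import combinations

def collision_rate(traj, r):
    """traj: (N, T, 2); a pair collides if min distance over time < 2r."""
    n = traj.shape[0]
    m = sum(np.linalg.norm(traj[i] - traj[j], axis=-1).min() < 2 * r
            for i, j in combinations(range(n), 2))
    return m / (n * (n - 1) / 2)
```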
According to <cit.>, we set r to 7.5 pixels and 0.2 m on SDD and ETH/UCY, respectively. A lower collision rate means higher plausibility and better generalization. We conduct two kinds of experiments: one measures the collision rates on the testing data of SDD and ETH/UCY; the other measures the collision rates in unseen scenarios with higher crowd densities. The first compares the plausibility of predicted trajectories in scenarios similar to the training data. The second pushes the models for generalization. We use NSP-SFM, Y-net, and S-CSR as baselines.
Collision rates on testing data (<ref>) show all methods achieve reasonable results, with the maximum 1.82% from S-CSR on UNIV. All methods achieve 0% in ETH and HOTEL. This is not surprising for two reasons. First, all methods predict well. So, if the ground-truth data does not contain collisions, neither do the predictions. The second reason is the sparsity of people in the scene. For instance, the highest number of people who are simultaneously in the scene is 11 in Coupa0 (in SDD). Even if the prediction goes wrong, there might not be people around, so no collisions will occur. When people are indeed close to each other, NSP-SFM and BNSP-SFM outperform Y-net and S-CSR. This is because our learned explicit model has an explicit repulsive force between agents, and can therefore avoid collisions.
To test models in drastically different scenarios from the two datasets, we select the scene Coupa0 from SDD and use the highest number of people (HNP) in the scene as an indicator of the crowd density. Coupa0 has a large space so, in theory, it can contain many people. However, the HNP in the original Coupa0 is merely 11. Therefore, we increase the HNP to 50, 100, 150, and 200 people. Then we use all methods trained on the original SDD data as simulators to run a long simulation and compute the collision rates for three time intervals of the simulation under each HNP, following the same setting in <cit.>. Specifically, instead of creating all agents at the same time, we initiate agents batch by batch from the boundaries so that they start with no collisions and walk into the scene. For 30-second simulations, we divide the time into t = 0 to 8, t = 4 to 12, and t = 8 to 16, where the density in the central area is the highest during t=8 to 16.
We report the average collision rates of three intervals in each simulation for every method in <ref>. Overall, BNSP-SFM and NSP-SFM outperform the baseline methods with lower collision rates across different agent numbers. To further understand it, we show detailed collision rates on three intervals for all methods in <ref> when HNP=200. Y-net performs poorly in all three time intervals. This indicates that, from the very beginning, collisions start to happen. Comparatively, S-CSR performs better, especially in the beginning when the agents start to walk into the scene. However, its collision rate spikes to 2% when the density becomes the highest. Comparatively, BNSP-SFM and NSP-SFM perform well in every time interval. Although their collision rates also increase, the highest is only 0.8% for NSP-SFM and 0.5% for BNSP-SFM. On average, Y-net, S-CSR, and NSP-SFM are 1400%, 300%, and 66.67% worse than BNSP-SFM.
Collision rate does not directly reflect the number of collisions. In safety-critical applications, every collision should count. Therefore, we also show the averaged number of collisions of three intervals in each simulation experiment in <ref>. All models have more collisions when the density becomes higher. However, our BNSP-SFM model maintains the lowest collision number across all simulation settings. The closest second is NSP-SFM. Moreover, BNSP-SFM also shows the lowest increase rate of collisions when the density increases. Overall, we demonstrate that BNSP-SFM and NSP-SFM possess the strongest generalization to high-density scenarios through collision rates and the number of collisions. The repulsive force in our explicit model plays a key role in collision avoidance.
§.§ Explainability of Prediction
Being able to explain behaviors is crucial in human trajectory forecasting <cit.>. Our BNSP-SFM not only provides a plausible interpretation of pedestrian behaviors, but also gives the confidence of the interpretation. To this end, we analyze the three main factors ℱ_goal, ℱ_col, ℱ_env in the behavioral model, all of which are Gaussians. We demonstrate several explainability examples and compare this model with NSP-SFM <cit.>.
In <ref>, we choose pedestrians with similar trajectories predicted by BNSP-SFM and NSP-SFM. In the top row, the pedestrian steers to avoid other pedestrians instead of going directly towards the goal. Both methods explain the steering by the influence of the goal attraction and the collision avoidance (yellow and blue arrows). However, BNSP-SFM also gives the confidence of the explanation, shown as heatmaps based on the learned means and variances of different factors. This not only gives the possible alternative explanations, but also shows how confident BNSP-SFM is regarding each explanation.
<ref> Bottom shows a slightly more complex explanation where there is also environment repulsion (black arrow). The agent is attracted by his/her destination (yellow arrow) while he/she avoids collisions with other pedestrians (light blue arrow) and is repelled by the obstacle car in the environment (black arrow). When multiple factors are involved, the confidence maps associated with the factors are more informative in terms of their relative confidence in our model. The environment repulsion arises when the person is very close to the car and avoiding the collision becomes a major concern. This is reflected by the environment repulsion having a more concentrated confidence map than the goal attraction and the collision avoidance. To be specific, the standard deviations for the factors ℱ_goal, ℱ_col, and ℱ_env here are [9.13, 11.40], [5.73, 8.14], and [5.29, 2.02], respectively, where the two standard deviations for each factor correspond to the x-axis and y-axis. This means BNSP-SFM is more certain about the influence of the environment repulsion.
Overall, both NSP-SFM and BNSP-SFM give similar predicted trajectories with great prediction accuracy. Both methods can explain the same three factors. However, the means and the variances of the estimated distributions of social forces in BSNP-SFM provide more informative and explainable predictions.
We show more explainability examples of our BNSDE model in <ref>. The future trajectories (green dots) are predicted by using the standard sampling, where the ℱ_goal dominates among the three factors and has a more concentrated confidence map, shown in <ref> (a). This is likely because there are not many imminent collisions for this person. With the more concentrated confidence map, BNSP-SFM is more certain about the attraction of the destination, and the predicted future trajectory goes almost straight to the destination. In <ref> (b), we ignore the influence of environment ℱ_env because it's too weak to be visualized here. ℱ_col dominates and has a more concentrated confidence map, meaning that our BNSP-SFM model is more certain about collision avoidance. This is because the neighbor in front of the person is walking at a high speed. The predicted future trajectory first avoids the neighbor then aims for the destination.
In human trajectory forecasting, to our best knowledge, few deep learning methods <cit.> can provide explainability. In theory, we can also visualize black-box deep learning methods such as latent features or layer activations. However, it's difficult to determine how to visualize them to explain the behaviors. Among the explainable models, BNSP-SFM models the fine-grained structure of commonly observed uncertainty <cit.> and is therefore more explainable.
§.§ Epistemic Uncertainty
<ref> shows how the BNSP-SFM model captures epistemic uncertainty as a residual of the predicted behavior with the aleatoric uncertainty. There are no neighbors or obstacles. The predicted trajectories in <ref> (yellow and purple dots) are based on ultra-sampling. We capture only aleatoric uncertainty in <ref> (a). We can see that the yellow dots are close to the ground truth (black dots), showing partial uncertainty has been captured. BNSP-SFM exploits CVAE to capture the remaining epistemic uncertainty. Purple dots in <ref> (b) denote the prediction with both the aleatoric and the epistemic uncertainty and almost overlap with the ground truth. This means that our model can capture the complete uncertainty well.
§.§ Data Efficiency
For human trajectory forecasting, it is expensive and time-consuming to collect clean data, which might involve manual labelling and checking. Therefore, data efficiency is crucial. We conduct experiments to test the data efficiency. We decrease the training data of SDD to 50% and 25%, then train BNSP-SFM, NSP-SFM, and S-CSR and test them on the original testing data of SDD. The results are shown in <ref>.
Both BNSP-SFM and NSP-SFM have higher data efficiency than S-CSR. Intuitively, this is sensible. When there is little data, black-box deep neural networks like S-CSR will be under-trained or overfitted. Having an explicit model in neural nets significantly reduces the required data size, which is known in differentiable physics models as explicit models can act as regularizers in learning <cit.>. Both NSP-SFM and BNSP-SFM benefit from it.
Between NSP-SFM and BNSP-SFM, BNSP-SFM has less performance deterioration when the training data is reduced. This is because BNSP-SFM learns distributions of social forces, while NSP-SFM learns deterministic forces. Once the distributions are learned, it is still possible to sample good predictions, while NSP-SFM needs enough data to accurately learn the forces across space and time.
§.§ Analysis of Failure Cases
Despite achieving state-of-the-art performance in multiple applications, there are two sources of prediction error that BNSP-SFM can struggle to mitigate. Similar to deterministic SFM models, our model relies on the destination, which, if predicted wrongly, can cause a high overall prediction error, as shown in <ref> (a). This stems from the overall explicit nature of BNSP-SFM, i.e., the assumption that the overall movement trend is largely decided by the force leading to the destination. A more fine-grained modeling of the destination could improve the results. Also, BNSP-SFM cannot predict well on highly nonlinear motions, which are rare in the data employed. This is shown in <ref> (b), where the pedestrian suddenly stopped, giving near-zero velocities in a number of frames. In this case, the goal attraction force can cause the agent to overshoot in reaching the goal. As a result, the agent moves around the goal for some time before being able to completely stop there. Such sudden-stop behaviors also cause issues in simulators, where heuristics are needed to make the agent stop as soon as it reaches its goal. We do not employ similar heuristics as we mainly target prediction in this paper.
§.§ Ablation Study
To further understand the different components in our model, we conduct ablation experiments on SDD, shown in <ref>. We can see that our model can obtain good results when only considering ℱ_goal because this is the goal attraction, which already determines the motion trend and governs the motion dynamics. ℱ_col and ℱ_env might not exist if there is no imminent collision with other pedestrians or the environment. However, we still obtain more accurate results by considering them.
Next, significant improvement is obtained by incorporating the aleatoric uncertainty. This demonstrates that our aleatoric model can capture the dynamics stochasticity well, meaning the aleatoric uncertainty is also a large source of prediction error. Note that the aleatoric uncertainty is modeled at the force level which further models the social interactions. Capturing such interaction uncertainty significantly improves the prediction.
Finally, we gain an even better result 1.52/1.37 by combining aleatoric and epistemic uncertainty. The total uncertainty is captured well by exploiting the excellent data-fitting capacity of deep learning. We can see that the performance improvement trend is similar for both BNSP-SFM and NSP-SFM in <ref>. This is because these two methods model the same three factors; the major difference is where the uncertainty is captured. To show the importance of fine-grained modeling of uncertainty, we compare BNSP-SFM with NSP-SFM. One noticeable result is that NSP-SFM also improves when uncertainty is captured, jumping from 6.52/10.61 to 1.78/3.44. However, such a gain is obtained by blindly fitting a neural network to capture the combined aleatoric and epistemic uncertainty. Consequently, not only is the uncertainty unexplainable, the overall performance is also slightly worse.
§ DISCUSSION, CONCLUSION AND FUTURE WORK
We have proposed a novel Bayesian neural stochastic differential equation model for human trajectory forecasting. BNSP-SFM outperforms existing methods, achieving higher prediction accuracy, better generalizability, more explainability and higher data efficiency. One limitation is that our model does not explicitly consider high-level factors in crowd dynamics, such as affective states, which crowd research has found to be closely related to motion randomness. In future work, we will incorporate high-level factors, including the affective state, for better explainability. Moreover, we will explore our model in other areas such as autonomous vehicles and social robots.
§ ACKNOWLEDGMENT
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 899739 CrowdDNA.
§ DECLARATIONS
* Funding. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 899739 CrowdDNA.
* Conflict of interest/Competing interests. Not applicable
* Ethics approval. Not applicable
* Consent to participate. Not applicable
* Consent for publication. All authors have given their consent for publication.
* Availability of data and materials. Data sharing not applicable to this article as no datasets were generated during the current study.
* Code availability. The code will be shared upon acceptance. Part of the code has been shared at http://drhewang.com/pages/NSP.html
* Authors' contributions. Jiangbei Yue contributed in conceptualization, experiment design, conducting experiments and drafting the paper. Dinesh Manocha contributed in conceptualization and paper drafting. He Wang led the whole research project, including conceptualizing the research idea, supervision, drafting the paper, etc.
|
http://arxiv.org/abs/2307.02083v1
|
20230705074654
|
Leveraging multilingual transfer for unsupervised semantic acoustic word embeddings
|
[
"Christiaan Jacobs",
"Herman Kamper"
] |
eess.AS
|
[
"eess.AS",
"cs.CL"
] |
Leveraging Multilingual Transfer for Unsupervised Semantic Acoustic Word Embeddings
Christiaan Jacobs, Student Member, IEEE, Herman Kamper, Senior Member, IEEE
Manuscript submitted to IEEE Signal Processing Letters.
The authors are with E&E Engineering, Stellenbosch University, South Africa (e-mail: 20111703@sun.ac.za; kamperh@sun.ac.za).
Acoustic word embeddings (AWEs) are fixed-dimensional vector representations of speech segments that encode phonetic content so that different realisations of the same word have similar embeddings.
In this paper we explore semantic AWE modelling.
These AWEs should not only capture phonetics but also the meaning of a word (similar to textual word embeddings).
We consider the scenario where we only have untranscribed speech in a target language.
We introduce a number of strategies leveraging a pre-trained multilingual AWE model—a phonetic AWE model trained on labelled data from multiple languages excluding the target.
Our best semantic AWE approach involves clustering word segments using the multilingual AWE model, deriving soft pseudo-word labels from the cluster centroids, and then training a Skipgram-like model on the soft vectors.
In an intrinsic word similarity task measuring semantics, this multilingual transfer approach outperforms all previous semantic AWE methods.
We also show—for the first time—that AWEs can be used for downstream semantic query-by-example search.
Semantic embeddings, acoustic word embeddings, semantic retrieval, query-by-example search.
§ INTRODUCTION
Word embedding models such as Word2Vec <cit.> and GloVe <cit.> revolutionised natural language processing (NLP) by mapping written words to continuous fixed-dimensional vectors.
These models learn from co-occurrence information in large unlabelled text corpora.
As a result, words that are related in meaning end up having similar embeddings.
This has led to improvements in a wide range of NLP tasks <cit.>.
However, limited efforts have been made to generate such semantic representations for spoken words.
While acoustic word embedding (AWE) models <cit.> map variable-duration speech segments to fixed-dimensional vectors, these models do not aim to capture meaning.
The goal is rather to map different realisations of the same word to similar embeddings, i.e. the embedding space encodes phonetic rather than semantic similarity.
Several unsupervised AWE modelling techniques have been explored <cit.>.
Recently, multilingual AWE models have been introduced as an alternative <cit.>: a single AWE model is trained on labelled data from multiple well-resourced languages and then applied to an unseen target low-resource language.
While phonetic AWEs have proven useful in several downstream applications <cit.>, there are also many cases where semantics would be beneficial.
In semantic AWE modelling the goal would be to map speech segments to vector representations that not only capture whether two segments are instances of the same word, but also the semantic relationship between words.
E.g., we want an AWE space where different instances of “red” are close to each other, but also close to instances of “blue”, while all these embeddings remain far from unrelated words such as “group”.
An example is given in Fig. <ref>, which visualises actual AWEs from our approach.
Learning semantic AWEs from speech is challenging due to channel variability, noise, and speaker-specific information that are not present in written text.
Some studies, therefore, use another modality as a grounding signal,
e.g. using images <cit.> or text labels <cit.> as a weak form of supervision.
Only a handful of studies have looked at learning semantic AWEs from unlabelled speech alone <cit.>.
To overcome these challenges, we propose leveraging the recent improvements in multilingual modelling for phonetic AWEs.
We specifically propose using transfer learning from a phonetic multilingual AWE model to obtain a semantic AWE model in a target language where we only have unlabelled speech.
Since the multilingual model already captures phonetics, this should simplify the semantic learning problem.
We present three approaches.
Our best approach involves using a multilingual AWE model to cluster unlabelled word segments from the target language.
For each segment, we derive a soft pseudo-word label vector based on the proximity to the cluster centroids.
Finally, we get semantic AWEs by training a Skipgram-like model on these soft vectors.
In an intrinsic word similarity task, this approach outperforms previous methods learning from scratch <cit.> and also our other multilingual transfer methods.
We also show that this method can be used downstream in an extrinsic semantic query-by-example search task.
§ PHONETIC ACOUSTIC WORD EMBEDDINGS
Most existing AWE methods map speech segments to a vector space where instances of the same word class are located near each other.
We call these phonetic AWEs, because the space should capture whether input segments are phonetically similar rather than related in meaning.
Formally, a speech segment X = ( 𝐱_1, 𝐱_2, …, 𝐱_T ) is projected to a vector 𝐳, with each 𝐱_t a speech frame.
Two phonetic AWE models have proven to be particularly effective: the correspondence autoencoder RNN (CAE-RNN) <cit.> and the ContrastiveRNN <cit.>.
CAE-RNN. This model uses an encoder RNN to map a word segment X to a latent embedding 𝐳.
The embedding 𝐳 is then given to a decoder RNN to reconstruct a target word segment X^', where X^' is a different instance of the same class as the input.
The model is optimised to minimise the reconstruction loss, ∑_t=1^T^'‖𝐱_t^' - f_t(X) ‖^2, where f_t(X) is the tth decoder output conditioned on embedding 𝐳.
During inference, the encoder generates a unique AWE 𝐳 for every new input segment.
ContrastiveRNN.
This model explicitly minimises the distance between embeddings from speech segments of the same word class while maximising the distance between words of a different class.
Formally, given speech segments X_anc and X_pos containing instances of the same word class and multiple negative examples X_neg_1,…, X_neg_N, the ContrastiveRNN produces embeddings 𝐳_anc, 𝐳_pos, 𝐳_neg_1, …, 𝐳_neg_N.
The loss is then defined as <cit.>:
J = -log(exp(sim(𝐳_anc, 𝐳_pos)/τ) / ∑_j ∈{pos, neg_1, …, neg_N}exp(sim(𝐳_anc, 𝐳_j)/τ))
where sim(·) denotes cosine similarity and τ is a temperature parameter, tuned on development data.
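As a concrete illustration, here is a minimal PyTorch sketch of this loss, assuming pre-computed embeddings; the tensor names and the default τ are illustrative, not from the original implementation.

import torch
import torch.nn.functional as F

def contrastive_loss(z_anc, z_pos, z_negs, tau=0.1):
    """z_anc, z_pos: (D,) anchor and positive AWEs; z_negs: (N, D) negatives."""
    candidates = torch.cat([z_pos.unsqueeze(0), z_negs], dim=0)   # (N+1, D)
    sims = F.cosine_similarity(z_anc.unsqueeze(0), candidates, dim=1) / tau
    # The positive sits at index 0, so the loss is cross-entropy over sims.
    return -F.log_softmax(sims, dim=0)[0]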
From this point onwards we add the subscript p to the AWEs described in this section, i.e. 𝐳_p indicates that the embedding preserves phonetic information related to word class only.
Previous studies showed the advantage of using these models in a multilingual transfer setup where a single AWE model is trained on labelled data from multiple well-resourced languages before transferring and applying it to an unseen target language <cit.>.
This allows for AWEs to be obtained even in languages for which we do not have any labelled data.
§ SEMANTIC AWES (TRAINED FROM SCRATCH)
Two approaches have been proposed that adapt the framework above to obtain semantic embeddings 𝐳_s, where the embeddings not only reflect phonetic similarity but also capture meaning.
In both cases <cit.>, the
problem is simplified by assuming that we know where words start and end
(but the word classes are still unknown), i.e. we have an unlabelled speech corpus {X^(n)}^N_n=1 of N segmented word tokens. We also make this assumption.
Speech2Vec <cit.> is a variant of the CAE-RNN
where, instead of using pairs of instances of the same word class, the positive pairs are now context word pairs (X_trg, X_ctx).
X_trg is a target centre word segment while X_ctx is a context word appearing somewhere in a window around the centre.
These context pairs are constructed without word labels by only considering the relative position of words within an utterance.
Speech2Vec was inspired by the Skipgram model for text data <cit.>, where an input word is fed to a log-linear classifier that predicts words within a context window.
By using the CAE-RNN reconstruction loss, Speech2Vec similarly tries to reconstruct a word segment that appears near its input.
Ideally the resulting embeddings 𝐳_s should therefore be similar for words that co-occur in the speech data.
There have recently been concerns about the original Speech2Vec implementation <cit.>, and we therefore use our own version here (but still refer to it as Speech2Vec).
By similarly modifying the model presented in Sec. <ref>, a semantic ContrastiveRNN can be trained on target (anchor), context (positive), and out-of-context word segments (negatives) to learn semantic embeddings using the loss in (<ref>).
This approach is similar to <cit.>, where they include a trainable network to remove speaker information.
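Both of these models consume the same context pairs. To make the pairing concrete, a minimal sketch is given below, assuming each utterance is an ordered list of word segments and using a symmetric window of three (as adopted later in the experimental setup); names are illustrative.

def context_pairs(utterances, window=3):
    """utterances: list of utterances, each an ordered list of word segments."""
    pairs = []
    for utt in utterances:
        for i, target in enumerate(utt):
            lo, hi = max(0, i - window), min(len(utt), i + window + 1)
            pairs.extend((target, utt[j]) for j in range(lo, hi) if j != i)
    return pairs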
In both these methods, a semantic AWE model is trained from scratch, therefore requiring the models to learn to capture phonetic and semantic similarity simultaneously.
§ OUR APPROACH: USING MULTILINGUAL TRANSFER FOR SEMANTIC AWES
Our new proposal is to utilise a pre-trained multilingual AWE model (end of Sec. <ref>) to assist semantic AWE modelling.
Three specific strategies are proposed.
ContrastiveRNN with multilingual initialisation.
Instead of training semantic models from scratch (<ref>), we can warm-start them using the learned weights of a pre-trained multilingual AWE model.
In our experiments, we use the learned weights of a multilingual AWE model's encoder to initialise the encoder RNN of the ContrastiveRNN.
The model is then updated on context pairs from the target language using (<ref>).
Projecting multilingual AWEs. Alternatively, we can project an existing phonetic AWE space to a new semantic AWE space.
First, we apply the multilingual model to the unlabelled speech segments {X^(n)} to get a set of phonetic AWEs {𝐳^(n)_p}.
Then we train a projection network that maps the phonetic AWEs to semantic embeddings {𝐳^(n)_s}.
The projection network is trained using the contrastive loss (<ref>), optimising the distances between the output embeddings 𝐳_s.
Cluster+Skipgram.
This approach is based on the Skipgram Word2Vec model <cit.>.
Instead of using a fixed dictionary of discrete word class labels to construct input and output vectors to train a Skipgram model on text, we use the phonetic similarities in the original AWE space to derive a soft pseudo-word label for each speech segment.
This is illustrated in Fig. <ref>.
In more detail, a multilingual AWE model is applied to the segmented speech corpus {X^(n)}, producing a set of phonetic AWEs {𝐳_p^(n)}.
Next we apply K-means clustering to the phonetic embedding space, producing a set of centroids {𝐜_k}^K_k=1.
The idea is that these clusters should resemble distinct word classes.
We then calculate a soft vector label of an AWE belonging to each cluster:
v_k^(n) = exp(-sim(𝐳_p^(n), 𝐜_k)/σ^2) / ∑_j=1^K exp(-sim(𝐳_p^(n), 𝐜_j)/σ^2)
where sim(·) denotes cosine similarity and σ is a hyperparameter controlling the influence of distant centroids.
Each segment is represented by a unique vector 𝐯^(n), with segments from the same word class ideally having similar representations.
This is different from Word2Vec, where a single one-hot vector represents a unique word class.
Finally, a linear classifier model is trained with these continuous vectors as input and target outputs using the negative log-likelihood loss (as in the original Skipgram).
We also experimented with hard clustering, but this gave very poor performance on development data.
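A minimal sketch of the soft labelling step is given below, assuming scikit-learn and NumPy, with the negative sign inside the exponent following the equation above as written; the variable names and the numerical-stability shift are our own.

import numpy as np
from sklearn.cluster import KMeans

def soft_pseudo_labels(z_p, K=5000, sigma=0.01):
    """z_p: (N, D) phonetic AWEs from the multilingual model."""
    centroids = KMeans(n_clusters=K, n_init=10).fit(z_p).cluster_centers_
    zn = z_p / np.linalg.norm(z_p, axis=1, keepdims=True)
    cn = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    logits = -(zn @ cn.T) / sigma**2              # negative sign as in the equation
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    v = np.exp(logits)
    return v / v.sum(axis=1, keepdims=True)       # (N, K) soft label vectors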
§ EXPERIMENTAL SETUP
Data.
We perform experiments using the Flickr8k Audio Captions Corpus (FACC) <cit.>.
This corpus contains 40k spoken captions in English describing the content of a Flickr image <cit.>.
This is useful for measuring semantics: the images come from a fairly narrow domain, and the semantic concepts, therefore, reoccur in different utterances.
We do not use the images during training: the spoken captions are treated as our unlabelled target speech corpus.
We use the default train, development, and test splits containing 30k, 5k, and 5k spoken utterances, respectively.
Speech audio is parametrised as 13-dimensional static mel-frequency cepstral coefficients (MFCCs).
We also perform experiments using self-supervised speech features:
we use the 12th transformer layer of the multilingual XLSR model <cit.> to get 1024-dimensional features.
Previous work has shown that self-supervised speech features (obtained in an unsupervised way) can be useful as the frame-level input to AWE models <cit.>.
Utterances are normalised per speaker and segmented using true word boundaries from forced alignments <cit.>.
We use these word segments to construct context word pairs as described in Sec. <ref>.
For all the semantic models, we use a context window of three words before and after a centre word.
Semantic models trained from scratch (<ref>).
The encoder and decoder of our Speech2Vec implementation each consist of three unidirectional RNNs with 400-dimensional hidden vectors and an embedding size of 100.
The model is trained on roughly two million word pairs occurring in the same context window in our training data.
The semantic ContrastiveRNN uses the same encoder structure.
It is also trained on the same context pairs together with out-of-context word segments serving as negatives; for each positive, we sample 20 negatives.
Semantic models using multilingual transfer (<ref>).
We train a CAE-RNN multilingual AWE model <cit.> on five different Common Voice <cit.> languages, including Italian, Dutch, Russian, and Czech.
We pool the data from all languages and extract 300k training pairs.
The CAE-RNN model structure is the same as that of our Speech2Vec model.
This multilingual CAE-RNN is used to initialise a ContrastiveRNN;
we freeze the weights of the first two encoder layers while training on context pairs.
For the projection network, we use a feed-forward network of two linear layers with an inner dimension of 1024 and input and output dimensions of 100.
Again we sample 20 negatives for each positive and train the network to optimise the contrastive loss (<ref>).
For the Cluster+Skipgram approach, we use the multilingual CAE-RNN to obtain phonetic embeddings.
For K-means clustering, we use K = 5000 clusters and set σ = 0.01 in (<ref>).
We use the same linear network as the Skipgram model <cit.>, with a word embedding size of 100 and optimise the network with the negative log-likelihood loss.
Intrinsic evaluation. We evaluate the quality of semantic embeddings by measuring similarity scores between isolated word pairs.
We compare these scores to word similarity scores of textual word embeddings generated by an off-the-shelf Skipgram model trained on the transcribed utterances.
Spearman's ρ is used to quantify the similarity between the two sets of word-pair similarities <cit.>.
To obtain a single semantic embedding for each word class, we calculate the average of all AWEs from the same class and report ρ_avg.
Given that we are particularly interested in obtaining semantic embeddings for individual word segments, single-sample performance is also measured by randomly selecting one instance of each spoken word.
This is repeated ten times and averaged to get a single score ρ_single.
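A minimal sketch of this evaluation, assuming SciPy and two aligned lists of word-pair similarity scores (one from the AWE space, one from the text Skipgram reference):

from scipy.stats import spearmanr

def word_similarity_rho(awe_scores, text_scores):
    """Each input holds one similarity score per word pair, in the same order."""
    rho, _ = spearmanr(awe_scores, text_scores)
    return rho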
Extrinsic evaluation.
We use the setup as <cit.> to evaluate downstream semantic query-by-example (QbE) search performance.
Semantic labels for 1000 test utterances from FACC were collected from human annotators, using a set of 67 keyword classes.
Specifically, each of the 1000 utterances was labelled by five annotators,
indicating whether a particular keyword is semantically relevant to that utterance (regardless of whether the word instance appears verbatim in the utterance).
We use the majority decision to assign a hard label for whether a query keyword is relevant to an utterance.
Using these hard labels, we calculate semantic P@10, P@N, EER, and Spearman's ρ.
Here, ρ measures the correlation between a system's ranking and the actual number of annotators who deemed a query keyword relevant to an utterance.
To simplify the QbE task, we still assume that ground truth word boundaries are known: a query AWE is therefore compared to AWEs for the word segments in an unlabelled search utterance.
§ RESULTS
§.§ Intrinsic Evaluation: Semantic AWEs
Table <ref> presents the intrinsic scores of embeddings from the semantic AWE models, trained either from scratch (top section) or using multilingual transfer (bottom).
The benefit of multilingual transfer is evident in the scores of the projection and Cluster+Skipgram approaches, with the latter outperforming all other models regardless of the input features used or whether single or averaged embeddings are evaluated.
The single-sample performance ρ_single is particularly
significant as it shows that individual representations can be compared accurately—a useful property for downstream applications such as semantic QbE (<ref>).
The ContrastiveRNN is the one exception that does not show a clear gain from initialising with multilingual weights compared to training from scratch.
As a sanity check, we evaluate the phonetic multilingual AWEs before semantic training (i.e. the foundation model used for transfer in the bottom section), obtaining a ρ_single = 0.59% and ρ_avg = -0.13%.
As expected, this indicates that phonetic multilingual AWEs do not capture semantic information.
The table also shows the benefit of using self-supervised speech representations as input to AWEs instead of conventional features, as also found in previous work <cit.>; we use XLSR features from this point onwards.
Fig. <ref> visualises the semantic embedding space of the Cluster+Skipgram model.
It is clear that the acoustic realisations of semantically related words end up in similar areas of the space. E.g., the model learned that spoken instances of “orange”, “red”, “blue”, “yellow”, and “green” should be close to each other.
§.§ Extrinisic Evaluation: Semantic QbE
Table <ref> compares the Cluster+Skipgram (semantic) and multilingual AWE (phonetic) models when used in a downstream QbE system.
We evaluate both exact and semantic QbE, where the latter gets awarded for retrieving exact query matches as well as utterances labelled as semantically related to the search query.
To situate results, we use a random baseline model that assigns a random relevance score to each utterance. (The relatively high scores of the random approach are due to the narrow domain of the evaluation data, Sec. <ref>.)
Looking at the EER and Spearman's ρ for semantic QbE, we see that the Cluster+Skipgram model achieves the highest score, outperforming the purely phonetic AWEs from the multilingual AWE model.
The phonetic multilingual AWE model outperforms the semantic model in P@10 and P@N because of its proficiency in detecting exact matches (which are also correct semantic matches).
To get a better sense of the ability of a model to retrieve non-verbatim semantic matches, we construct a difficult artificial semantic QbE task where we mask out all exact occurrences of the query word class in the search collection.
The results are shown in Table <ref>.
Now we see a clear benefit in using the Cluster+Skipgram model, with the phonetic multilingual AWE model becoming close to random search.
Our core goal was semantic QbE, but it is worth briefly touching on the exact QbE performance of the Cluster+Skipgram model in Table <ref>. Although trained for semantics, this model still achieves reasonable exact retrieval performance, with only a drop of between 5% and 10% in scores compared to the multilingual AWE model. It is therefore clear that this semantic model is able to retain phonetic properties while also capturing semantic information related to context.
§ CONCLUSION
We presented several semantic AWE modelling strategies.
We specifically promoted transferring knowledge from a pre-trained multilingual AWE model trained for word-class discrimination.
Our best semantic AWE approach involves a soft clustering on the original multilingual AWEs, serving as input to a Skipgram-like model.
Through intrinsic and extrinsic evaluations, we demonstrated the effectiveness of our strategies in learning semantic representations from unlabelled speech data.
The main shortcoming of our work (as also in others <cit.>) is that the word segmentation is assumed to be known.
This was reasonable given our goal of comparing different semantic AWE approaches on a sensible benchmark, but future work should look into incorporating unsupervised word segmentation methods <cit.> in order to do fully unsupervised semantic AWE modelling.
|
http://arxiv.org/abs/2307.00364v1
|
20230701152447
|
The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations
|
[
"Vinitra Swamy",
"Jibril Frej",
"Tanja Käser"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"cs.CY",
"cs.HC"
] |
The future of human-centric eXplainable Artificial Intelligence (XAI) is not post-hoc explanations
Vinitra Swamy vinitra.swamy@epfl.ch
Jibril Frej jibril.frej@epfl.ch
Tanja Käser tanja.kaeser@epfl.ch
EPFL, Switzerland
Explainable Artificial Intelligence (XAI) plays a crucial role in enabling human understanding and trust in deep learning systems, often defined as determining which features are most important to a model's prediction. As models get larger, more ubiquitous, and pervasive in aspects of daily life, explainability is necessary to avoid or minimize adverse effects of model mistakes. Unfortunately, current approaches in human-centric XAI (e.g. predictive tasks in healthcare, education, or personalized ads) tend to rely on a single explainer. This is a particularly concerning trend when considering that recent work has identified systematic disagreement in explainability methods when applied to the same points and underlying black-box models. In this paper, we therefore present a call for action to address the limitations of current state-of-the-art explainers. We propose to shift from post-hoc explainability to designing interpretable neural network architectures; moving away from approximation techniques in human-centric and high impact applications. We identify five needs of human-centric XAI (real-time, accurate, actionable, human-interpretable, and consistent) and propose two schemes for interpretable-by-design neural network workflows (adaptive routing for interpretable conditional computation and diagnostic benchmarks for iterative model learning). We postulate that the future of human-centric XAI is neither in explaining black-boxes nor in reverting to traditional, interpretable models, but in neural networks that are intrinsically interpretable.
§ INTRODUCTION
The rise of neural networks is accompanied by one severe disadvantage: the lack of transparency of their decisions. Deep models are often considered black-boxes because they can produce highly accurate results at the cost of providing little insight into how they arrive at those conclusions. This disadvantage is especially relevant in human-centric domains where model decisions have large, real-world impact webb2021machine,conati2018ai.
The goal of eXplainable AI (XAI) is to circumvent this failing by either producing interpretations for black-box model decisions or making the model's decision-making process transparent. As illustrated in Figure <ref>, model explanations range from local (single point) to global granularity (entire model). Moreover, explainability can be integrated into the modeling pipeline at three stages:
* Intrinsic explainability: traditional ML models such as decision trees explicitly define the decision pathway.
* In-hoc explainability: interpreting the model gradients at inference or customizing training protocols for additional information; for example, Grad-CAM uses backpropagation to highlight important regions of an input image selvaraju2017grad.
* Post-hoc explainability: after the decision is made, an explainer is fit on top of the black-box model to interpret the results.
In human-centric domains, researchers and practitioners tend either to use traditional ML models that yield intrinsic interpretability jovanovic2016building,vultureanu2021improving or to apply a single post-hoc explainer adadi2018peeking,dovsilovic2018explainable. Unfortunately, recent research shows problematic trends with post-hoc explainers. Explanations might not be faithful to the true model rudin2019stop or might be inconsistent slack2020fooling, and the explanations of different post-hoc explainers for the same model and data point have been shown to vary considerably across methods Swamyexplainers2022,krishna2022disagreement,brughmans2023disagreement. Furthermore, evaluating the quality of the provided explanations is a challenge, since there is often no ground truth swamy2023trusting,dai2022fairness.
In this paper, we therefore present a call-to-action to address the limitations of current state-of-the-art explainability methods. Previous work rudin2019stop has made a strong argument for moving away from black-box models and for using inherent interpretability (i.e. traditional ML models) for decisions that matter. While we also propose to move away from black-box models and post-hoc explainers, we suggest exploring strategies to make deep learning approaches intrinsically interpretable, guaranteeing transparency, robustness, and trustworthiness of our current AI systems by design. We believe that human-centric domains should profit from both explainability and the recent advances in state-of-the-art machine learning methods (including large language models).
In the following, we define five needs of human-centric XAI: real-time, accurate, actionable, human interpretable, and consistent. We discuss the limitations of current XAI methods, and their inability to meet the requirements for human-centric XAI. We propose to focus on interpretation by design and present two ideas for inherently interpretable deep learning workflows. We hope this paper will serve as a call-to-action and a guideline for achieving consistency and reliability in human-centric XAI systems.
§ REQUIREMENTS FOR HUMAN-CENTRIC EXPLAINABLE AI
Neural networks have an enormous potential for impacting human life, from areas like personalized healthcare or educational tutoring to smart farming and finance. We define human-centric as any application that has a human in the loop, where a human will directly use the results of the model prediction as a basis of their decision-making process. In light of the specific challenges in these human-centric domains national2021human, we have defined five requirements that explanations should fulfill.
* Real-Time: Explanations should be provided in real-time or with minimal delay to support timely decision-making (in the scale of seconds, not tens of minutes) e.g. xu2017real.
* Accurate explanations with certainty: Explanations need to be accurate, reflecting the neural network's decision-making process, and if not, should at least be accompanied by a level of confidence marx2023but,leichtmann2023effects.
* Actionable: Explanations should provide actionable insights, empowering model deployers to take appropriate actions or make informed interventions joshi2019towards.
* Human interpretable: Explanations should be understandable to a broad audience beyond computer scientists, presenting information in a concise and decipherable manner. This often has to do with the interpretability of the input data and how the explanation is conveyed (visuals, text, etc.) hudon2021explainable,haque2023explainable.
* Consistent: Explanations should be consistent across similar instances or contexts, ensuring reliability and predictability in the decision-making process. In a time series of interactive predictions, the explanations should not drastically differ li2021algorithmic.
§ EXPLAINERS OF TODAY: STATE-OF-THE-ART AND LIMITATIONS
Research and adoption of neural network explainability has surged over the last eight years, particularly in human-centric areas. Post-hoc approaches are most commonly favored, as there is no impact on model accuracy and no additional effort required during training.
Local, instance-specific post-hoc techniques such as LIME lime and SHAP shap have been effectively utilized in a variety of models, including those predicting ICU mortality katuwal2016machine, non-invasive ventilation for ALS patients ferreira2021predictive, and credit risk creditrisk. Counterfactual explanations dice,alibi,dhurandhar2018explanations have been used in numerous tasks including document classification martens2014explaining, loan repayment pawelczyk2020learning, and image classification goyal2019counterfactual. Less research has focused on in-hoc methods. For instance, layer relevance propagation lu2020towards has been employed for student knowledge tracing. In addition, concept-activation vectors cav have proven successful in predicting student success asadi2022ripple and identifying skin conditions lucieri2020interpretability.
Each of the post-hoc XAI solutions presented above, among many others not mentioned, has weaknesses for deployment in a real-world setting. The computational time, especially with SHAP, LIME, or counterfactual generation, is in the tens of minutes; this is not fast enough for users, students, or patients to make a decision based on the explanation at the time the prediction is made (often on the scale of milliseconds). In most cases, there is no measurement of trust or confidence in a generated explanation. The actionability and human-understandability of the explanation depend on the input format. As human-centric tasks often use tabular or time-series data, the resulting explanations are often not concise, actionable, or easily interpretable beyond the scope of a data scientist's knowledge <cit.>. Recent research on explanation user design has shown that humans across healthcare, law, finance, education, and e-commerce, among others, prefer hybrid text and visual explanations haque2023explainable, a format not easily provided by current post-hoc libraries. Lastly, the consistency of the explanations is not intrinsically measured; an explanation generated for the next step in a time series could vary greatly from that of the previous step, and several explainability methods can produce vastly different explanations with different random seeds slack2020fooling.
Furthermore, explanations are difficult to evaluate. Current metrics (e.g. saliency, faithfulness, stability, fairness, measured in the recent OpenXAI metric suite agarwal2022openxai) have aimed to quantify the quality of an explanation, requiring a ground truth which is usually not available. However, to create a metric for a quality explanation, we need to de-bias what humans consider rational and optimize for what the model is actually doing. In this light, the most trustworthy metrics measure the prediction gap (e.g. PGI, PGU), removing features that the explanation considers important and observing how the prediction changes dai2022fairness. This approach, while currently the best way to evaluate explanations, is still time-consuming and imperfect, as it fails to account for cross-feature dependencies.
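A minimal sketch of such a prediction-gap check is given below; the scikit-learn-style predict_proba model, the explainer-provided feature ranking, and the zero-fill masking are all illustrative assumptions, not a reference implementation of the OpenXAI metrics.

import numpy as np

def prediction_gap(model, x, ranked_features, k=5, fill=0.0):
    """ranked_features: feature indices sorted by explainer importance."""
    x = np.asarray(x, dtype=float)
    x_masked = x.copy()
    x_masked[ranked_features[:k]] = fill      # remove the top-k "important" features
    p = model.predict_proba(x.reshape(1, -1))[0]
    p_masked = model.predict_proba(x_masked.reshape(1, -1))[0]
    return float(np.abs(p - p_masked).max())  # large gap => explanation was faithful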
Recent literature Swamyexplainers2022,krishna2022disagreement,brughmans2023disagreement has examined the results of over 50 explainability methods (e.g. LIME, SHAP, Counterfactuals, gradient-based) with diverse real-world datasets ranging from criminal justice to healthcare to education through a variety of metrics (rank agreement, Jensen-Shannon distance). These works demonstrate strong, systematic disagreement across methods. Validating explanations through human experts can also be difficult: explanations are subjective, and most can be justified. krishna2022disagreement, swamy2023trusting, and dhurandhar2018explanations have conducted user studies to examine trust in explainers, measuring data scientist and human expert preference of explanations. Results indicate that while humans generally find explanations helpful, no one method is recognized as most trustworthy. As further shown by swamy2023trusting, most explanations align with the prior beliefs of validators, and therefore can never be an unbiased solution.
We anticipate that the state-of-the-art in AI will continue to prefer large, pretrained deep models over traditional interpretable models for the foreseeable future; the capabilities and ease-of-use of neural networks outweigh any black-box drawbacks. Our goal is therefore to identify a way to use deep learning in an interpretable workflow.
§ INTRINSICALLY INTERPRETABLE DEEP LEARNING DESIGN
In human-centric applications, there is no margin for error, and it is crucial to prioritize design that is intrinsically interpretable as opposed to imperfect approximations of importance. In this section, we present two ideas towards intrinsically interpretable deep learning workflows. The first, interpretable conditional computation, is interpretable at both the local (single point) and global (entire model) level. The second approach is a global explainability approach.
§.§ InterpretCC: Interpretable Conditional Computation
InterpretCC aims to guarantee an explanation's accuracy to model behavior with 100% certainty, while maintaining performance by input point adaptivity. This approach is inspired by conditional computation in neural networks, initially introduced by bengio2013estimating to speed up neural network computation.
The simplest implementation of InterpretCC is to learn a dynamic feature mask and enforce sparsity regularization on the number of input features. For each point, the goal is to choose as minimal a feature set as possible. While it might seem that some model accuracy will be compromised by this approach, this adaptivity has potential to improve performance by reducing noise.
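A minimal PyTorch sketch of this simplest variant is given below; the architecture sizes, the sigmoid gate, and the L1 sparsity penalty are illustrative choices, not the definitive InterpretCC design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedClassifier(nn.Module):
    """Per-point feature gating: the mask itself is the explanation."""
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(n_features, n_features), nn.Sigmoid())
        self.clf = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, x):
        mask = self.gate(x)                 # soft per-point feature mask in [0, 1]
        return self.clf(x * mask), mask

def interpretcc_loss(logits, y, mask, lam=0.1):
    # Task loss plus L1 sparsity pressure so each point uses few features.
    return F.cross_entropy(logits, y) + lam * mask.abs().mean()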
This idea can be expanded to use expert sub-networks (Figure <ref>) which are dynamically conditioned on the input data to mimic a tree-like network with decision pathways. Instead of restricting the features independently, we can group features meaningfully together (either by human-selected clusterings or automated approaches). Expert sub-networks are trained only using their feature group subset and we can decide to either consider a single sub-network or several with different weightings based on a confidence threshold. We train the network to dynamically choose which route(s) to use based on the input data.
The advantages of InterpretCC are multifold: it has the potential not to compromise accuracy, it has guaranteed interpretability (as the model only uses specific features or feature groups), and it adds no cost to the traditional development workflow. Additionally, InterpretCC optimizes the interpretability-accuracy trade-off with a customizable sparsity criterion; easy-to-classify points have high interpretability, and more difficult-to-classify points do not trade accuracy for interpretability.
§.§ I2MD: Interpretable Iterative Model Diagnostics
Current deep learning performance metrics (accuracy, F1 score) paint a starkly incomplete picture of model strengths and weaknesses. I2MD seeks to address this gap by examining the differential diagnostics of iterative snapshots of model training to build a detailed understanding of model abilities. For example, language models can be interpreted by extracting knowledge graphs during various stages of training swamy2021interpreting and comparing the iterative knowledge graphs to understand which skills the model learns at what time (illustrated in Figure <ref>). Snapshots of pre-trained models (e.g., BERT, GPT-3, T5) can provide knowledge graphs that can be directly compared to one another, allowing practitioners to make an informed choice of which model's strengths best fit their downstream use case.
Another use case is in iterative self-learning, showcased recently by the DeepMind Alpha models jumper2021highly. Each model generates data to train on, improves itself with that data, and is placed in direct competition with its previous iteration. Using granular diagnostic benchmarks between each iteration of model improvement to track model ability development, the training process can be transparent to the developer. During training or fine-tuning, tailored datasets can be created to target extracted model weaknesses; this results in a more performant model earlier in the training process and closes the loop, integrating XAI results back into the modeling pipeline.
§ CONCLUSION
The evolving landscape of machine learning models, characterized by the ubiquity of large language models (LLMs), transformers, and other advanced techniques, necessitates a departure from the traditional approach of explaining black-box models. Instead, there is a growing need to incorporate interpretability as an inherent feature of the models themselves. In this work, we have discussed five needs of human-centric XAI. We have shown how the current state-of-the-art is not meeting those needs and how post-hoc explainers will always be imperfect measurements of model behavior. We have also presented two initial ideas towards intrinsic interpretable design for neural networks. As researchers, model developers, and practitioners, we must move away from imperfect, post-hoc XAI estimation and towards guaranteed interpretability with less friction and higher adoption in deep learning workflows.
|
http://arxiv.org/abs/2307.02689v1
|
20230705232105
|
Learning Symbolic Rules over Abstract Meaning Representations for Textual Reinforcement Learning
|
[
"Subhajit Chaudhury",
"Sarathkrishna Swaminathan",
"Daiki Kimura",
"Prithviraj Sen",
"Keerthiram Murugesan",
"Rosario Uceda-Sosa",
"Michiaki Tatsubori",
"Achille Fokoue",
"Pavan Kapanipathi",
"Asim Munawar",
"Alexander Gray"
] |
cs.CL
|
[
"cs.CL"
] |
Text-based reinforcement learning agents have predominantly been neural network-based models with embedding-based representations, learning uninterpretable policies that often do not generalize well to unseen games. On the other hand, neuro-symbolic methods, specifically those that leverage an intermediate formal representation, are gaining significant attention in language understanding tasks. This is because of their advantages, ranging from inherent interpretability and a lower training-data requirement to better generalization to unseen data. Therefore, in this paper, we propose a modular, NEuro-Symbolic Textual Agent (NESTA) that combines a generic semantic parser with a rule induction system to learn abstract interpretable rules as policies. Our experiments on established text-based game benchmarks show that the proposed NESTA method outperforms deep reinforcement learning-based techniques by achieving better generalization to unseen test games and learning from fewer training interactions.[Code available at <https://github.com/IBM/loa>]
§ INTRODUCTION
Text-based games (TBGs) <cit.> serve as popular sandbox environments for evaluating natural language-based reinforcement learning. The agent observes the state of the game in pure text and issues a textual command to interact with the environment. TBGs are partially observable where the full state of the world is hidden and action commands facilitate the agent to explore the unobserved parts of the environment. The reward signal from the environment is used to improve the agent's policy and make progress in the game.
Text-based games sit at the intersection of two research areas, i.e., language understanding and reinforcement learning. Existing RL agents for TBGs primarily use embeddings for observation as representations and are fed to an action scorer for predicting the next action <cit.>, ignoring the advances in language understanding. On the other hand, there has been a recent surge in neuro-symbolic techniques, particularly those that use symbolic representations, for better language understanding <cit.> through reasoning. In light of exploring such advances for text-based reinforcement learning, this work proposes a neuro-symbolic approach. Our approach, named NESTA (NEuro Symbolic Textual Agent) is a modular approach comprising a generic semantic parser in combination with a symbolic rule induction system as shown in Figure <ref>. The semantic parser translates text into the form of symbolic triples. NESTA uses Abstract Meaning Representation <cit.> as the initial parse which is then transformed into triples. This symbolic representation is used by an adaptation of the Inductive Logic Programming (ILP) system using Logical Neural Networks <cit.> for learning horn clauses as action rules.
NESTA, in comparison to other end-to-end learning approaches, has the following advantages: (a) modular language understanding using pre-trained large language models, enabling our system to leverage advances in semantic parsing; while such modular semantic parsing-based techniques have been applied to other NLP tasks such as reading comprehension <cit.>, knowledge base question answering <cit.>, and natural language inference <cit.>, this work is the first to demonstrate their application to TBGs; and (b) learning symbolic rules for model-free RL using a neuro-symbolic framework, which facilitates inherent interpretability and generalizability to unseen situations <cit.>. The rules learned by NESTA are abstract and not specific to entities in the training data. These abstract action rules in policies for TBGs enable reasoning over entities unseen during training.
Our main contributions in this work are: (1) We propose a novel and modular neuro-symbolic agent named NESTA. To the best of our knowledge, NESTA is the first to use a generic semantic parser with a rule learning system for TBGs, (2) Our empirical analysis of commonsense-aware textworld games shows that NESTA outperforms deep RL methods by a significant margin. We also show that NESTA has better sample efficiency than traditional text-based RL agents, obtaining better test performance with up to 5× fewer training interactions, and (3) Our method produces interpretable abstract rules from the rule induction system.
§ NEURO-SYMBOLIC TEXTUAL AGENT
Text-based RL agents for TBGs interact with the environment using text-only action commands and obtain feedback solely as textual observations. As the agent does not have access to global state information, it is modeled as a Partially Observable Markov Decision Process (POMDP) <cit.> represented as (𝒮, 𝒜, 𝒯, R, Ω, 𝒪), where (𝒮, 𝒜, 𝒯, R) represent a Markov Decision Process. Ω represents the finite set of all observations, and 𝒪 represents the observation function representing the conditional distribution over observations for a given action and next state. The goal of the agent is to learn optimal action probabilities at each step such that the expected future reward is maximized.
We present NEuro-Symbolic Textual Agent (NESTA), a modular approach for TBGs. Figure <ref> illustrates the overview of NESTA, which comprises three primary components:
(a) Semantic Parser, which extracts symbolic representation of the text using AMR as the generic semantic representation, (b) Rule Learner, an ILP-based rule induction module, which learns logical rules that abstract out the entities in the games, making these rules generally applicable to test games containing unseen entities, and (c) Pruner, that reduces the amount of branching factor at each step by pruning actions that do not contribute to the expected future reward. Below, we describe these components in detail.
§.§ Semantic Parser: Text to symbolic triples using AMR
The first step in NESTA is to translate the text into symbolic representation. To this end, inspired by works that address different NLP tasks <cit.>, we use an AMR parser as a generic semantic parser. The use of a generic semantic parse such as AMR allows the system to benefit from independent advances in AMR research. For example, the performance of AMR has improved in Smatch score <cit.> from 70.9 <cit.> to 86.7 <cit.> on LDC2017T10 in the last few years due to advances in large language models. The AMRs are subsequently transformed into a symbolic form using a deterministic AMR-to-triples approach.
Abstract Meaning Representation (AMR):
AMR parsing produces rooted, directed acyclic graphs from the input sentences, where each node represents concepts from propbank frames <cit.> or entities from the text. The edges represent the arguments for the semantic frames. Fig. <ref> shows the AMR graph generated from the sentence “There is a brown golf shoe and a blue moccasin on the cabinet.”. The resultant AMR graph is rooted at the propbank frame with ARG1 and ARG2 edges leading to its children. The other parts of the graph are used to describe the entities for “brown golf shoe” and “blue moccasin”. We use StructBART <cit.> for parsing a text to AMR.
AMR-to-triples:
We design an AMR-to-triples module to extract a set of symbolic facts consisting of generic domain-agnostic predicates from the AMR semantic representation. Fig. <ref> shows the extraction of facts from AMR. The AMR-to-triples module performs a set of graph operations to extract propbank nodes as the predicates and the children entities as the arguments. In the example, the two operands of the “and” node are converted into two symbolic facts sharing the same predicate, one for each of the entities “brown golf shoe” and “blue moccasin”. We convert the symbolic facts to unary predicates; for example, a predicate with two arguments is converted into single-argument facts.
These simplifications result in some loss of representational power but make the task of rule learning simpler.
We also add the commonsense predicates from conceptnet subgraph <cit.> provided by the TWC environment <cit.>.
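For illustration, a much-simplified sketch of such an AMR-to-triples pass is shown below, assuming the penman library; it only extracts :ARG edges from propbank frame nodes (identified by their numeric sense suffix) and omits the “and” expansion, entity-phrase reconstruction, and unary conversion described above.

import re
import penman

def amr_to_facts(amr_str):
    g = penman.decode(amr_str)
    concept = {v: c for v, r, c in g.triples if r == ":instance"}
    facts = []
    for src, role, tgt in g.triples:
        # Keep :ARGn edges whose source is a propbank frame (e.g., "be-01").
        if role.startswith(":ARG") and re.search(r"-\d+$", concept.get(src, "")):
            facts.append((concept[src], concept.get(tgt, tgt)))
    return facts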
§.§ Rule Learner: ILP from Rewards
In order to learn interpretable rules that can be debugged by humans, we use the symbolic representation obtained from the above step.
Such symbolic rules are learned from reward signals by interacting with the environment. For this purpose, we use Inductive Logic Programming (ILP) in an RL setting with the objective of expected future reward maximization. We use Logical Neural Networks (LNN) as the differentiable rule learning engine.
Logical Neural Networks:
LNN <cit.> proposes a differentiable rule learning framework that retains the benefits of both neural networks and symbolic learners. It introduces a logical neuron that has the core properties of gradient-based learning, similar to a standard neuron, but adds logic-aware forward functions and constrained optimization, making it suitable for logical operations. This can be illustrated with a 2-input logical conjunction (AND) neuron with (x, y) as its two logical inputs. The LNN conjunction neuron generalizes classical AND logic to real-valued logic by defining a noise threshold (α). Real values in [α, 1] and [0, 1-α] signify a logical high and a logical low, respectively. To emulate an AND neuron, LNN uses the standard truth table of the conjunction (AND) gate to obtain the following constraints,
f(x, y) ≤ 1 - α, ∀ x, y ∈ [0, 1-α]
f(x, y) ≤ 1 - α, ∀ x ∈ [0, 1-α], y ∈ [α, 1]
f(x, y) ≤ 1 - α, ∀ x ∈ [α, 1], y ∈ [0, 1-α]
f(x, y) ≥ α, ∀ x, y ∈ [α, 1].
LNN uses the forward function as the weighted Łukasiewicz t-norm, f(x, y; β, w_1, w_2) = β - w_1 ( 1 - x) - w_2 (1 - y), where β, w_1, w_2 are the bias and weights of the inputs. Given a target label, the weights and biases are tuned to learn the logical rule that best describes the data.
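A minimal sketch of this weighted Łukasiewicz conjunction as a PyTorch module is given below; the clamping to [0, 1] and the free (unconstrained) parameters are simplifications, since the full LNN additionally enforces the truth-table constraints above during optimization.

import torch
import torch.nn as nn

class LNNAnd(nn.Module):
    """Weighted Lukasiewicz conjunction: f(x; beta, w) = beta - sum_i w_i (1 - x_i)."""
    def __init__(self, n_inputs):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(1))
        self.w = nn.Parameter(torch.ones(n_inputs))

    def forward(self, x):                   # x: (batch, n_inputs), truths in [0, 1]
        out = self.beta - ((1.0 - x) * self.w).sum(dim=1)
        return out.clamp(0.0, 1.0)          # real-valued truth in [0, 1]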
ILP-based reward maximization: Our ILP rule learner is based on the LNN rule learning implementation in <cit.>. However, our rule-learning model makes significant modifications to adapt the previous algorithm for model-free policy optimization suitable for text-based RL. Consider the state transition at time step t as (o_t, a_t, r_t, o_t+1), where o_t represents the textual observation, a_t is the action command that yields the reward r_t and takes the agent to the next state with observation o_t+1. AMR-to-triples semantic parser is used to obtain the symbolic state s_t (list of symbolic facts) from o_t as shown in Figure <ref>. At each step, the agent has to choose from a set of admissible action commands which are also converted to their symbolic form. Starting from an initial random policy π, we sample trajectories τ∼π and store the transitions (s_t, a_t, r_t, s_t+1) in a buffer ℬ. We also store the admissible actions set adm_t and the discounted future reward g_t = ∑_k=t^Tγ^k-tr_k, for each step in the buffer, where γ is the discount factor.
From the buffer ℬ, we find a set of template predicates 𝒫={ p | p ∈ s_t, for s_t ∈ℬ}, where the operation p ∈ s_t states whether facts with predicate p exist in the symbolic state s_t. We also obtain a set of action predicates 𝒜={ a | a ∈adm_t, for adm_t ∈ℬ} by finding all action predicates in the admissible action sets.
We initialize ILP rule learner π_a(θ) for each action predicate a ∈𝒜. Action predicates for TBGs typically coincide with the action verbs. The LNN policy is formulated as a weighted conjunction operation over the template predicates 𝒫. The likelihood of action a for abstract lifted variables x,y is given as a conjunction template over the predicate list as follows: unary action likelihood is given as L(a(x)|s_t) = ⋀_k w_k p_k(x) and binary action likelihood is formulated as L(a(x,y)|s_t) = ⋀_k w_k p_k(x) ⋀_m w_m q_m(x,y). The predicates p_k and q_m are 1 and 2 arity predicates in 𝒫 respectively and ⋀ represents the LNN's logical conjunction operator. The weights w_k and w_m constitute the LNN parameters θ that are updated during training. At any given step, the likelihood of each action is normalized over all actions in the admissible action set to obtain the action probabilities.
For training the rule learning model π_a(θ) for a specific action a, we only extract transitions from the buffer containing the action a and store them in a sub-buffer ℬ_a. The model is updated following the policy gradient, ∇_θ𝔼_(s_t, g_t) ∼ℬ_a[logπ_a(a_t=a|s_t) g_t], where the trajectories are sampled from ℬ_a. We assume that π_a gives normalized probabilities for this formulation. This training procedure therefore yields separate rules learned for each action predicate. Figure <ref> shows the learned rules for each action.
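A minimal sketch of this per-action update is given below, assuming a PyTorch policy π_a that returns the normalized probability of its action for a symbolic state; the batching and optimizer handling are illustrative.

import torch

def update_action_policy(pi_a, optimizer, batch):
    """batch: list of (s_t, g_t) pairs drawn from the sub-buffer B_a."""
    loss = 0.0
    for s_t, g_t in batch:
        prob = pi_a(s_t)                     # normalized probability of action a
        loss = loss - torch.log(prob) * g_t  # REINFORCE weighted by the return
    loss = loss / len(batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()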
Generalization under Distribution Shift:
Having learned the action rules for each action predicate using dedicated ILP models, NESTA uses the rules for obtaining the action probabilities at each step. This process consists of three steps: (a) For each action in the admissible action list, invoke the learned rule for that action predicate, (b) Assign the abstract variables with the symbolic action arguments, and (c) Match the symbolic facts using entity alignment by root noun matching (instead of an exact match). The probabilities are then obtained by the LNN conjunction node feed-forward operation based on the current weights. This procedure is also used for sampling during training.
Figure <ref> shows the reasoning steps for fixed weights after training is complete. Since the rules learned by NESTA abstract out the entities in the form of lifted variables, human interpretability and generalization to unseen entities is a natural advantage of our method. In addition to this, since we modularize language understanding and RL policy learning into separate modules, our LNN symbolic learner can solely focus on optimal reward performance leading to sample-efficient learning.
§.§ Pruner: Irrelevant Action Pruning by Look-Ahead
The third module in NESTA tackles the large action space problem in TBGs by removing actions from the admissible commands that do not contribute to future rewards in the games. A large number of possible actions can increase the agent's branching factor at each step during training and testing, leading to a combinatorially large search problem. We employ a look-ahead strategy to find out which actions do not contribute to future reward accumulation. For example, the action (x) returns the description of the entity x but does not change the state of the game and does not contribute to future rewards. However, for the action (x), although an immediate reward is not obtained on execution, it leads to a future reward when the object x is put in the correct container y using the (x, y) command. Therefore, action commands of type (x) can be pruned, but (x) is essential and hence cannot be pruned. This can be computed by looking ahead from the current step and comparing the future reward if that particular action were removed from the trajectory.
Due to AMR error propagation and undesirable credit assignment (for example, (x) command issued just before a rewarded action), the rule learner can assign high action probabilities to non-contributing actions. Therefore, the pruner module is desirable to remove such action predicates (a) by evaluating the total reward in action trajectories with and without the particular action predicate a. More specifically, for each episodic trajectory, we remove the action predicates a and re-evaluate the episodic reward obtained from the environment. If the average episodic reward in both cases, with and without removal of a is the same then the action predicate is not contributing to the future reward. Therefore, it can be removed from the original action predicate set, 𝒜 to obtain the pruned action set 𝒜_pruned for which LNN models are learned.
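A minimal sketch of this pruning loop is given below; the replay hook, which re-runs a trimmed action sequence in the environment and returns its episodic reward, is a hypothetical interface standing in for the look-ahead re-evaluation described above.

def prune_action_predicates(episodes, predicates, replay):
    """episodes: recorded action sequences; each action has a .predicate field.
    replay: hypothetical hook that re-runs an action sequence in the
    environment and returns the total episodic reward."""
    kept = []
    for p in predicates:
        # Prune p only if dropping all of its actions never changes the reward.
        changed = any(
            replay([a for a in ep if a.predicate != p]) != replay(ep)
            for ep in episodes
        )
        if changed:
            kept.append(p)
    return kept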
§ OUTLIER REJECTION IN POLICY TRAINING
The training samples that NESTA collects from interacting with the environment can be noisy, and this can affect learning a good policy. There exist two sources of noise: (a) AMR noise, where AMR incorrectly parses the surface text, resulting in erroneous entity extraction or relationships between entities, and (b) RL credit assignment noise, where the discounted reward credits a suboptimal action taken right before a correct action. Although symbolic reasoners have the advantages of learning from fewer data and better generalization, they are not robust to noise. We mitigate the effect of noise in LNN policy training by using a consensus-based noise rejection method.
Our noise rejection method trains the LNN policy on multiple subsets of training data and selects the model with the smallest training error as the best model. The multiple subsets of training data are prepared as follows: for each training subset, a particular predicate p from the predicate list 𝒫 is given priority. We only choose state transitions that contain the predicate p, ensuring that this predicate will be part of the final learned rule and thus eliminating the source of AMR noise for this predicate (such a subset is rejected if the number of such transitions is less than some threshold percentage). Subsequently, the resulting transitions are sorted by the discounted reward g_t and we only retain the top k% of this sorted data as training data. This encourages action transitions with higher immediate average reward gains to constitute the training data.
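The subset construction could look as follows; the `top_k` fraction and `min_count` threshold are placeholder hyperparameters, not values reported in the paper:

```python
def build_training_subsets(transitions, predicates, top_k=0.2, min_count=50):
    subsets = {}
    for p in predicates:
        # Prioritize p: keep only transitions whose symbolic state mentions p,
        # so p is guaranteed to be available for the learned rule body.
        chosen = [t for t in transitions if p in t.state_predicates]
        if len(chosen) < min_count:
            continue  # too little support: reject this candidate subset
        # Sort by discounted return g_t and keep the top k%, biasing the data
        # towards transitions with high immediate reward gains.
        chosen.sort(key=lambda t: t.discounted_return, reverse=True)
        subsets[p] = chosen[: max(1, int(top_k * len(chosen)))]
    return subsets  # train one LNN policy per subset; keep the lowest-error one
```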
§ EXPERIMENTAL RESULTS
Our experiments are designed to answer the following questions, which analyze whether NESTA can overcome the common drawbacks of deep RL methods: (i) Can NESTA enable better generalization in test environments? (ii) Does NESTA improve upon sample efficiency while still maintaining good reward performance? (iii) Are the rules learned by NESTA human-interpretable? For comparing the performance of various methods, we use the metrics of normalized score (total reward from the games normalized by maximum reward) and number of steps to reach the goal (lower is better). Our experiments were conducted on the Ubuntu 18.04 operating system with NVidia V100 GPUs.
§.§ Environment
We use the textworld commonsense (TWC) environment <cit.> for empirical evaluation of our method. The goal here is to clean up a messy room by placing the objects in the correct containers. The game provides conceptnet sub-graphs relating the game entities which are used as commonsense graphs. TWC provides two splits of testing games: (i) in-distribution games that have the same entities as training games but unseen object-container configuration, and (ii) out-of-distribution games that use new objects not seen during training. This provides a systematic framework for measuring generalization in NESTA and other baseline agents for both within-training distribution and out-of-training distributions. Since we are focusing on generalization aspects, we do not use other textworld games <cit.> because these environments primarily focus on the agent's exploration strategies and are therefore not suitable to evaluate the agent's generalization ability.
§.§ Agents
For baseline agents, we report performance by these deep RL-based methods: (1) Text-based agent that uses a GRU network for observation representation and action scorer units, (2) TWCAgent (Text + CS) that uses combined textual and commonsense embeddings for action scoring, (3) KG-A2C <cit.> that uses extracted knowledge graphs as input, (4) BiKE <cit.> which leverages graph structures in both textual and commonsense information and (5) CBR <cit.> which is the SOTA method using case-based reasoning for improving generalization in text-based agents. We did not compare with previous neuro-symbolic methods <cit.> because they use a hand-crafted game-specific predicate design scheme that was not available for TWC.
§.§ Generalization to Test Games
We evaluate the generalization ability of NESTA on TWC easy, medium and hard games.
Table <ref> and Table <ref> show the performance of the baseline agents and our agent on in-distribution and out-of-distribution games, including the human performance from <cit.>. For the baseline models, we report scores from <cit.>. For NESTA, we report the mean of 5 independent runs.
For easy games, NESTA gets a perfect score, outperforming previous agents with a number of steps similar to human performance. For medium and hard games, NESTA greatly surpasses the SOTA agent and needs fewer steps for both in-distribution and out-of-distribution games. For medium out-of-distribution games, NESTA outperforms humans in terms of the number of steps. This might be due to the fact that during human annotation, the subjects would take a larger number of steps for the initial few games due to trial-and-error, thus increasing the average number of steps.
While easy and medium games have a single-room setting, hard games present a two-room setting where the agent might need to pick up an object in room 1 and put it in a container in room 2. This requires learning a complex strategy, especially for generalizing to unseen entities. NESTA scores significantly higher than the SOTA on hard games, exhibiting the ability of our method to generalize in complex settings, while deep RL methods fail to generalize due to overfitting the training data. Furthermore, our outlier rejection module helps improve the number of steps to reach the goal for both in-distribution and out-of-distribution games.
§.§ Ablation Results with Action Pruning
To study the effect of our action pruning module on deep RL agents, we implemented action pruning on the publicly available TWCAgent code from <cit.>. We follow the exact same methodology for TWCAgent that we used for the NESTA agent. Using the look-ahead method, we obtain 𝒜_retain, the list of action verbs to retain at a specific episode (episode number 10 for this result). For all subsequent training steps, only action verbs a ∈𝒜_retain are retained from the admissible list. We also follow the same strategy for the test games.
Table <ref> shows the results of action pruning for both TWCAgent and NESTA. Firstly, even without action pruning, NESTA outperforms TWCAgent with action pruning. NESTA+AP shows a higher gain in performance compared to NESTA alone, whereas TWCAgent does not exhibit such large improvements. We found that even without AP, TWCAgent learns to avoid sub-optimal actions. However, it suffers from overfitting and hence cannot generalize to unseen configurations and entities.
§.§ Human-in-the-loop Rule Debugging
NESTA enables the user to verify all the learned rules. It provides the facility to add new rules that might be missing or edit the rules if they are sub-optimal.
The ability to perform human-in-the-loop debugging is what sets NESTA apart from other methods that tend to provide only some level of explainability. Table <ref> shows the human-interpretable learned rules for a particular training run on hard games. The rule for take(x,y) can be identified as sub-optimal because it implies that the agent should take any object present in any container y in the current room. The human-corrected rule implies the agent should only “take” objects that are not in their assigned location according to conceptnet facts. The human-corrected rule perfectly solves the out-of-distribution hard games in close to the optimal number of steps. This demonstrates that NESTA's human-in-the-loop rule debugging feature can be readily used to achieve favorable performance gains.
§.§ Sample efficient learning
We hypothesize that deep RL policies require a large number of training interactions because they learn both language understanding and action scoring from rewards, ignoring external language pre-training. NESTA, on the other hand, decouples language understanding into AMR-based semantic representations, while the LNN-ILP rule learner can focus on RL policy optimization, resulting in learning from fewer samples. Figure <ref> shows that the NESTA model obtains better scores for both in-distribution and out-of-distribution games with far fewer training interactions compared to the deep RL text agent. In fact, NESTA can outperform text agents even when it learns from 5× fewer training interactions.
We also compared the average per-step computation time of NESTA with that of neural agents on out-of-distribution games. For easy games, the average computation time for neural agents was 0.12±0.06 s, and that for NESTA was 0.16±0.05 s. The corresponding numbers for medium games were 0.17±0.06 s and 0.22±0.06 s, respectively. NESTA requires extra time due to parsing. However, since it requires a lower overall number of steps (almost 5 times lower for easy/medium games, from Table <ref>), the time per game is lower or comparable.
§ RELATED WORK
Text-only Agents:
Early work on text-based reinforcement learning agents used an LSTM-based representation learning from textual observations <cit.>, and Q-learning <cit.> in the action scorer of LSTM-DQN to assign probability scores to the possible actions.
<cit.> used LSTM units in the action scorer of LSTM-DRQN to achieve better generalization.
<cit.> further improved generalization and reduced overfitting by training a bootstrapped model, named CREST, on context-relevant observation text.
<cit.> presented one of the winning strategies in the First-TextWorld Competition using the actor-critic algorithm <cit.> for training the policy. Unlike these text-only models, NESTA uses symbolic reasoning over the lifted rules for better generalization and interpretability.
Graph-based Agents:
Instead of relying on neural models to capture the structure of the observed text, recent works considered graph representations of the observed text to guide the agent towards better exploration. Graph-based agents from <cit.> build a knowledge graph representation of the textual state for efficient exploration and handling of large action spaces. <cit.> learns a dynamic belief graph from raw observations using adversarial learning on the First Textworld Problems (FTWP). <cit.> proposed a case-based reasoning approach that improves upon existing graph-based methods by reusing past positive experiences stored in the agent's memory. Unlike NESTA, these graph-based methods suffer from noise in the observation, as the graphs are generated from the observed text.
Reasoning-based Agents:
Both text-only and graph-based methods use only the texts observed during the game interaction. <cit.> introduced Textworld commonsense (TWC), text-based cleanup games that require commonsense reasoning-based knowledge about everyday household objects.
Recent works tried to enrich text-only agents with commonsense reasoning by exploiting readily-available external knowledge graphs <cit.> and images generated from the observed texts using pre-trained models <cit.>. These methods suffer from noisy features extracted from the external knowledge, thus hindering the learning ability of the text-based RL agents. Unlike the traditional deep RL agents, <cit.>
proposed neuro-symbolic agents for TBGs that show near-perfect performance. Related work from <cit.> uses the world model as a symbolic representation to capture the current state of the game. These approaches require hand-engineering of domain-specific symbolic state representation. On the other hand, NESTA presents a generic domain-independent symbolic logic representation with an automatic symbolic rule learner that handles large action spaces and noisy observation with ease.
In other symbolic methods, there are works <cit.> which employ deep learning for neuro-symbolic regression.
Compared to these methods, NESTA aims to improve generalization to unseen cases, whereas these methods train and test in the same setting. Additionally, neuro-symbolic regression methods have limited interaction with the environment in intermediate steps, and the reward is obtained at the terminal state. In contrast, NESTA uses the symbolic representation from intermediate steps to learn action rules from partially-observable symbolic states.
§ CONCLUSION
In this paper, we present NESTA, a neuro-symbolic policy learning method that modularizes language understanding using an AMR-based semantic parsing module and RL policy optimization using an ILP rule learner. NESTA benefits from prior advances in AMR-based generic parsers for symbolic fact extraction allowing the ILP symbolic learner to solely learn interpretable action rules. NESTA outperforms SOTA models on TBGs by showing better generalization while learning from a fewer number of training interactions. We believe our model is one of the first works combining advances in neural semantic parsing and efficient symbolic planning for text-based RL. We hope this work will encourage future research in this direction.
§ LIMITATIONS
The neuro-symbolic rule learning presented in this paper can handle most generic text-based games. Only in a few specific use cases would additional training of the AMR parser be required. Since AMR is used as the symbolic representation for text-based games, the vocabulary of the extracted triples is limited by the vocabulary of PropBank semantic roles. For applications in very specific domains where the predicates and entities do not match this pre-defined vocabulary (for example, specific financial or legal domains), the AMR semantic parsing engine needs to be retrained on such specific data before being used for rule learning. However, even in cases where the testing environment requires additional rules, NESTA allows human-in-the-loop debugging to conveniently add them, making it adaptable to generic environments.
§ ETHICS STATEMENT
Our method uses a constrained set of action samples to generate the textual actions in each step. Since this action set is generated from a controlled vocabulary of actions and entities, the produced actions cannot contain harmful content like hate speech and racial biases. Furthermore, our neuro-symbolic model produces human interpretable rules for the action policy thereby making the model transparent and easier to control. Due to these reasons, the ethical risk from this work is low.
|
http://arxiv.org/abs/2307.02200v1
|
20230705105202
|
Multi-Agent Cooperation via Unsupervised Learning of Joint Intentions
|
[
"Shanqi Liu",
"Weiwei Liu",
"Wenzhou Chen",
"Guanzhong Tian",
"Yong Liu"
] |
cs.MA
|
[
"cs.MA"
] |
Multi-Agent Cooperation via Unsupervised Learning of Joint Intentions
Shanqi Liu, Weiwei Liu, Wenzhou Chen, Guanzhong Tian, Yong Liu
August 1, 2023
============================================================================
Cooperative multi-agent reinforcement learning (MARL) has seen widespread use in addressing complex coordination tasks. While value decomposition methods in MARL have been popular, they have limitations in solving tasks with non-monotonic returns, restricting their general application. Our work highlights the significance of joint intentions in cooperation, which can overcome non-monotonic problems and increase the interpretability of the learning process.
To this end, we present a novel MARL method that leverages learnable joint intentions. Our method employs a hierarchical framework consisting of a joint intention policy and a behavior policy to formulate the optimal cooperative policy. The joint intentions are autonomously learned in a latent space through unsupervised learning, making the method adaptable to different agent configurations. Our results demonstrate significant performance improvements in both the StarCraft micromanagement benchmark and challenging MAgent domains, showcasing the effectiveness of our method in learning meaningful joint intentions.
§ INTRODUCTION
In recent years, cooperative Multi-Agent Reinforcement Learning (MARL) has gained significant attention as a prominent approach for learning effective behaviors in real-world tasks, such as swarming <cit.> and autonomous driving <cit.>. These tasks involve multiple agents working in collaboration within a shared environment.
The most prevalent paradigm for MARL is Centralized Training with Decentralized Execution (CTDE) <cit.>, which addresses practical communication constraints and handles the exponentially growing joint action space. However, CTDE methods still face challenges in real-world applications.
In particular, the popular value-based CTDE algorithm, QMIX <cit.>, may not always approximate the optimal value function, as the monotonicity constraint leads to sub-optimal value approximations, also referred to as the relative overgeneralization problem <cit.>. This results in QMIX being unable to capture value functions that are dependent on other agents' actions. Additionally, current methods tend to perform poorly in ad-hoc team play scenarios <cit.>, where the environment has varying team sizes at test time. The reason for this is that agents must assess and adapt to others' capabilities in order to exhibit optimal behavior in the ad-hoc team play MARL setting <cit.>. However, current methods tend to learn fixed policies, which is not suitable for these dynamic scenarios.
A potential solution to the relative overgeneralization issue is to incorporate joint intentions into the MARL framework <cit.>. In human cooperation, individuals often work towards a common goal without direct access to each other's actions by aligning with a shared joint intention. In MARL, if agents can coordinate their actions based on a shared joint intention, they can choose optimal cooperative actions without requiring information about each other's specific actions, as they are working towards the same purpose. For example, in a pursuit task where multiple agents must work together to attack a prey and a single attack incurs a penalty, if agents can coordinate their actions based on a joint intention such as all attacking together, they can avoid a situation where one agent is afraid of another not attacking, leading to a decision to stay to avoid incurring the penalty. Nevertheless, defining joint intentions is a challenge and manually specified joint intentions lack generalizability. Hence, this work proposes a MARL approach that utilizes unsupervised learning to derive joint intentions.
Our approach for achieving unsupervised learning of joint intentions in MARL consists of the following steps:
* Team partitioning: We partition the team of agents into smaller groups and share joint intentions within each team.
* Hierarchical framework: The approach consists of two levels, a high-level joint intention policy and a low-level behavior policy. The joint intention policy chooses the optimal joint intention from a latent space based on the local state of each team. The behavior policy, on the other hand, is used by every agent in the team to take the optimal action corresponding to the team's joint intention.
* Unsupervised learning of joint intentions: We optimize the mutual information between the joint intentions shared within each team and the changes in the local state of each team. This allows us to map different joint intentions to specific changes in the local state.
* Adaptation to dynamic agent numbers: The learned joint intentions are independent of the number of agents in the team, allowing the approach to adapt to environments with uncertain agent numbers.
* Overcoming non-stationarity: We extend the approach by utilizing mutual information to increase the expressiveness of the mixing network of the joint intention policy's value function, thereby overcoming the non-stationarity of the hierarchical framework.
Experimental results show that our approach outperforms state-of-the-art methods in both monotonic and non-monotonic scenarios in games such as StarCraft and MAgent, as well as in ad hoc team play environment settings. The visualization of joint intentions representations demonstrate the meaningfulness of the learned joint intentions and their impact on agent behavior. Additionally, the analysis of the evolution process of the learned joint intentions highlights their relevance to performance improvement and provides interpretability for the cooperation among agents.
§ RELATED WORK
Currently, CTDE methods <cit.> are the mainstream approach in MARL.
In CTDE methods, VDN <cit.> learns the joint-action Q-values by factoring them as the sum of each agent's utilities. QMIX <cit.> extends VDN to allow the joint action Q-values to be a monotonic combination of each agent's utilities that can vary depending on the state.
However, the monotonic constraints on the joint action-values introduced by QMIX and similar QMIX-class methods lead to provably poor exploration and relative overgeneralization <cit.>.
To address this problem, QPLEX and QTRAN <cit.> aim to learn value functions with complete expressiveness capacity. However, they are reported to perform poorly in practice because learning complete expressiveness is impractical in complicated MARL tasks due to the difficulty of exploration in large joint action spaces <cit.>.
Other methods introduce additional structure to facilitate the convergence of the value function to the optimum; for example, MAVEN <cit.> hybridizes value- and policy-based methods by introducing a latent space for hierarchical control, which allows MAVEN to achieve committed, temporally extended exploration.
UneVEn <cit.> learns a set of related tasks simultaneously with a linear decomposition of universal successor features to generalize to the non-monotonic tasks.
Moreover, G2ANET <cit.> models the importance of the relationships between agents with a complete graph and a two-stage attention network. However, these methods lack optimality guarantees and are insufficient to thoroughly address the relative overgeneralization problem <cit.>.
Additionally, none of these methods can estimate the value of actions while accounting for changes in other agents' actions, which is essential for cooperation in environments with non-monotonic returns. Our method learns the joint intentions shared among agents, enabling them to cooperate based on awareness of other agents' intentions and thus completely avoid the mis-coordination that leads to relative overgeneralization.
Moreover, our method can adapt to ad hoc settings and the learned joint intentions provide interpretability that other methods lack.
Additionally, compared to other methods that learn to model others' intentions or behaviors like <cit.>, our method uses the joint intention as a constraint to avoid mis-coordination instead of predicting other agents' behaviors.
Our work is also related to information theory.
Related work SAC-AWMP <cit.> has verified the effectiveness of information theory in the RL training process.
DIAYN <cit.> and DADS <cit.> use mutual information as an intrinsic reward to discover skills without an extrinsic reward function.
To optimize the mutual information, <cit.> showed that a discriminability objective is equivalent to maximizing the mutual information between the desired objectives.
In MARL, ROMA <cit.> use mutual information to learn roles for agents corresponding to the past trajectories.
<cit.> learns social influence between agents by estimating the mutual information between their actions.
Similarly, <cit.> uses an intrinsic reward to characterize and quantify the influence of one agent’s behavior on the expected returns of other agents.
However, these methods do not aim to address the relative overgeneralization problem. Our method differs from all these methods. We propose unsupervised learning of joint intentions shared within teams and learn the joint intention-based multi-agent policies to address the relative overgeneralization problem.
§ BACKGROUND
A fully cooperative multi-agent sequential decision-making task can be described as a decentralized partially observable Markov decision process (Dec-POMDP), which is defined by a set of states S describing the possible configurations of all N agents, a set of possible actions A_1, . . . , A_N, and a set of possible observations O_1, . . . , O_N. At each time step, each agent i ∈{1, ..., N} chooses an action a_i ∈ A_i, forming a joint action 𝐮∈ U. The joint action 𝐮 produces the next state by a transition function P_s:S × U → S. The next observation of each agent o ∈ O is updated by an observation function O_p:S → O. All agents share the same reward r:S × U → R and have a joint value function Q_tot = E_s_t+1: ∞ ,a_t+1: ∞ [R_t|s_t, 𝐮_𝐭], where R_t = ∑^∞_j=0γ^j r_t+j is the discounted return.
Partial observability is typically handled by using the history of actions and observations as a proxy for state, often processed by a recurrent neural network: Q_tot(τ_𝐭, 𝐮_𝐭) ≈ Q_tot(s_t, 𝐮_𝐭), where τ_i^t is (o_i^0, a_i^0, ..., o_i^t) and τ_𝐭 = {τ_i^t}_i ∈ A.
In this work, we also introduce a set of latent joint intentions Z describing diverse modes of changes of the local state; agents in the same team j share the same z_j ∈ Z, and the joint intentions can be collectively viewed as 𝐳. We use N, M, K_j to refer to the number of agents, the number of teams, and the number of agents in team j, respectively.
Moreover, non-stationarity in hierarchical reinforcement learning means that transitions collected while the low-level policy is not yet well-trained can be useless for the high-level policy, because the low-level policy is constantly changing and the collected transitions could be different under the current low-level policy.
§ METHOD
§.§ Team Partitioning
Our method is designed to build a multi-agent framework in which agents in the same team share the same joint intention so as to cooperate better. Human joint intentions are usually related to describing changes of the environment, like passing the ball to the center forward, which means the center striker and center forward work together to change the ball's position.
Therefore, the joint intention shared within the team should also describe the changes of the environment. Furthermore, since our joint intentions are shared within teams, instead of describing the changes of the global state, our joint intention describes the changes of the local state of the team itself.
However, it is difficult to determine how to divide all agents into distinct teams so as to best represent joint intentions.
To address this problem, we propose a team partitioning approach from the perspective of reward decomposition.
Theorem 1:
For all i ∈{1,...,N}, there exists a reward decomposition r̂_i that depends only on (o_i^t, u^t-_i, a_i^t), where u^t-_i denotes the joint actions of the agents that can be observed by agent i, such that
[ r_tot(s_t, 𝐮_𝐭) = ∑_i=1^N (r̂_i(o_i^t, u^t-_i, a_i^t) + (2^(N-K_i)-1) ×ϵ). ]
where r_tot is the external reward of the environment, K_i is the number of agents that can be observed by agent i, and (2^(N-K_i)-1) ×ϵ is the optimality gap, with ϵ a small value.
Proof.
First of all, as the total reward of the environment is generated by all kinds of possible interactions between agents, we have
[ r_tot(s_t, u_t) = ∑_i=1^N r_1(o_i^t,a_i^t) + ∑_i<j=2^i<j=N r_2(o_i^t, o_j^t, a_i^t, a_j^t) + ... + r_N(o_1^t,..., o_N^t, a_1^t,..., a_N^t). ]
Notably, each item r_k in Eq. (<ref>) exists only when there are interactions between the described agents. Otherwise, it should equal zero, as the reward is represented by the other items.
In particular, in decentralized execution settings, we assume that credit assignment can only learn a reward decomposition in which each agent interacts with the agents it can observe. The reason is that a decentralized policy can only take actions according to information within its observation; if the Q-value function based on the learned reward decomposition needed extra information to be calculated, we would have to introduce a communication method to transfer the necessary messages. Therefore, for any r_k in Eq. (<ref>) that represents the reward of agents which are not in each other's view, we model the item as an unlearnable noise term W(x) with expectation ϵ.
Additionally, most scenarios in practice have a smaller range of interaction than the field of view, which means that the probability of agents interacting with agents out of view is low. Therefore, ϵ is a small value.
Next, we define r̂_i(o_i^t, u_i^t-, a_i^t) and decompose it similarly as
[ r̂_i(o_i^t, u_i^t-, a_i^t) = r_1^i(o_i^t, a_i^t) + ∑_j=1,j ≠ i^K_i r_2^i(o_i^t, a_j^t, a_i^t)
+ ... + r_K_i^i(o_i^t, a_1^t,..., a_K_i^t, a_i^t). ]
We notice that for r_k^i = r_k^i(o_i^t, a_j^t,...a_k^t, a_i^t) if r_k^i is not zero, we have
[ r_k^i(o_i^t, a_j^t,...a_k^t, a_i^t) = 1/k r_k(o_i^t, o_j^t,...,o_k^t, a_j^t,...a_k^t, a_i^t). ]
The reason is that if r_k exists, it represents the cooperation achieved by the agent set (i,j, .., k; j<i<k). Moreover, as the specific joint action is taken by all agents in the set, each agent should share the cooperative reward r_k equally. Then, we can rewrite Eq. (<ref>) as
[ r̂_i(o_i^t, u_i^t-, a_i^t) = r_1(o_i^t, a_i^t) + 1/2∑_j=1,j ≠ i^K_i r_2(o_i^t,o_j^t, a_j^t, a_i^t)
+ ... + 1/K_i r_K_i(o_i^t,o_1^t,...,o_K_i^t, a_1^t,..., a_K_i^t, a_i^t). ]
We construct the following equation,
[ ∑_i=1^N (r̂_i(o_i^t, u^t-_i, a_i^t) + (2^(N-K_i)-1) ×ϵ); = ∑_i=1^N r_1(o_i^t, a_i^t) + ∑_i=1^N1/2∑_j=1,j ≠ i^K_i r_2(o_i^t,o_j^t, a_j^t, a_i^t)
+ ... + ∑_i=1^N (C_(N-K_i)^2 + C_(N-K_i)^3 +... + C_(N-K_i)^N-K_i) ×ϵ; =∑_i=1^N r_1(o_i^t, a_i^t)+ 1/2∑_i=1^N∑_j=1,j ≠ i^K_i r_2(o_i^t,o_j^t, a_j^t, a_i^t) + ∑_i=1^N∑_j=1,j ≠ i^N-K_iϵ
+ ... ]
where the noise function's expectation ϵ stands for the corresponding reward items related to agents out of the view of agent i. These items can be considered as optimality gaps.
Then, we have Eq. (<ref>) equals
[ ∑_i=1^N r_1(o_i^t, a_i^t)+ 1/2∑_i=1^N∑_j=1,j ≠ i^K_i r_2(o_i^t,o_j^t, a_j^t, a_i^t) + ∑_i=1^N∑_j=1,j ≠ i^N-K_iϵ
+ ...; = ∑_i=1^N r_1(o_i^t, a_i^t)+ 1/2∑_i=1^N∑_j=1,j ≠ i^N r_2(o_i^t,o_j^t, a_j^t, a_i^t) + ...; = ∑_i=1^N r_1(o_i^t, a_i^t)+ 1/2∑_i<j=2^i<j=N 2 × r_2(o_i^t,o_j^t, a_j^t, a_i^t) + ...; = ∑_i=1^N r_1(o_i^t, a_i^t)+ ∑_i<j=2^i<j=N r_2(o_i^t,o_j^t, a_j^t, a_i^t) + ... = r_tot(s_t, u_t). ]
So, we have
[ r_tot(s_t, u_t) = ∑_i=1^N (r̂_i(o_i^t, u^t-_i, a_i^t) + (2^(N-K_i)-1) ×ϵ). ]
Therefore, we have proved our Theorem 1.
Since r̂_i depends only on agent i and the agents in the view of agent i, we notice that if we use the joint intention to describe the change of the local observation of agent i, then the joint intention is only related to agent i and the agents in the view of agent i. This result indicates that we can use a commander-follower structure to partition agents: the commander agent i uses its observation as the local state of the team, and the team members are determined as all agents in the view field of the commander.
However, it is still not clear how to choose the commander. From Theorem 1, we can see that the optimality gap heavily depends on the number of observed agents. If the local observation of agent i covers more agents, then the gap is smaller. Following this principle, we choose the agents which minimize the optimality gap as commanders. However, the search time for finding the optimal solution is exponential, so in practice, we use a greedy algorithm to find a partition.
At each time step, we first add all agents to an unassigned set. Then, we repeatedly select the agent that can observe the most agents as a team commander and make all agents it observes the team members. We then remove all these agents from the unassigned set and repeat until no agents are left. Notably, we choose the commander randomly when multiple agents can observe the same number of agents. The algorithm is included in Algorithm <ref>.
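A minimal sketch of this greedy procedure is given below, where `visible[i]`, the set of agents observed by agent i at the current time step, is an assumed input:

```python
import random

def partition_teams(agents, visible):
    unassigned, teams = set(agents), []
    while unassigned:
        # Pick the agent observing the most unassigned agents as commander,
        # breaking ties randomly as described above.
        commander = max(unassigned,
                        key=lambda i: (len(visible[i] & unassigned),
                                       random.random()))
        members = (visible[commander] & unassigned) | {commander}
        teams.append((commander, members))
        unassigned -= members
    return teams  # every agent belongs to exactly one team
```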
This approach can guarantee that every agent belongs to a specific team at each time step and the overall optimality gap is nearly minimized.
Intuitively, the team partitioning method chooses the agents that can observe the most agents as commanders. This resembles the natural way of choosing a commander: humans usually choose the person with the most information about the team members <cit.>.
More intuitive representations of the team partition results are shown in Fig. <ref>.
§.§ Unsupervised Learning of Joint Intentions
Since the team partitioning is clear, we now discuss the joint intention shared within the team to address the relative overgeneralization problem.
The joint intention makes all agents in the team work towards the same goal. This requires the joint intention to have four properties:
1) Executable: All agents in the team must understand the joint intention and take the corresponding actions.
2) Dynamic: The joint intention should be determined based on the states of each time step's teams.
3) Diverse: Each joint intention is expected to describe one specific change of the local state.
4) Learned: The joint intention is expected to be learned autonomously from a latent space in an unsupervised manner, without relying on pre-defined knowledge.
To implement these properties, we propose a hierarchical learning framework and an unsupervised learning approach.
§.§.§ Hierarchical Learning Framework
The hierarchical policy of each agent consists of a high level joint intention policy and a low level behavior policy. The high level joint intention policy is designed to choose a joint intention z from a latent space to maximize the return of the environment. The latent space is designed to map to diverse modes of local state changes, which are different joint intentions. We consider a discrete latent joint intention space as the size of the latent joint intention space should be small, otherwise it will lose generality. Therefore, for a team j where agent i is the commander, we model the joint intention policy as π_h(z_j|o_i^t;θ) parameterized by θ, where o_i^t is the observation of agent i. We sample z_j ∼π_h(z_j|o_i^t;θ) at each time step for each team during rollout.
The low-level behavior policy, in turn, is expected to follow the joint intention given by the joint intention policy and take optimal cooperative actions. We model the behavior policy as a joint-intention-conditioned policy π_l(a_k^t|τ_k^t,z_j;μ), parameterized by μ. Every agent in team j follows the joint intention z_j and takes an action a_k that interacts with the environment.
The joint actions u_j of team j can be viewed as u_j ∼ (π_l(a_0^t|τ_0^t,z_j;μ)...π_l(a_K_j^t|τ_K_j^t,z_j;μ)), where K_j means there are K_j agents in team j. The overall hierarchical framework is shown in Fig. <ref>.
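One rollout step of the two-level policy can be sketched as follows; the `pi_h.sample` and `pi_l.act` interfaces are assumed for illustration:

```python
def hierarchical_step(teams, obs, histories, pi_h, pi_l):
    joint_action = {}
    for commander, members in teams:
        # High level: the commander samples the team intention z_j from the
        # discrete latent space given its own observation.
        z_j = pi_h.sample(obs[commander])
        for agent in members:
            # Low level: every member conditions its action on the shared z_j.
            joint_action[agent] = pi_l.act(histories[agent], z_j)
    return joint_action
```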
§.§.§ Objectives of Optimization
Introducing a latent joint intention space and a joint-intention-conditioned policy does not by itself produce these four desired properties. To address this problem, we use the information-theoretic paradigm of mutual information to learn joint intentions. Specifically, we propose to make the behavior policy maximize the mutual information between the next local state o_i^t+1 and the current joint intention z_j, conditioned on the current local state o_i^t.
[ I(o_i^t+1 ; z_j | o_i^t) =H(z_j | o_i^t)-H(z_j | o_i^t+1, o_i^t)
=H(o_i^t+1| o_i^t)-H(o_i^t+1| o_i^t, z_j) = I(z_j ; o_i^t+1| o_i^t). ]
where I is mutual information and H is entropy.
The meaning of this mutual information is how strongly z_j is related to the transition from o_i^t to o_i^t+1.
Therefore, high mutual information means the behavior policy can comprehend each joint intention's meaning and respond accordingly to make the transition described by the joint intention happen.
In other words, optimizing this mutual information maps different joint intentions in the latent space to specific changes of the local state. In this way, we obtain joint intentions that are learned autonomously, without supervision.
However, estimating and maximizing mutual information is often infeasible <cit.>. To address this problem, we introduce a variational posterior estimator which provides a lower bound for the mutual information <cit.>.
[ I(z_j ; o_i^t+1| o_i^t)
=𝔼_z_j, o_i^t, o_i^t+1[logq_ϕ(z_j | o_i^t, o_i^t+1)/p(z_j | o_i^t)] +𝔼_o_i^t, z_j[D_K L(p(z_j | o_i^t, o_i^t+1)) q_ϕ(z_j | o_i^t, o_i^t+1)))]; ≥𝔼_z_j, o_i^t, o_i^t+1[logq_ϕ(z_j | o_i^t, o_i^t+1)/p(z_j | o_i^t)]. ]
where q_ϕ(z_j | o_i^t, o_i^t+1) is the posterior estimator, parameterized by ϕ, and p is the true posterior. This lower bound can be estimated by optimizing the following loss for each team:
ℒ_I^j= 𝔼_o_i^t,o_i^t+1[D_KL[p(z_j | o_i^t) q_ϕ(z_j | o_i^t+1, o_i^t)]].
where D_KL is the KL divergence operator.
Proof.
We will derive the posterior estimator in detail and find a tractable lower bound of the mutual information here.
Firstly, we have
[ I(z_j ; o_i^t+1| o_i^t)
=𝔼_z_j, o_i^t, o_i^t+1[logp_ϕ(z_j | o_i^t, o_i^t+1)/p(z_j | o_i^t)]. ]
And we have
[ 𝔼_z_j, o_i^t, o_i^t+1[logp_ϕ(z_j | o_i^t, o_i^t+1)/p(z_j | o_i^t)]
=𝔼_z_j, o_i^t, o_i^t+1[logq_ϕ(z_j | o_i^t, o_i^t+1)/p(z_j | o_i^t)] +𝔼_o_i^t, z_j[D_K L(p(z_j | o_i^t, o_i^t+1)) q_ϕ(z_j | o_i^t, o_i^t+1)))]; ≥𝔼_z_j, o_i^t, o_i^t+1[logq_ϕ(z_j | o_i^t, o_i^t+1)/p(z_j | o_i^t)]. ]
For the last item, we have
[ 𝔼_z_j, o_i^t, o_i^t+1[logq_ϕ(z_j | o_i^t, o_i^t+1)/p(z_j | o_i^t)]
=𝔼_z_j, o_i^t, o_i^t+1[log q_ϕ(z_j | o_i^t+1, o_i^t)] +𝔼_o_i^t[H(z_j | o_i^t)]; =
𝔼_o_i^t+1, o_i^t[∫ p(z_j | o_i^t+1, o_i^t) log q_ϕ(z_j | o_i^t+1, o_i^t) d z_j] +𝔼_o_i^t[H(z_j | o_i^t)]; =
-𝔼_o_i^t+1, o_i^t[CE[p(z_j | o_i^t) q_ϕ(z_j | o_i^t+1, o_i^t)]
+ 𝔼_o_i^t[H(z_j | o_i^t)]. ]
where CE means the cross entropy. In this way, we have
[ ℒ_I^j
=
𝔼_o_i^t+1, o_i^t[CE[p(z_j | o_i^t) q_ϕ(z_j | o_i^t+1, o_i^t)-H(z_j | o_i^t)] .; = 𝔼_o_i^t+1, o_i^t[D_KL[p(z_j | o_i^t) q_ϕ(z_j | o_i^t+1, o_i^t)]]. ]
Then the lower bound can be estimated by optimizing this loss for each team. The proof is finished.
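For a discrete 16-way intention space, the per-team loss ℒ_I^j can be computed as in the following PyTorch sketch; the networks producing the two sets of logits are assumed:

```python
import torch.nn.functional as F

def intention_mi_loss(prior_logits, posterior_logits):
    # prior_logits come from the joint intention policy p(z_j | o_i^t) and
    # posterior_logits from the estimator q_phi(z_j | o_i^{t+1}, o_i^t);
    # both are assumed to have shape [batch, 16].
    log_p = F.log_softmax(prior_logits, dim=-1)
    log_q = F.log_softmax(posterior_logits, dim=-1)
    # KL(p || q_phi): minimizing it tightens the variational lower bound
    # on I(z_j ; o_i^{t+1} | o_i^t) derived above.
    return (log_p.exp() * (log_p - log_q)).sum(dim=-1).mean()
```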
Furthermore, the joint intentions should also connect with the observations of the other agents in the team. For example, the joint intention of focusing fire requires all agents in the team to approach the target enemy, which means the distance between an agent and the enemy should decrease in its observation. We utilize another mutual information term, I(o_k^t+1;z_j), to estimate this connection.
Additionally, we find that this mutual information is crucial for the cooperation in homogeneous settings.
As all agents are potential commanders, it is important that all agents in the same team have a similar understanding of the environment to produce the same joint intention. The corresponding loss is used as an auxiliary loss to accelerate the training process, which is
ℒ_A^j= 1/K_j∑_k=0^K_j𝔼_o_i^t,o_k^t+1[D_KL[p(z_j | o_i^t) q_ϕ(z_j | o_k^t+1, o_i^t)]].
However, the objectives optimized so far do not guarantee the diversity of the joint intentions, i.e., that each joint intention describes one specific change of the local state. To address this, we propose another loss that minimizes the mutual information between o_i^t+1 and z_j^-, where z_j^- denotes the unselected intentions in the latent space. Thus, the loss is
ℒ_D^j= -𝔼_o_i^t,o_i^t+1[D_KL[p(z_j^- | o_i^t) q_ϕ(z_j^- | o_i^t+1, o_i^t)]].
Finally, both the joint intention and behavior policies are built on the structure of QMIX-class methods.
The behavior policy uses the typical QMIX structure, while the joint intention policy uses a VDN-style network to handle the variable number of teams.
To compensate for the loss of expressiveness of VDN and to overcome the non-stationarity of the hierarchical framework, we propose a novel mixing method for the joint intention policy, which is discussed in the following section.
The TD-error losses of the joint intention and behavior policies are
[ ℒ_TD^l=𝔼_π_l[Q_tot^l(s_t, 𝐳_𝐭, 𝐮_t)-y_t^l]^2; y_t^l=r_t+γmax _𝐮_t+1 Q_tot^l(s_t+1, 𝐳_𝐭+1, 𝐮_t+1); ℒ_TD^h=𝔼_π_h[Q_tot^h(s_t, 𝐳_t)-y_t^h]^2; y_t^h=r_t+γmax _𝐳_t+1 Q_tot^h(s_t+1, 𝐳_t+1). ]
Combining all these objective we introduced, the final learning objective of our method is:
[ ℒ_TD^h(θ)+ℒ_TD^l(μ)+∑_j=0^j=M(ℒ_I^j(μ,ϕ)+λ_1 ℒ_A^j(μ,ϕ) +λ_2ℒ_D^j(μ,ϕ)). ]
where M is the number of teams, λ_1 and λ_2 are positive multipliers that control the rate of optimization, and θ, μ, ϕ are the network parameters.
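A minimal sketch of assembling this final objective (with the λ values of 1.0 used in our experiments) is:

```python
def total_loss(td_h, td_l, mi_terms, lambda_1=1.0, lambda_2=1.0):
    # mi_terms holds one (L_I, L_A, L_D) triple of scalar losses per team j.
    mi_total = sum(l_i + lambda_1 * l_a + lambda_2 * l_d
                   for (l_i, l_a, l_d) in mi_terms)
    return td_h + td_l + mi_total
```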
§.§ Weighted Value Decomposition
However, there is another problem: we cannot directly use the mixing network of QMIX for the high-level joint intention policy, because the mixing network of QMIX has a fixed input dimension while the number of teams changes dynamically.
Fortunately, we can find an alternative mixing method to address the value decomposition problem. Firstly, we consider reward decomposition at the team level. Let Q_j stand for the joint intention policy's value function of each team, i.e., the value function of the team commander's joint intention policy under the fixed behavior policies π_l^* of all agents in the team. Suppose that r_tot(s,𝐳)= ∑_j=0^j=M r_j(o_i^t,z_j), where r_j denotes the reward decomposition of team j based on the local state and joint intention of the team.
[ Q_tot^h(s,𝐳) =𝔼[∑_t=0^∞γ^t r(s, 𝐳) |π_h,π_l^* ]
= 𝔼[∑_t=0^∞γ^t∑_j=0^M(r_j(o_i^t,z_j)) |π_h,π_l^*]; =∑_j=0^M Q_j(s,𝐳)
≈∑_j=0^M Q_j(o_i^t,z_j). ]
VDN justifies this approximation by noting that one team's expected future return can be expected to depend more strongly on the observations and actions of the team itself than on those of other teams. However, such an approximation is oversimplified. Unlike QMIX, which proposes a hypernetwork <cit.> conditioned on the global state as a more expressive factorization, our method utilizes mutual information to estimate the importance of each value function when mixing them. Specifically, our mixing method is
[ Q_tot^h(s,𝐳)=∑_j=0^M Q_j(s,𝐳)
≈∑_j=0^M α_j Q_j(o_i^t,z_j); α_j = K_j · I(o_i^t+1 ; z_j | o_i^t)/∑_j=0^j=K_j(I(o_i^t+1 ; z_j | o_i^t)). ]
where α_j is the weight of each factor. We find that this mixing form is equivalent to assigning different gradients to different value-function learning samples.
Proof.
For the weighted value decomposition, the gradients are
[ ℒ_TD^h(s_t, 𝐳_t)
= 𝔼_π_h[Q_tot^h(s_t, 𝐳_t)-(r_t+γmax _𝐳_t+1 Q_tot^h(s_t+1, 𝐳_t+1))]^2; =[∑_j=0^M α_j Q_j(o_i^t,z_t)-(r_t+γ∑_j=0^M α_j Q_j^T(o^t+1_i,z^*_j))]^2. ]
where z^*_j denotes the optimal action and Q_j^T is the target network. Notably, we use the same α_j in the target Q-value to stabilize the training process. We can rewrite it in another form
[ ℒ_TD^h(s_t, 𝐳_t)= [∑_j=0^M (α_j Q_j(o_i^t, z_t)-r_t/M-γα_j Q_j^T(o^t+1_i,z^*_j))]^2. ]
If we have
[ y_TD = ∑_j=0^M (α_j Q_j(o_i^t, z_t)-r_t/M-γα_j Q_j^T(o^t+1_i,z^*_j)). ]
Then
[ ∂ℒ_TD^h(s_t, 𝐳_t)/∂θ= 2 · y_TD·∑_j=0^M α_j ∂ Q_j/∂θ. ]
We find that α_j plays a role similar to a learning rate and controls the magnitude of the gradient. Therefore, we have shown that our method is equivalent to decreasing the learning rate for samples with lower mutual information. The proof is finished.
Therefore, the insight behind this mixing is that a team with higher mutual information is expected to produce more meaningful joint actions, and the corresponding posterior is likely to provide a good estimate. Thus, our method assigns higher gradients to the samples of these teams.
Moreover, since the non-stationarity is caused by samples generated by a not-yet-well-trained low-level policy, and such samples have lower mutual information and thus receive a lower gradient in our method, the impact of non-stationary samples on the high-level policy is reduced. This shows that our mixing method can also reduce the non-stationarity of the hierarchical framework.
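The weighted mixing can be sketched as follows; detaching the weights reflects their role as per-sample learning rates in the gradient analysis above, and the normalization over teams follows our reading of the α_j formula:

```python
def weighted_intention_mix(q_teams, mi_teams, team_sizes, eps=1e-8):
    # q_teams: [M] tensor of per-team values Q_j(o_i^t, z_j); mi_teams: [M]
    # estimates of I(o_i^{t+1}; z_j | o_i^t); team_sizes: [M] tensor of K_j.
    alpha = (team_sizes * mi_teams / mi_teams.sum().clamp_min(eps)).detach()
    return (alpha * q_teams).sum()  # Q_tot^h(s, z) ≈ sum_j alpha_j Q_j
```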
§ EXPERIMENTS
§.§ Environments and Experimental Settings
We conduct experiments in both monotonic and non-monotonic environments to test all methods. Specifically, we adopt a challenging set of cooperative StarCraft II maps from the SMAC benchmark as monotonic environments <cit.> and several scenarios from the grid-world platform MAgent as non-monotonic environments <cit.>.
In SMAC, all maps are classified as Easy, Hard, or Super Hard. We use two hard scenarios, 3s_vs_4z and 5m_vs_6m, as well as two super hard scenarios, corridor and 6h_vs_8z, for the experiments. The optimal cooperative actions do not depend on other agents' actions in SMAC <cit.>.
In MAgent, we choose two different scenarios, pursuit and tiger. The tasks in both scenarios require all agents to take cooperative joint actions to catch the prey; otherwise, the agents receive a penalty for taking the catching action alone. Therefore, these scenarios are non-monotonic environments.
Additionally, prey in tiger can recover HP at each time step, so the agents should learn to let the prey go instead of killing it immediately in order to get a higher return. The detailed settings of these scenarios are shown in Table <ref>.
The global state in MAgent is a mini-map (10 × 10) of the global information. The opponent policies used in the experiments are random escaping policies in pursuit and tiger.
We use the performance of evaluation episodes with greedy action selection as the final performance. The performance is evaluated by the win rate in SMAC and by the episodic return divided by the number of agents in MAgent. All experiments are carried out with five random seeds.
In the experiments, we compare our method with G2ANET, QMIX, MAVEN and QTRAN. G2ANET is a communication method, while the others are decentralized methods. To ensure the comparison is fair, all methods use the same basic hyperparameters and network structures with similar parameter counts. The networks of all compared methods and our behavior policy use the same recurrent structure, consisting of a recurrent layer comprised of a GRU with a 64-dimensional hidden state, with one fully-connected layer before and two after. The network of our intention policy uses two fully-connected layers with a 64-dimensional hidden state, with one fully-connected 32-dimensional layer after. The size of the intention latent space is 16 in all experiments. Specifically, G2ANET uses hard attention, which is a recurrent layer comprised of a GRU with a 64-dimensional hidden state, with one fully-connected layer after.
As for hyperparameters, we set the discount factor to 0.99 and use the RMSprop optimizer with a learning rate of 5e-4. ϵ-greedy exploration is used, with ϵ annealed linearly from 1.0 to 0.05 over 70k steps.
The batch size is 4, and the target network is updated every 200 episodes. The length of each episode in MAgent is limited to 350 steps, while in SMAC it is unlimited. The multipliers λ_1 and λ_2, which control the optimization ratio of the different objectives, are both set to 1.0. All experiments are carried out on the same computer, equipped with an Intel i7-7700K, 32GB RAM and an NVIDIA GTX1080Ti. The system is Ubuntu 18.04 and the framework is PyTorch.
§.§ Performance
We show the results of six different scenarios from SMAC and MAgent in Fig. <ref>.
The results show that our method performs substantially better than all compared methods in all scenarios. In the monotonic environments, we notice that QMIX does not suffer from the monotonic restriction and achieves relatively good performance. However, it still fails in scenarios whose tasks require agents to act simultaneously, such as focusing fire in 6h_vs_8z. In contrast, our method can autonomously learn joint intentions to capture the interactions among agents and take the corresponding optimal cooperative actions.
Moreover, we notice that our method has a more significant advantage in non-monotonic environments, like pursuit and tiger than in monotonic environments like SMAC maps.
The reason is that tasks in non-monotonic environments require agents to take optimal actions based on each other's actions. None of the compared methods can consider others' actions, so they learn a lazy policy that never attacks in order to avoid the penalty. In contrast, our method uses the joint intention to represent the common goal, which lets all agents know that the other agents will take the cooperative actions corresponding to that goal. Therefore, cooperation can be achieved. This result indicates that our method indeed addresses the relative overgeneralization problem by following joint intentions.
However, we find that G2ANET performs less satisfactorily in most scenarios. We believe this is because G2ANET has more hyper-parameters and therefore may lack generalization capabilities.
§.§ Policy Transfer in Ad Hoc Team Play Settings
Since the learned joint intentions are independent of the dynamically changing number of agents in the team, our method is expected to learn a flexible policy to handle cooperation with different numbers of agents. Therefore, we evaluate all methods in ad hoc team play settings.
Specifically, we evaluate all methods on scenarios that have the same tasks as the training environment but a different number of agents.
We change the number of agents by a random offset between plus 2 and minus 2, i.e., 4-8 agents in pursuit, pursuit hard and tiger. The rest of the implementation of all ad hoc team play scenario settings is the same as shown in Table <ref>.
The results in Fig. <ref> show that our method exhibits no significant performance degradation in ad hoc team play settings. However, the other methods suffer from the varying number of agents, and their performance is not comparable to that in the original settings. This result indicates that the joint intentions of our method can describe the interactions among varying numbers of agents, enabling the learned policy to adapt to ad hoc team play scenarios.
§.§ Interpretability of Joint Intentions
§.§.§ Emergence of Joint Intentions
We show the training process of pursuit along with the changes in the distributions of joint intentions in Fig. <ref>. We analyze three important distributions to show the interpretability of our method. The first one is the number of times each joint intention is selected in an episode. The result shows that in the early stage of training, the selection of joint intentions is basically random. As training progresses, the meaning of the joint intentions becomes clearer, so the choices become differentiated. This result indicates that the joint intentions learned by our method are indeed meaningful.
The second distribution is the number of times each joint intention would be selected in an episode based on each agent's own observation instead of the commander's observation. The result shows that this distribution becomes more similar to the actual distribution of joint intentions as training progresses.
Such results show that the agents' knowledge of the task tends to converge during training. Since the task is fixed, the agents that are not selected as commanders but can observe enough information to recognize the surrounding state will have the same intentions as the commanders for solving the task. This also indicates that the learned intentions are meaningful; otherwise, agents would not have consistent intentions for the same task.
The last distribution is the continuity of joint intentions, measured as the number of consecutive executions of the same joint intention by the agents. The result shows that agents learn to choose and follow a single joint intention for more steps as training progresses. This continuity indicates that the learned joint intentions are meaningful and can represent long-term goals, while agents can take the corresponding responses to achieve these goals. Furthermore, we find that the continuity curve converges as the performance curve converges. This result indicates that the learning process of joint intentions is related to the performance improvement of the policy, which shows that joint intentions can promote performance improvement.
§.§.§ Representation of Learned Joint Intentions
To explain the interpretability of our method, we also present how the behaviors of agents in the 3m SMAC scenario are related to the learned joint intention representations in Fig. <ref>. We show the joint actions of all three agents and the corresponding choices of joint intentions. At the beginning of training, the joint actions are irrelevant to the joint intentions. As training progresses, the joint intentions become more meaningful, and the final joint intentions clearly represent two main behavior modes.
The results show that the circle marks mainly emerge in the area where all agents take attack actions, meaning this joint intention represents the behavior mode in which the agents choose to attack the target together. Similarly, the star marks lie mainly in the movement area and represent the joint intention of the agents choosing to move together.
§.§ Ablations
§.§.§ Ablation of Joint Intentions
To evaluate how much the performance of our policy relies on the joint intentions, we evaluate our method with empty joint intentions. We normalize all results by the performance of our method. The results are shown in Table <ref>. Every experiment is carried out over 100 episodes.
The result indicates that achieving cooperation requires following meaningful joint intentions, as the performance obtained without the learned joint intentions is close to that of a random policy.
§.§.§ Ablation of Weighted Value Decomposition
We perform several ablations on the 5m_vs_6m and corridor scenarios, considering training without the weighted value decomposition. The ablation results are shown in Fig. <ref>. The experiment shows that the weighted value decomposition improves stationarity, as the policy trained without it exhibits a significant decrease in final performance.
§ CONCLUSION
In this work, we propose a novel multi-agent reinforcement learning method that uses a hierarchical framework and learns joint intentions in an unsupervised manner to tackle the relative overgeneralization problem.
We use the proposed team partitioning method, which minimizes the optimality gap of the reward decomposition, to share joint intentions within each team. Then, we use a hierarchical framework to learn a high-level joint intention policy and a low-level behavior policy.
The joint intentions are autonomously learned in a latent space without supervision, and the behavior policy learns to take the optimal cooperative actions corresponding to the joint intentions.
Moreover, we propose a weighted value decomposition using mutual information to overcome the non-stationarity in hierarchical reinforcement learning. We also illustrate that our method can adapt to ad hoc team play situations.
The experiments show that our method achieves significant performance improvements by learning meaningful joint intentions in all domains, and we illustrate that joint intention learning is tied to the policy's performance improvement, which showcases the efficiency of the learned joint intentions.
|
http://arxiv.org/abs/2307.00922v1
|
20230703104640
|
Hybrid Geometrodynamics: A Hamiltonian description of classical gravity coupled to quantum matter
|
[
"J. L. Alonso",
"C. Bouthelier-Madre",
"J. Clemente-Gallardo",
"D. Martínez-Crespo"
] |
gr-qc
|
[
"gr-qc",
"math-ph",
"math.MP",
"quant-ph"
] |
Departamento de Física Teórica, Universidad de Zaragoza, Campus San Francisco, 50009 Zaragoza (Spain)
Instituto de Biocomputación y Física de Sistemas Complejos (BIFI), Universidad de Zaragoza, Edificio I+D, Mariano Esquillor s/n, 50018 Zaragoza (Spain)
Centro de Astropartículas y Física de Altas Energías (CAPA), Departamento de Física Teórica, Universidad de Zaragoza, Zaragoza 50009 (Spain)
We generalize the Hamiltonian picture of General Relativity coupled to classical matter, known as geometrodynamics, to the case where such matter is described by a Quantum Field Theory in Curved Spacetime, but gravity is still described by a classical metric tensor field over a spatial hypersurface and its associated momentum. Thus, in our approach there is no non-dynamical background structure, apart from the manifold of events, and the gravitational and quantum degrees of freedom have their dynamics inextricably coupled. Given the Hamiltonian nature of the framework, there is no need to search for a consistent quantum stress-energy tensor; instead we work with the generators of hypersurface deformations over the manifold of quantum states. The construction relies heavily on the differential geometry of a fibration of the set of quantum states over the set of gravitational variables, and the introduction of a notion of quantum connection. The most remarkable physical implications of the construction are norm conservation of the quantum state (even if the total dynamics are non-unitary), the clear identification of the hybrid conserved quantities, and the description of a dynamical backreaction of quantum matter on geometry and vice versa, which modifies the physical properties the gravitational field would have in the absence of backreaction.
Hybrid Geometrodynamics:
A Hamiltonian description of classical gravity coupled to quantum matter.
D. Martínez-Crespo
August 1, 2023
====================================================================================================
§ INTRODUCTION
In the pursuit of understanding the fundamental nature of our universe, the need for a holistic description unifying quantum mechanics and general relativity remains one of the greatest challenges in modern theoretical physics. Within this context, the concept of Quantum Gravity (QG) has emerged, aiming to reconcile the quantum behavior of matter with the curvature of spacetime by quantizing the geometry itself. One of the main approaches, inspired by the similarities with quantum mechanics, is Canonical Quantization of Gravity <cit.>. Such a quantization program was the main motivation for the construction of geometrodynamics; the starting point of the quantization process was precisely this Hamiltonian formulation of gravity. In fact, the central result of such canonical quantization is the Wheeler–DeWitt equation, resulting from the quantization of the constraints present in geometrodynamics. Because of this, the study of geometrodynamics and its reformulation are still of interest to the community, summarized in <cit.> from ADM's original approach to the more quantum language of Hamiltonian generators of hypersurface deformations over the space of 3-metrics and their momenta.
Nevertheless, most of Canonical QG phenomenology is far from being falsifiable in the foreseeable future, even when it can be extracted from the adynamical Wheeler–DeWitt equation. A more humble approach relies on Semiclassical Gravity (SG) theories, which treat matter as quantum states while describing spacetime using classical variables. One example is the result of performing a WKB semiclassical limit of the Wheeler–DeWitt equation <cit.> (where, even if gravity is technically still considered quantum, its dynamics follows classical paths over which one may consider quantum corrections). Another well-known approach relies on the Møller–Rosenfeld equation, G_μν=⟨Ψ|T̂_μν|Ψ⟩ (in suitable units), where the expectation value of the energy-momentum tensor of a quantum field acts as a source for Einstein's equations.
The crucial question in these approaches is how quantum matter (or in general, quantum degrees of freedom) influences and shapes the classical variables describing the geometry of spacetime. This intricate problem becomes even more complex when one realizes the absence of a unique vacuum state and the consequent ambiguity surrounding the notion of “particle" on curved spacetime. Consequently, finding a mathematically rigorous framework to account for the backreaction of quantum matter on classical spacetime has become a formidable quest, albeit with fundamental phenomenological implications <cit.>. Of course the backreaction of the matter fields on gravity affects also the propagation of the fields due to the change of the metric, which forces us to find a joint dynamical description to portray this intertwining, even without considering QG and its semiclassical limit, in the spirit of <cit.>.
In this context we situate our work. We are interested in unraveling the extent to which quantum matter fields modify classical general relativity but, given the greater similarity of the geometrodynamical language to quantum mechanics, instead of looking for a modification of Einstein's equations, we seek the appropriate description of geometrodynamics when the matter is of quantum nature and described in a Hamiltonian way. The main difference with the Møller–Rosenfeld equation from this perspective is the dynamical emergence, leaf by leaf, of the geometry together with the quantum matter from their coupled evolution along a foliation of spacetime, instead of the usual Einsteinian picture of looking at spacetime and its content as a whole, fulfilling a certain compatibility relation. The price to pay, of course, is the loss of general covariance of the framework. The associated boon is the clarity of the definitions of the conserved quantities and of their coupling to gravitational degrees of freedom; there is no need to construct a consistent quantum stress-energy tensor (as in <cit.>) in this framework. Nevertheless, as happened in classical geometrodynamics, covariance is recovered for the solution curves to the dynamics once the so-called Hamiltonian and momenta constraints are enforced.
As in the case of classical geometrodynamics, the proposal of this hybrid geometrodynamics can be seen as a preliminary step towards the quantization program, and we think it is still interesting to build the whole formalism of QFT in curved spacetime and gravity under a unified geometric language. Perhaps, given the difficult falsifiability of the full quantum picture, interesting novel phenomenology already arises at this level. In the same sense as in molecular dynamics, where Ehrenfest dynamics (describing molecules with classical nuclei and quantum electrons under a coupled Hamiltonian evolution) reproduces the full quantum dynamics with reasonable accuracy thanks to the hierarchy of masses and confinement widths between both subsystems, we like to think of hybrid geometrodynamics as an effective theory whose validity will be related to the hierarchy between the gravitational and quantum field scales.
In such a program, we will first consider classical geometrodynamics, then develop a quantum field theory over curved spacetime in a Hamiltonian fashion, and finally couple them in a suitable way to build consistent hybrid geometrodynamics. We find that, in order to describe such a quantum field theory appropriately over arbitrary dynamical spacetimes, its Hamiltonian picture must be endowed with a novel ingredient, in the form of a quantum connection, that allows us to relate unitarily inequivalent quantizations corresponding to different gravitational backgrounds. As we will show, it turns out that this ingredient, ultimately associated to the parametric family of complex structures defining a set of quantizations, is crucial for the compatibility of hybrid geometrodynamics, further justifying its introduction. Let us get into it.
These ingredients define a Cauchy problem where the initial data must be defined fulfilling certain constraints. In geometrodynamics, such constraints are derived from physical principles, namely the path-independence criterion. This criterion can be easily understood: given that the field data contained in a spatial hypersurface should be equal to the data induced by the spacetime field solutions through the foliation map, if some initial field data is evolved from an initial hypersurface through two different "paths" (sequences of intermediate hypersurfaces) until the same final spatial hypersurface is reached, the field content on that final hypersurface should be the same. To fulfill this integral criterion, the theory must satisfy its differential form, namely the commutativity of the differential generators of different deformations at each hypersurface along each path. Given that the representations of the generators of hypersurface deformations form a non-trivial Poisson algebra, they will only commute if the superhamiltonian and supermomenta are zero on shell. These are the so-called Hamiltonian and momenta constraints. See <cit.> for the canonical construction of geometrodynamics.
We can obtain an analogous result from a Lagrangian perspective. In that case, the constraints are derived from an extremal action principle where the Einstein–Hilbert Lagrangian has been written in 3+1 form and Legendre transformed, showing explicit Lagrange multipliers (the lapse N_s and shift N⃗_s functions) implementing the foliation as suitable constraints. We will take spacetime to be a globally hyperbolic Lorentzian manifold in order to define a proper Cauchy problem in this setting. Notice that, while we will stick to Einstein's General Relativity and its metric description of gravity, any alternative description of gravity should also follow this geometrodynamical framework.
On the other hand, in <cit.>, even though most of the discussion is kept general, the matter sources explicitly considered are classical fields. It is our aim to substitute such classical fields with a quantum theory, rewriting QFT in Curved Spacetime in a Hamiltonian formalism where the spacetime itself is not a given background, as is the case in <cit.>, but is dynamically coupled to the Quantum Fields. In order to do so, we first consider the Schrödinger wave functional picture for quantum fields in curved spacetime <cit.>, as it is easily adaptable to the non-covariant Hamiltonian framework as in <cit.>. Then we promote the geometric variables from a given background to be part of the kinematics, which will ultimately be part of a set of coupled hybrid equations of motion of gravitational variables and QFT states, posing a hybrid Cauchy problem.
To rigorously build QFT in curved spacetime in this Hamiltonian fashion, we secure our mathematical grounds with tools from infinite dimensional calculus and differential geometry in such a framework <cit.>. As we need to build differential tensors on our manifold of fields (classical and quantum), we will require that manifold to be defined in such a way that derivatives can be rigorously proved to exist. In this paper we will reduce the mathematical technicalities to the minimum, addressing the interested reader to the mentioned references for the technical details. Thus, the manifold of classical fields will be assumed to be the dual (see <cit.> for a careful analysis of the reasons for this choice) of some nuclear-Fréchet space 𝒩 of functions defined over a spatial hypersurface Σ (for example, differentiable functions of compact support, 𝒩=C^∞_c(Σ), when Σ is compact). This choice ensures the existence of a well-defined differential calculus on the fields phase space. For analogous reasons, quantum wave functions will be modelled using the set of Hida functions (𝒩), which will densely populate the space of quantum pure states and whose domain will be the manifold of classical fields.
Since the set of Hida functions is a nuclear-Fréchet space, it allows us to perform Fréchet differential calculus and, ultimately, differential geometry.
This set of functions will be completed under the choice of a scalar product, which is given by a Gaussian measure Dμ over the space of fields, to form a Rigged Hilbert space (𝒩)⊂ L^2(𝒩',Dμ)⊂ (𝒩)', in analogy to Quantum Mechanics. Such a measure can also be proved to exist in our setting, its properties providing us with many useful tools in QFT (see <cit.>).
Moreover, such a measure is closely related to the quantization procedure (in Corichi et al. <cit.>, given by the choice of complex structure J_C for the classical space of fields), or to what in the literature is usually referred to as the choice of vacuum state. The main difficulty arises from the fact that the quantization procedure, or the vacuum state, is time dependent, as a priori it depends on the leaf 3-metric over the spatial hypersurface, lapse and shift (as J_C does in the static or stationary cases, see <cit.>). This is a widely known fact in the literature, and leads to notions such as norm loss and quantum completeness <cit.>.
In this line, we consider that this dependence introduces a parametric family of quantizations (or vacuum states associated to different measures) and thus a family of unitarily inequivalent Hilbert spaces. Given that the dynamics along the foliation will necessarily change such geometric parameters, we need a way to relate states (and operators) within inequivalent Hilbert spaces or quantizations. To do so we will introduce, as it is done in <cit.> for time dependent quantization, a notion based on parallel transport for quantum states along curves over the space of pure geometrodynamical variables. Thus, the quantum states will become sections over a fibration with base the geometrodynamical variables, and a quantum connection will allow us to relate unitarily inequivalent Hilbert spaces. A different notion of quantum connection over a different bundle, but with certain similarities, has been introduced in <cit.>.
As noted, the Quantum Field theories under consideration will be the result of a certain quantization procedure applied to a classical field theory. Such a classical field theory will be built in a Hamiltonian formalism and, in the spirit of geometrodynamics, the generating functions of hypersurface deformations over its phase space will already fulfill Dirac's closing relations when joined with the pure geometrodynamical supermagnitudes under a total (geometry plus classical matter) Poisson bracket. This consideration will be key to show that their quantum field theoretical analogues (plus classical gravitation) also reproduce the algebra relations of hypersurface deformations. Note that the Schrödinger wave functional picture can be considered (at least at the level of the definition of its dynamics) non-perturbative and, given that we are only interested in the kinematics of the whole quantum state and its interaction with gravity, we do not yet need to consider any regularization or renormalization scheme for QFT in curved spacetime. Nevertheless, a similar analysis to the one in <cit.> could be carried out in this Hamiltonian framework, although in this case the geometry of spacetime is not known ab initio, but constructed simultaneously with the evolution of the quantum states.
On the other hand, we maintain the geometrodynamical principle of equivalence as stated in <cit.> and we consider only the quantization of those theories whose classical matter supermagnitudes depend only locally (not derivatively) on the 3-dimensional metric tensor h and do not depend on its momentum π_h, nor on the lapse or shift functions. This property is dubbed ultralocality <cit.>. It banishes non-minimally coupled field theories from this scheme, even though a similar analysis could be carried out in that case.
In conclusion, the main result of our work will be the Hamiltonian representation of hypersurface deformations for a hybrid theory of quantum field matter and classical gravitation, and the definition of the hybrid constraints that enforce path independence as in <cit.>. In order to do so, whenever the quantization procedure depends on the geometry, lapse and shift, the quantum connection must fulfill certain constraints related to the change of the quantization of the symmetry generators.
The structure of the paper is as follows.
In the second section, we will introduce the essential mathematical ingredients and notation from geometrodynamics, namely: the notation for the geometry of hypersurfaces and foliations, the closing relations of hypersurface deformations, its representation in phase space and the relations that any matter theory must fulfill as long as it is coupled in a non-derivative way with the 3-metric. For a more detailed presentation of these tools, we address the reader to <cit.>.
We continue with a third section devoted to the Hamiltonian representation of the Schrödinger wave functional picture for a leaf dependent quantization procedure, where the section nature of quantum states and the quantum connection are crucial. This construction is based on the mathematical tools developed in <cit.>.
In the fourth section we combine the ingredients from the previous sections, the classical Hamiltonian description of the 3-geometry and the Quantum Field Theory for matter, to build Hybrid Geometrodynamics. We identify the hybrid generating functions that, under a hybrid Poisson bracket, reproduce the infinitesimal generators of hypersurface deformations over the hybrid algebra of observables. Then, we impose the Hamiltonian and momenta constraints at the hybrid level and take a first look at the hybrid equations of motion. As happened in the hybrid Hamiltonian model for finite dimensional systems defined by Ehrenfest dynamics <cit.>, the resulting dynamics cannot be unitary because of the non-linearity arising from the backreaction. Nevertheless, the quantum connection provides norm conservation for the quantum states along the dynamics, even if both the states and the scalar product (or vacuum state) depend on the gravitational degrees of freedom.
Lastly, we discuss some physical implications of quantum matter being sources of gravitation constructed in this way and speculate on their phenomenology, in particular in relation with the phenomenon of quantum completeness <cit.>. In the final Section, we summarize our main results and discuss possible future research lines.
§ A BRIEF SUMMARY OF CLASSICAL GEOMETRODYNAMICS
§.§ Foliations of space-time and hypersurface deformations.
If we consider spacetime to be a globally hyperbolic Lorentzian manifold ℳ with coordinates X^μ, there exists a one-parameter family of spatial leaves, ℳ≃{Σ_s| s∈ℝ}, foliating ℳ. Each spatial leaf Σ_s is the image under the s-dependent embedding ε_s of a fixed reference hypersurface Σ, and it is characterized by a hyperplane equation τ(X^μ)=s, such that each ε_s(Σ)=Σ_s⊂ℳ is a Cauchy surface ∀ s∈ℝ. From the hyperplane equation, taking 3 embedding coordinates x^i for Σ, one derives for Σ_s the parametric hyperplane equations X^μ=X^μ(x,s). The Lorentzian metric g of ℳ induces a Riemannian 3-metric h_s=ε_s^⋆ g on each Σ_s, whose coordinates can be defined as:
h_ij(x ,s)= ^4g_μν(X^α(x ,s))∂_x^iX^μ(x ,s)∂_x^jX^ν(x,s)
Each spatial leaf can also be characterized by a time-like normal vector field n̂_s of norm 1, in terms of which we define the second fundamental form K_ij(s)=-½ℒ_n̂_sh_ij(s), where ℒ_n̂_s represents the Lie derivative with respect to the vector field n̂_s. With these elements, we define an extrinsic geometry on Σ_s describing how the 3-geometry evolves along the 4-foliation. Thus, we relate the coordinates on a 3-surface Σ_s and the coordinates on Σ_{s+δs} through a transformation in the direction n̂_s normal to the original hypersurface and a tangential one, N⃗.
In terms of this set of coordinates, the foliation is described as:
d/ds X^μ(s) = N_s(x) n^μ_s(x) + N^i_s(x) ∂_x^i X^μ(x,s)
where the expression in coordinates of the lapse function (omitting the omnipresent dependence on x) reads:
N_s := g_μν(X^α(s)) (d/ds X^μ(s)) n^ν_s ,
measuring the size of the change in the normal direction. Analogously the coordinates of the shift vector are:
N^i_s := g_μν(X^α(s)) (d/ds X^μ(s)) ∂_x^j X^ν(s) h^ji_s
which weights the change in each of the tangential directions associated to ∂_x^i.
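For orientation, these definitions are equivalent to the familiar ADM decomposition of the spacetime metric in adapted coordinates (s,x^i) (a standard identity, recalled here rather than derived):
^4g = -N_s² ds⊗ds + h_ij (dx^i + N^i ds)⊗(dx^j + N^j ds) ,
so that the lapse measures the proper-time separation between neighbouring leaves and the shift measures the tangential displacement of the spatial coordinates along the foliation.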
Both objects can be combined to define an evolution generating vector field E at each hypersurface which encodes the foliation as:
E|_Σ_s=N⃗_s+N_sn̂_s
We will define for further use the space of lapse and shift functions (N,N_i)∈ℳ_N, which is characterized by four copies (one for the lapse, three for the shifts) of a suitable functional space.
Now, considering an initial data Cauchy surface Σ_0=ε_0(Σ) embedded in ℳ and E (or lapse and shift ∀ s), one can span the whole foliation {Σ_s} equivalent to ℳ (and all tensors defined at each leaf) when considering its flow α_E^s, so that Σ_s=α_E^sΣ_0. Thus, E characterizes the foliation and relates the embeddings through α_E^s∘ε_0=ε_s.
The role of 4-diffeomorphisms in GR is played in GD by Dirac's group of deformations of a spatial hypersurface embedded in Lorentzian ST (plus constraints). For the relation between both groups see <cit.>.
Identifying an infinitesimal deformation of the hypersurface with the action of ℒ_E for any N and N^i, we see it can be composed of a normal (lapse-weighted) and a tangential (shift-weighted) deformation. The local generators of such normal deformations (D_n) and stretchings (D_t_i) can be written in terms of the local coordinates as
D_n:=n^μ(x )δδ X^μ(x )
D_t_i:=∂_x^i X^μ(x )δδ X^μ(x ) ,
These are the generators of the symmetry group implicit in the requirement that our physical magnitudes be invariant under the choice of foliation; this is the 3+1 version of the general changes of coordinates (diffeomorphism invariance) present in the full spacetime picture of gravity. They fulfill the following Lie bracket closing relations:
[D_n(x),D_n(x^')]=h^ij(x)D_t_j(x)δ_,i(x,x^')-(x↔ x^')
[D_t_i(x),D_n(x^')]=D_n(x)δ_,i(x,x^')
[D_t_i(x),D_t_j(x^')]=D_t_i(x^')δ_,j(x,x^')+D_t_j(x^')δ_,i(x,x^')
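Equivalently, smearing the generators as D(N,N⃗) := ∫_Σ dx (N D_n + N^i D_t_i), these relations take the perhaps more familiar form (a standard restatement, added for the reader's convenience; signs follow the convention adopted for δ_,i above):
[D(0,N⃗_1),D(0,N⃗_2)] = D(0,[N⃗_1,N⃗_2])
[D(0,N⃗),D(N,0)] = D(ℒ_N⃗N,0)
[D(N_1,0),D(N_2,0)] = D(0,M⃗) , M^i := h^ij(N_1∂_jN_2 - N_2∂_jN_1) ,
where the h-dependence of M⃗ is the signature of the structure functions mentioned below.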
These closing relations are the key ingredient of GD and play an essential kinematical role, prior to the choice of the action or the dynamics themselves. The next step is providing a phase space representation (for matter and geometry) of the deformations of hypersurfaces, which must fulfill these very same closing relations (c.r.). The importance of the c.r. is such that they can uniquely determine (under some minor physically sensible assumptions) the 3+1 equivalent of Einsteinian gravity (without sources) without imposing an action <cit.>.
Note that they form an algebroid, with the structure constants substituted by functions of h. We introduce the notation:
[D_A(x),D_B(x^')]=∑_Cα_A,B,C(h;x,x^')D_C(x)+∑_Cβ_A,B,C(h;x,x^')D_C(x^')
where α_0,0,j(x,x^')=h^ij(x)δ_,i(x,x^')=-β_0,0,j(x^',x), α_i,0,0(x,x^')=δ_,i(x,x^'), β_i,j,i=δ_,j(x,x^'), β_i,j,j=δ_,i(x,x^'), all being antisymmetric in the first two indices (together with the interchange of x and x^').
To provide the infinitesimal deformation of the whole hypersurface, such local generators must be integrated over it against the values at s of the lapse and shift functions. For a functional defined over the hypersurface F[X^μ(x)]:Σ→ℝ, an infinitesimal change of the functional to the neighbouring hypersurface in the foliation is defined by:
δF/δs := ∫_Σ dx (N_s D_n F + N^i_s D_t_i F) ≡ ℒ_E F .
§.§ Phase space and Hamiltonian deformations.
§.§.§ The gravitational field
Given the Hamiltonian nature of deformations of hypersurfaces, they can be lifted to the field theoretical phase space defined on the hypersurface. The objective of this section is the representation over phase space (of matter and geometrical fields) of the generators of deformations of the hypersurface through functions over phase space and the Poisson bracket.
Firstly, one must define a phase space. Summarizing <cit.>, sec. 4.2.5: the configuration space of geometrodynamics is the superspace 𝒮(Σ)=Riem(Σ)/Diff(Σ). This quotient is ill defined when Σ is a compact manifold, but those problems can be cured by restricting to a smaller class of Diff(Σ); we will not treat them here and refer to <cit.> and references therein for further detail. We can describe the theory, though, with a non-reduced phase space in which the kinematical variables are the Riemannian 3-metric h_ij∈Riem(Σ) and its associated momentum π_h∈ T^⋆(Riem(Σ))|_h, defined in terms of the extrinsic curvature and seen as a tensor density of weight 1. The geometrodynamical phase space ℳ_G contains both variables (h_ij,π^ij_h)∈ℳ_G and is constructed as a cotangent bundle:
ℳ_G=T^⋆Riem(Σ)∼Riem(Σ)×Riem^'(Σ)
locally isomorphic to the Cartesian product of the Riemannian 3-metrics and the symmetric (0,2)-tensor densities of weight 1, which we identify as distributions (denoted by Riem^'(Σ)).
In order to perform differential calculus on Riem(Σ), we need a mathematical detour to define the notion of derivative with respect to the metric. The reader interested in these mathematical aspects, although standard in infinite dimensional calculus, is provided with some notes in the supplementary material. Otherwise, one can continue reading, assuming that the functional derivatives appearing in eq. (<ref>) can be defined and that, operationally, they work as usual.
(At this level, not much needs to be defined in order to introduce these derivatives: one does not need π_h to range over all distributions, since no integration is performed; the momenta are simply the tensor densities of weight 1 defined over the same space as h, and if h has good enough properties no further subtleties arise.)
This manifold of gravitational variables ℳ_G, where each point is characterized by a pair of a 3-metric tensor field h_ij and an associated momentum distribution π^ij_h, is endowed with a Poisson tensor (which is, where appropriate, the weak inverse of the canonical symplectic form ω_g:=-dh_ij;x∧ dπ^ij;x_h) and can be associated to a Poisson bracket over the functions, given by:
{ f,g}_G=∫_Σ d^3x δ_[h_ij(x)fδ_π^ij_h(x)]g ∀ f,g∈ C^∞(ℳ_G)
At an operational level, it is enough to consider that such derivatives are local (here the δ(x,x^') is associated to Lebesgue measure) and independent from each other:
δ/δπ^ij(x)π^kl(x^')=δ^kl_ijδ(x^',x) ,
δ/δh_ij(x)h_kl(x^')=δ_kl^ijδ(x^',x)
and δ/δπ^ij(x)h_kl(x^')=δ/δh_ij(x)π^kl(x^')=0 .
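In particular, these rules immediately yield the fundamental brackets (a short check, following directly from the definitions above):
{ h_ij(x),π^kl(x^')}_G = δ_ij^kl δ(x,x^') , { h_ij(x),h_kl(x^')}_G = {π^ij(x),π^kl(x^')}_G = 0 .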
§.§.§ The matter fields. The classical fibration
This phase space can be extended to contain the matter fields and their associated momenta, for any kind of tensor field [With considerations analogous to the ones for the purely geometrodynamical variables, i.e. functions of compact support, relating the functional-distributional nature through a Gel'fand triple which allows to identify a suitable dense subset of the cotangent bundle to the matter fields which has Fréchet structure and, lastly, taking into account the dense definiteness of the differential structures <cit.>, which can be checked in the supp. material.].
We denote generically by
(ϕ_1,⋯,ϕ_n,π^1,⋯,π^n)∈ℳ_F
the set of all matter fields (either scalar or tensorial, with domain Σ) and their associated momenta (densities of weight 1, <cit.>), belonging to the matter manifold ℳ_F. For example, in the case of a scalar field, one may consider the fields in a certain nuclear space (one can check the supplementary material or see <cit.>), in order to be able to define a differentiable structure on them, ϕ∈𝒩(Σ), and thus the corresponding cotangent bundle can be written as ℳ_F:=T^⋆𝒩. As in the purely geometrodynamical case, one can define a Poisson bracket over the space of functions over this manifold, C^∞(ℳ_F), which in turn becomes a Poisson algebra which we will denote by 𝒜_F. Operationally it acts, on arbitrary functionals f,g∈ C^∞(ℳ_F), as:
{ f,g}_M:=δ_[ϕ_ixfδ_π^ix]g:=δ_ϕ_ixfδ_π^ix g -δ_ϕ_ixgδ_π^ix f .
To clarify the notation, note that the indices such as i,j,k are discrete indices whose contraction with upper and lower repeated indices is to be seen as Einstein's summation convention. On the other hand, x,y,z represent continuous indices, whose contraction represented by repeated super- and sub-indices implies integration over Σ. In this sense, an object with an upper index represents a density of weight 1, while lower index is assigned to fields over Σ representing points in the linear field manifold but also vectors on it, depending on the case. Using that notation, one must take into account that the field momentum π^ix is a density of weight 1. Therefore, to perform the Fréchet functional derivatives one must restrict (though standard, one can check the supplementary material) the momenta to belong to a certain dense subset of distributions 𝒩' which is representable under a measure, in the following sense:
{ f,g}_M=∫_Σd^3x ∂_[ϕ_i(x)f∂_π^i(x)]g .
where π^i(x) stands for the local value of the representation of the density π^i,x under the Lebesgue measure d^3x.
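For a single scalar field, for instance, these conventions reduce to the familiar canonical relations
{ϕ(x),π(x^')}_M = δ(x,x^') , {ϕ(x),ϕ(x^')}_M = {π(x),π(x^')}_M = 0 .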
Note that this implies the presence of an auxiliary Gel'fand triple with a Hilbert space mediating the duality, given by the natural scalar product constructed from the chosen volume 1-form, as is explained in the appendix for the geometric case.
Therefore, the total phase space of geometry and matter denoted as ℳ_C (where the C stands for classical) is given by the product manifold:
ℳ_C=ℳ_G×ℳ_F .
For reasons that will become clear later, we will consider also another product manifold,
ℱ_C= ℳ_N×ℳ_G×ℳ_F ,
where ℳ_N contains the lapse and shift degrees of freedom associated with the space-time foliation. ℱ_C can be considered as a trivial bundle with base manifold ℬ=ℳ_N×ℳ_G and fiber ℳ_F. We see, thus, that the base manifold contains all the geometrical information of the theory, and the fiber, all the matter information. Notice that ℳ_N, containing the information about the space-time foliation, is different from the other two pieces, as their dynamics are not associated to a particular kinematical structure, but will always be associated to the choice of foliation and thus act as time dependent parameters. Therefore, given the path independence principle, even if some tensors for the matter theory may depend on them (thus, their inclusion in the base of the fibration), we anticipate that physical magnitudes, as well as the Poisson tensors over the gravitational and matter submanifolds, will be independent of ℳ_N.
As we argued above, ℳ_C can be given a differentiable structure and considered as a representation of the cotangent bundles of the field spaces, both endowed with a canonical Poisson bracket associated with the canonical symplectic forms.
The set of infinite differentiable functions f(Φ)∈ C^∞(ℳ_C) on this phase space, with the domain variables grouped as Φ∈ℳ_C, form a Poisson algebra of classical (matter and geometrical) observables, for the total Poisson bracket
{·,·}:C^∞(ℳ_C)× C^∞(ℳ_C)→ C^∞(ℳ_C)
which is constructed as the sum of the two Poisson brackets of each Poisson submanifold:
{·,·} = {·,·}_G + {·,·}_M
Such a Poisson bracket must fulfill the Jacobi identity, the Leibniz rule, bilinearity and antisymmetry. These are fulfilled immediately, given that each PB fulfills them separately and that the two fulfill a certain compatibility condition. We will delve into this condition later, in the quantum case.
The objective of the geometrodynamical framework is to identify the flow that α_E induces over the space of 3-metric tensors and momenta (and matter fields and associated momenta) and thus reconstruct all physical information of the Universe along such a path. With the former geometric description of the phase space of fields and Poisson brackets, this can be done in a Hamiltonian framework, defining on the phase space of fields a curve parametrized by s, which plays the role of the time for the Hamiltonian system.
Notice that embracing the GD point of view means forgetting about spacetime, adopting a Hamiltonian view of the Universe as the evolution of a 3-geometry, and interpreting the evolution flow α_E as the deformations of the Cauchy hypersurface Σ_0. In the end, such evolution is translated to the space of fields (including the metric) which are defined on it. This means that, for the geometry, the integral curve of E, defining a certain foliation, is analogous to a curve over ℳ_G, such a curve being the s-parametrized history of the geometry of spacetime.
Now, one has to postulate that normal and tangential deformations of the hypersurface, being a physical symmetry, must be symplectomorphisms. This leads to the representation axiom of <cit.>, which is imposed together with the compatibility with the closing relations and the locality of normal deformations of the 3-metric to reproduce its relation with the extrinsic curvature. In essence, what one is assuming is the Hamiltonian nature of the effect over phase space of normal and tangential deformations (locally denoted by D_n(x), D_t_i(x)). Hence, these vector fields must be the Hamiltonian vector fields associated with certain generating functionals ℋ(Φ), ℋ_i(Φ), which act on generic functionals f(Φ) as:
D_n(x)f(Φ):= { f(Φ), ℋ(Φ(x))}
D_t_i(x)f(Φ):={ f(Φ), ℋ_i(Φ(x))}.
We inherit the nomenclature from <cit.> and refer to the local generating functions ℋ and ℋ_i as superhamiltonian and supermomenta.
On the other hand, the Lie bracket closing relations for the generators of the deformations (<ref>),(<ref>) and (<ref>), must be reproduced by the Poisson bracket commutation relations for the supermagnitudes, in order to be proper representations of Dirac's generators. Even without this representation requirement, it can be argued <cit.> that the supermagnitudes, in order to define a physical evolution related with E, must necessarily fulfill such closing relations. Consequently, the following relations are both a consequence of the kinematics of hypersurface deformations and a requirement for having physical evolution. Thus, they must be enforced prior to the choice of a particular dynamical theory or action:
{ℋ_x,ℋ_x^'}=ℋ^i_xδ_,i(x,x^')-ℋ^i_x^'δ_,i(x^',x)
{ℋ_ix,ℋ_x^'}=ℋ_xδ_,i(x,x^')
{ℋ_ix,ℋ_jx^'}=ℋ_ix^'δ_,j(x,x^')+ℋ_jx^'δ_,i(x,x^')
In conclusion, we can define a representation of the Dirac algebra defining the space-time foliation on the phase space of fields, if it is endowed with a Poisson structure. In the following sections we will analyze this construction for the different sets of fields of our theory: the set of classical fields (gravitational and matter), the set of quantum matter fields (assuming a fixed gravitational background), and the set of hybrid quantum-classical fields, when a classical gravitational field and a quantum matter field evolve coupled in the same model. In all cases, the existence of a Poisson structure on the corresponding phase space of fields, will allow us to define the generators of the Dirac algebra at the corresponding level. And therefore we will be able to consider the geometrodynamical formulation of the corresponding theories.
§.§ Identifying the supermagnitudes.
We are now equipped with the kinematical relations the generating functions must fulfill, which will allow us to elucidate their particular shape in the following fashion.
Given that a tangential deformation can be seen as only an initial data reshuffling <cit.>, we have access on leaf to the result of such deformation. Thus, to identify the supermomenta for any theory it is enough to claim for any variable F in such phase space:
ℒ_δ N^kF={ F,ℋ_kx} N^kx
where ℒ_δ N^kF represents the Lie derivative along δ N^k, encoding the infinitesimal change of F along the flow of a vector field tangential to Σ which characterizes a stretching of it, acting on elements of the field phase space, considered as functions (or distributions) on Σ. As argued in <cit.>, the left hand side of the equation can be computed through the use of (<ref>), and the relation above allows us to obtain the expression of the corresponding generating functional ℋ_kx unambiguously (with a degeneracy that disappears once the closing relations for supermomenta (<ref>) are enforced). The main result of this procedure must be that the supermomenta cannot contain powers (higher than 1) of the canonical momenta in order to represent a deformation without information extrinsic to the leaf, as must be the case for said stretching, given that it can be regarded as a spatial on-leaf diffeomorphism.
From such a consistent representation of the generators of tangential deformations, one can find the superhamiltonian by enforcing the remaining closing relations (<ref>) and (<ref>) for the PB. The result is unique under some additional physical considerations (such as degree 2 in the geometromomenta) <cit.>.
In pure GD (i.e. the matterless case), for metric-based gravity the supermomenta must be:
ℋ_iG(h_ij,π^ij)=-2D_jπ_i^j=-2D_j(h_iαπ^α j)=-2h_iαD_jπ^α j
where D_j=h_ijg^μ i∂_X^μ. From it, one derives the purely geometrodynamical superhamiltonian to be:
ℋ_G(h_ij,π^ij) = ½(2κ)√h G_ijkl π^ij π^kl - (2κ)^{-1} √h R(h)
where G_ijkl := (1/(2√h))(h_ik h_jl + h_il h_jk - h_ij h_kl) is DeWitt's metric.
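With the 1/(2√h) normalization adopted above for G_ijkl, the kinetic contraction can be expanded explicitly (a short check that may help in parsing the superhamiltonian):
G_ijkl π^ij π^kl = (1/(2√h))(2π_ij π^ij - π²) = (1/√h)(π_ij π^ij - ½π²) , π := h_ij π^ij ,
which is the familiar trace-corrected kinetic term of the ADM superhamiltonian.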
§.§ Matter closing relations.
We will finish this discussion with the main motivation for this review section: remarking how to include matter in the theory in an equivalence-principle-consistent way, and how to obtain the closing relations for the associated matter supermagnitudes. These relations were derived in <cit.> for classical matter but, given their kinematical nature, they should also hold in the quantum case, which we aim to build in the next section. In fact, such relations are a must for any matter theory, as long as it has a non-derivative coupling with the metric (thus excluding a priori non-minimal coupling terms in the Lagrangian). To do so, we will invoke the GD equivalent of the equivalence principle as in <cit.>, which implies that the total supermagnitudes must be the sum of the pure geometrodynamical magnitudes ℋ_AG and the matter supermagnitudes ℋ_AM:
ℋ(x)=ℋ_G(h,π_h;x)+ℋ_M(h,ϕ,π;x)
ℋ_i(x)=ℋ_iG(h,π_h;x)+ℋ_iM(h,ϕ,π;x)
The geometrodynamical equivalence principle also implies that matter supermagnitudes are ultralocal (non-derivative dependence on h and independence of π_h). On the other hand, the pure geometrodynamical generating functions, i.e., those corresponding to the gravitational fields, do not depend on the matter fields. Consequently, the purely gravitational supermagnitudes already fulfill the PB closing relations (<ref>), (<ref>) and (<ref>) for {,}_G, and, of course, are matter independent (null PB under {,}_M). Thus, given that the total supermagnitudes must fulfill such closing relations for the complete Poisson bracket {,}, and assuming that ℋ_M depends on the 3-metric h only locally, it can be seen that the matter superhamiltonian also fulfills (<ref>) for {,}_M, as the former conditions imply:
{ℋ_G(x),ℋ_M(x^')}_G+{ℋ_M(x),ℋ_G(x^')}_G=0
because of locality (on π and h for both derivatives) and antisymmetry.
Regarding the relations that involve supermomenta, on the one hand, as a result of eq. (<ref>) for classical theories, ℋ_iM does not depend on h,π_h at all, and thus it can be seen that it must fulfill also (<ref>) for {,}_M, because:
{ℋ_i(x),ℋ_jM(x^')}_G=0 .
On the other hand, to fulfill (<ref>) we need:
{ℋ_i^G(x),ℋ^M(x^')}_G + {ℋ^M_i(x),ℋ^M(x^')}_M = ℋ^M(x)δ_,i(x,x^') .
The crossed gravitational-matter term gives:
{ℋ_i^G(x),ℋ^M(x^')}_G = 2h_iα(x) D_x^j δℋ^M(x^')/δh_αj(x)
and thus the matter sector term should provide:
{ℋ_i^M(x),ℋ^M(x^')}_M = -2h_iα(x) D_x^j δℋ^M(x^')/δh_αj(x) + ℋ^M(x)∂_x^iδ(x,x^') .
With these closing relations for matter theories coupled in an ultralocal way to the metric we have assured that the total supermagnitudes given by the sum of the geometrical and matter ones fulfill the closing relations of Dirac's generators under the total Poisson bracket. On the other hand, we have assured that they generate the same transformation as the Lie derivative along the shift vector, for a tangential deformation, as in (<ref>). Both conditions together ensure that they are the appropriate generating functions of hypersurface transformations over the whole phase space.
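As a concrete illustration (the standard minimally coupled scalar field with potential V, quoted here for reference rather than derived), the matter supermagnitudes take the ultralocal form
ℋ^M(h,ϕ,π;x) = (1/(2√h))π² + (√h/2) h^ij ∂_iϕ ∂_jϕ + √h V(ϕ) , ℋ_i^M(ϕ,π;x) = π ∂_iϕ ,
so that ℋ^M depends on h_ij only algebraically (no derivatives of h, no π_h), while ℋ_i^M is metric independent, exactly as required by the closing relations above.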
§.§ Classical dynamics and constraints.
Once a foliation of spacetime is chosen, the relation between hypersurfaces is given by the flow α_E. In the 3+1 perspective, from a hypersurface one can access the infinitesimally neighbouring one through a normal deformation and a tangential deformation. Once such deformations are represented in a Hamiltonian way, such paths along hypersurfaces are mapped to a path over the phase space of fields, which is the solution to the Hamiltonian dynamics, with the role of time being played by the parameter s:
dF/ds = { F, H} + ∂_s(N^μx) δF/δN^μx + ∂_s F ∀ F∈ C^∞(ℱ_C×ℝ)
where F is any functional over phase space plus lapse and shift variables, with a possible explicit differentiable dependence on s∈ℝ.
The total Hamiltonian H reproduces E through the contraction of the lapse and shift functions N, N^i, with the corresponding generating functions ℋ(x), ℋ_i(x):
H(s)=N^x(s)ℋ_x+N^ix(s)ℋ_ix .
Note that lapse and shift are s-dependent functions (each hypersurface with label s relates through E to the neighbouring one, which has a different normal and tangential deformation defining the next one) and, as such, the formalism is analogous to having an explicitly time dependent Hamiltonian function.
Note also that, making use of eqs. (<ref>) and (<ref>) this Hamiltonian can be decomposed as the sum of a purely geometrodynamical and a matter field Hamiltonian H(s)=H_G(h,π_h,N(s),N^i(s))+H_M(h,ϕ,π,N(s),N^i(s)), such that:
H_G(h,π_h,N(s),N^i(s)):=N^x(s)ℋ_x^G+N^ix(s)ℋ_ix^G ,
H_M(h,ϕ,π,N(s),N^i(s)) := N^x(s)ℋ_x^M(h,ϕ,π) + N^ix(s)ℋ_ix^M(h,ϕ,π) .
We anticipate that this matter Hamiltonian is the function whose quantization will yield the Hamiltonian operator governing the dynamics at the quantum level.
In the end, the integral curves of the s-dependent Hamiltonian vector field associated to { · ,H(s)} reproduce over phase space the action of α_E(s) over the hypersurfaces. Once we have the solution to the Cauchy problem, the foliation can be undone, obtaining physical data over spacetime which must be the same independently of the foliation chosen. In other words, independently of the path followed from an initial hypersurface Σ_0 to a final one Σ_s_f, given by the lapse and shift functions ∀ s∈[0,s_f), the field content on the final hypersurface must be the same, departing from given initial Cauchy data.
This is introduced in <cit.> as the path-independence principle. Naturally, given that the path from an initial hypersurface to a final one is spawned by a continuous set of infinitesimal transformations, independence of the path implies, at the infinitesimal level, commutativity of the associated generators over phase space, for any normal and tangential transformation. In its Hamiltonian representation, for two arbitrary deformations characterized by N_A^x and Ñ_B^x^' this implies:
{ N_μ^xℋ_μx, Ñ_ν^x^'ℋ_νx^'} = N_μ^xÑ_ν^x^'{ℋ_μx,ℋ_νx^'} ≃ 0
∀ μ,ν = 0,⋯,3, ∀ N_μ^x,Ñ_ν^x^'∈ C^∞(Σ) .
where ≃ implies that this constraint holds only on shell, i.e. once the physical constraints are satisfied. Given that the functions contracting the local generators are arbitrary, the former condition implies that it must hold locally:
{ℋ_μ x, ℋ_ν x^'}≃ 0 ∀ μ,ν= 0,⋯,3
Given that the total Poisson brackets of the supermagnitudes fulfill the closing relations (<ref>), (<ref>), (<ref>), which are linearly proportional to the supermagnitudes, the vanishing of the closing relations on shell can only be ensured by the so-called Hamiltonian and momenta constraints, i.e., the vanishing of the total local generators:
ℋ(x)≃ 0 and ℋ_i(x^')≃ 0
These constraints will have their analogue at the hybrid level, for the generating functions for quantum matter and classical geometry, in order to preserve path independence.
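Note, although it is implicit in the construction, that these constraints are automatically preserved by the evolution: since the total Hamiltonian (<ref>) is a linear combination of the supermagnitudes and the closing relations are linear in them, one has
(d/ds)ℋ_μ(x) = {ℋ_μ(x), H(s)} ≃ 0 ,
so initial data satisfying ℋ≃0 and ℋ_i≃0 on Σ_0 satisfy them on every leaf; in Dirac's terminology, the constraints are first class.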
§ QUANTUM FIELD GEOMETRODYNAMICS.
Let us consider now that instead of a classical matter field we have a quantum one. In principle, it should be possible to adapt the construction above to include this new case, as long as we are able to find a Poisson structure on the space of quantum fields behaving in a similar way to the classical field Poisson bracket. We must transport the previous framework to the case of quantum matter sources, defining a Quantum Phase Space made of Quantum Wave Functionals of the quantized matter fields and endowed with a Quantum PB. Our guiding light will be the appropriate representation of the kinematics of hypersurfaces on the quantum functionals and, thus, the search for quantum supermagnitudes fulfilling the matter closing relations under the Quantum PB.
In this section, we will focus on the construction of the Hamiltonian formalism of quantum field theory for a given background spacetime. Nonetheless, the construction of this Hamiltonian picture is much more complicated than in a trivial foliation of flat spacetime, since the gravitational degrees of freedom have an active role in the construction of the quantization mapping. In turn, those gravitational degrees of freedom have a parametric time (leaf label) dependence, and therefore the ingredients defining quantum theory become different for each hypersurface of the foliation. Hence, when considering the action of the derivative with respect to the gravitational fields on the quantum degrees of freedom, the outcome may be non-vanishing. Thus, a careful analysis of the geometrical properties of the manifold of quantum fields, which, as we will see, will depend on the gravitational degrees of freedom, is necessary before considering the extension of geometrodynamics to the quantum realm. This approach is, to our knowledge, different from the usual construction of QFT in curved spacetime and constitutes the first original contribution of this paper.
§.§ Geometry of classical phase space and quantization.
As is commonly done in Quantum Mechanics, the set of quantum operators is obtained from the classical field theory by a quantization mapping Q of the observables over the field manifold, i.e., real functionals over the classical field phase space. On the other hand, the states are suitable representations of the dual of the C^∗ algebra of operators obtained through Q. Such representations of the states must be adapted to the representation of the operators given by the quantization mapping (considering elements of a certain L^2 over which the operators are self adjoint).
In our construction, we will require of a geometrical description of the quantum fields in terms of Poisson brackets and Hamiltonian dynamics, and therefore, it seems natural to use the geometrical formulation of Quantum Mechanics <cit.>, where these structures are canonical. As a result, our space of quantum states will be endowed with a Kähler structure which will allow for a Hamiltonian description of quantum dynamics. Furthermore, quantum operators will be described as real sesquilinear functions of the quantum states in the form F_A(Ψ)=⟨Ψ, A Ψ⟩, which form a suitable Poisson subalgebra with respect to the natural Poisson algebra associated with the symplectic tensor of the Kähler structure. In addition, the quantization procedure links the states, operators and geometrical structures of our quantum system to the ones of the classical matter field theory considered for quantization. Thus, we must prepare such classical theory endowing it with some specific geometrical structures.
Firstly, we know that the classical field theoretical phase space is already provided with a symplectic structure,
ω_C=-dϕ_x∧ dπ^x
regarded as the (weak) inverse of its Poisson tensor. Such symplectic structure, as a tensor, is built independently of the geometrodynamical variables, lapse and shift, which can be mathematically expressed as:
ℒ_{,f}_Gπ_F^*ω_C=0, where
π^*_Fω_C∈ (∧^2T^⋆(ℳ_F×ℳ_G×ℳ_N)) ,
where π_F represents the canonical projection on the cotangent bundle T^*ℳ_F and the Lie derivative is taken with respect to any Hamiltonian vector field represented by { · ,f}_G in the direction of gravitational fields and is assumed to vanish due to the lack of dependence on the gravitational degrees of freedom. Notice that, with this formulation, we have chosen to see ω_C as a two form over the whole manifold, even if it only acts on vectors with components on the matter side, just to define the independence as being constant along any curve that does not change the material data.
This is imposed by the very nature of Hamiltonian dynamics, as the fields ϕ_x, represented as functions, are fundamental kinematical variables, considered independent of any of the geometrical variables, lapse or shift, which we collectively denote by ξ=(h,π_h,N,N^i)∈ℳ_G×ℳ_N. Analogously, in the usual prescription of Hamiltonian dynamics, the cotangent space contains the momenta, considered as densities of weight 1, and thus variables independent of ϕ, h, π_h, N, N^i. Thus, having defined such canonical coordinates over the cotangent bundle, from the definition of the symplectic structure in eq. (<ref>), it is clearly ξ-independent. In the following, this crucial property will be referred to as leaf independence for any tensor or function fulfilling the same condition as the symplectic form does in eq. (<ref>).
On the other hand, keeping in mind the objective of geometrodynamics of defining a total Poisson structure for geometry and matter, it is relevant to notice that at least the (h,π_h)-independence of the symplectic structure is very convenient, as it ensures the compatibility of both Poisson brackets, and therefore the total Poisson bracket defined in eq. (<ref>) is also a Poisson bracket over C^∞(ℳ_G×ℳ_F).
Armed with this leaf independent ω_C, we may add a complex structure of our choice, J_C, to form a Kähler structure. Such complex structure has no meaning at the classical level, but prepares the classical field theory for quantization and defines the set of 1-particle states of the quantum field. In fact, all the possible choices of inequivalent quantization (choices of inequivalent vacua and measures) are reduced to the choice of such a classical complex structure <cit.> whether one works with geometric quantization <cit.> or with algebraic quantization <cit.>.
Besides, while ω_C was constructed leaf-independently, J_C may be chosen to be ξ-dependent. In fact, in <cit.>, physical arguments are made in the case of stationary spacetimes in favour of a choice of complex structure over the space of solutions that, when projected over Cauchy data on a given leaf <cit.>, yields precisely a complex structure over ℳ_C with such a dependence.
To include such a particular case and more general ones, in this paper we will consider a general quantization procedure dependent on a complex structure that, in turn, may depend smoothly on (h,π_h,N,N^i). This might be seen as a smooth ξ-parametric family of complex structures. Consequently, we will have a different quantization for each ξ∈ℳ_G×ℳ_N, making up a ξ-parametric family of smoothly-related but unitarily-inequivalent quantizations. In ordinary Quantum Mechanics, the Stone-von Neumann theorem provides us with unitary equivalence, allowing us to identify the representation of the same quantum state within different Hilbert spaces. This is not the case in QFT. Given the unavoidable need of relating different quantizations associated to different hypersurfaces along the foliation, we must introduce another equivalence relation, not based on unitary transformations. Based on our geometric approach, we will consider the language of connections and parallel transport.
Notice also that because of the dependence of the quantization process on the gravitational degrees of freedom (3-metric, its momentum and lapse and shift variables), it is natural to consider a fibration τ_ℱ:ℱ→ℬ, where the base manifold is defined as ℬ:=ℳ_G×ℳ_N, the total manifold is defined as ℱ and the fiber (𝒩)^' is a suitable functional space (Hida distributions, see <cit.>) containing all possible Hilbert spaces L^2(𝒩^',Dμ) with respect to different Gaussian measures Dμ. Within this framework, we will bind the representation of quantum states to suitable sections of the bundle ℱ, assigning an element of a (possibly different) Hilbert space to each point of the base. As a result, the quantum states will exhibit an explicit dependence on the gravitational degrees of freedom and the nature of their derivative w.r.t. the leaf variables ξ∈ℬ will be key for the quantum supermagnitudes to fulfill the closing relations of Dirac's group, eqs. (<ref>),(<ref>),(<ref>).
The key motivation to define the states in this way
[
In the particular case of geometric quantization this new section nature of the states (beyond the usual one) arises as early as in the prequantization. The last step to prepare the classical space for a geometric quantization procedure is given by the choice of a Lagrangian submanifold of the space of fields over which the quantum states, as wave functionals, will have its domain. This is done by means of a symplectic potential 1-form, θ_C, which defines a polarization. The choice of such 1-form is usually dependent on J_C, which in turn is ξ-dependent. Thus, we have a ξ-parametric family of spaces of states adapted to a ξ-parametric family of polarizations. For details, see <cit.>.
]
is precisely the Kähler structure with which we have provided the phase space of classical fields ℳ_F and which determines the quantization <cit.>. A Kähler structure is a triad of tensors given by a symplectic form ω_C, a complex structure J_C (both introduced above) and a Riemannian metric μ_C, fulfilling the following compatibility condition:
μ_C(·,·)=ω_C(J_C·,·) .
Note that, given that ω_C as defined in (<ref>) was leaf independent, and that J_C is chosen leaf dependent, μ_C inherits its leaf dependence. Together, they may provide the complexification of ℳ_F under J_C with a hermitian product h_C=(μ_C+iω_C)/2.
The relation with quantization is that such hermitian product is used to define the characteristic functional (analogously, the integral kernel) of a Gaussian measure Dμ_ξ(ϕ) over the space of fields (in other texts, the vacuum state) in the following way:
∫_𝒩^' Dμ_ξ(ϕ)e^(iρ̅_xϕ^x+iρ_xϕ̅^x)=e^-h_C^-1(ρ̅,ρ) ,
where we have chosen a complex domain for the measure (this construction is proper to a holomorphic picture, but it can be related to the real Schrödinger picture straightforwardly through a Segal–Bargmann transform <cit.>). This characterizes the Gaussian measure Dμ_ξ uniquely, which becomes the quantum heir of the leaf dependence of the classical complex structure J_C.
In turn, this measure will provide the scalar product of the usual Hilbert space characterizing the QFT constructed over the space of classical fields, L^2_Hol(𝒩^',Dμ_ξ), where the subindex stands for “holomorphic", following the example above. For any two elements Ψ_1,Ψ_2∈ L^2_Hol(𝒩^',Dμ_ξ), its scalar product is then defined as:
⟨Ψ_1,Ψ_2⟩_ξ=∫_𝒩^'Dμ_ξ(ϕ) Ψ̅_1(ϕ̅)Ψ_2(ϕ)
This construction is thoroughly detailed in <cit.>, where, to relate with the current discussion one must only add the ξ-dependence to J_C.
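As a familiar particular case (quoted only for illustration and not needed in what follows), for a free scalar field of mass m on a flat static slice, the standard choice of J_C leads to a Gaussian measure whose covariance operator is (2ω)^{-1}, ω := √(-Δ+m²), so that in the real (Schrödinger) picture the vacuum wave functional takes the well-known form
Ψ_0(ϕ) ∝ exp(-(1/2)∫_Σ d³x ϕ ω ϕ) ,
the Gaussian weight |Ψ_0|² being absorbed into Dμ, thereby recovering the usual flat-spacetime Fock vacuum.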
Notice that we have a different J_C, μ_C, h_C and, lastly, a different Gaussian measure Dμ_ξ for each ξ∈ℬ. Therefore, through this construction we arrive at a ξ-parametric family of unitarily-inequivalent Hilbert spaces. This result is general, beyond the example used in the construction above, and in the following it will be synthesized in the subindex of the quantum scalar product, ⟨,⟩_ξ, whether the domain is given by real (Schrödinger picture) or complex (holomorphic) fields.
Therefore, notice that not only the quantum sections
σ_Ψ∈Γ(ℱ,ℬ) will yield a different functional representing the quantum state at each point ξ∈ℬ, but at each point ξ∈ℬ, the probabilistic properties of the state
σ_Ψ(ξ)=Ψ∈(𝒩)^' must be considered with respect to a different Hilbert space structure, given by the ξ-dependence of the scalar product. Besides, the necessity of defining a Poisson bracket and therefore a differentiable calculus on the space of quantum states, forces us to consider a special structure known as Gel'fand triple <cit.>, in fact, a ξ-parametric family of them, given by:
(𝒩)⊂ L^2(𝒩',Dμ_ξ)∼ L^2(𝒩',Dμ_ξ)^'⊂ (𝒩)^' .
These structures define a (family of) isomorphism(s) between the space of Hida test functions (𝒩) and a suitable subspace of its dual (𝒩)^', which is an essential tool for managing differential tensors on the space of states (see <cit.> for details). Remarkably, the Hida functions form a nuclear-Fréchet space (𝒩), allowing us to perform Fréchet calculus in a convenient setting <cit.>. Being dense in all the L^2 spaces considered (and, ultimately, in the space of Hida distributions (𝒩)^' that includes them all), they justify the (dense) construction of differential geometry over these infinite dimensional manifolds. These mathematical intricacies will play a pivotal role in the construction of the main kinematical object: the quantum Poisson bracket as a bi-differential operator, or in relation to a quantum symplectic structure. Nonetheless, to focus on the physical aspects, such technical nuances (which are standard in Hida calculus) are relegated to the supplementary material for ease of reference.
In conclusion, the choice of a ξ-dependent complex structure renders the whole construction leaf dependent and, in the end, the quantum states themselves must become leaf dependent to be consistent with this leaf dependence of the Hilbert spaces. Analogously, the self-adjoint operators representing physical magnitudes, for which the notion of self adjointness is relative to the ξ-dependent scalar product ⟨,⟩_ξ, must also acquire a consistent leaf dependence. Inspired by this, we will postulate such a section nature of the states and leave their specific dependence on ξ∈ℬ, together with the choice of J_C, as ingredients to be fixed consistently for the quantization to be compatible with the Hamiltonian formulation of the symmetries of spacetime.
with a dependence on ξ that can be characterized through quantum sections that are parallel transported with a certain connection:
∂_ξ_iσ_Ψ(ξ):=-Γ̂_ξ_i(ξ)σ_Ψ(ξ)∀ξ_i∈[h_ij,π_h^ij,N,N^i]
where Γ̂_ξ_i(ξ) will be referred as “quantum connection” for each base variable ξ_i. This object, following the leaf dependence along the quantization procedure,
Inspired by this, independently of the quantization procedure (given the equivalence of complex structure and vacuum phase and measure between geometric<cit.> and algebraic quantization <cit.>), we may postulate the existence of such section nature of the states, and leave the connection Γ̂, as well as the choice of ξ-dependent complex structure, as ingredients to be fixed consistently for a compatible quantization with the Hamiltonian formulation of the symmetries of spacetime.
While the non-closed term of θ_C must be common to all possible polarizations, as it must yield the same ω_C, its closed terms are different and are usually chosen in reference to the complex structure J_C. This is because one is interested in choosing the real submanifold over the space of fields and momenta (in order to provide the Schrödinger functional picture) or the holomorphic submanifold (in the case of holomorphic quantization). Of course, such notions of real and holomorphic refer to the complex structure; thus, given a complex structure J_C and the choice of polarization relative to it, θ_C is determined. Nevertheless, states under different polarizations are related by means of a Segal-Bargmann transform and thus do not belong to inequivalent quantizations if constructed for the same given J_C, although they present different dependences on h,N,N^i, as the choice of submanifold depends differently on such J_C.
§.§ Functional representation of quantum states and leaf dependent quantization. Quantum fibrations
§.§.§ The fibration of quantum states
In the usual wave-functional picture of Quantum Field Theory, the states Ψ are represented by elements of a functional space endowed with a Hermitian product ⟨,⟩, associated to a Gaussian measure over the fields Dμ, turning it into a Hilbert space ℋ=L^2(𝒩^',Dμ). In order to properly define the geometric tools required, such as the integral measure, we will consider the domain 𝒩^' as a space of distributions dual to some nuclear Fréchet space (for example, differentiable functions of compact support, in the case of Σ compact) which represents the classical matter fields ϕ (see <cit.> for an exhaustive elucidation of these intricacies). For such a Hilbert space, we may represent the dual element to Ψ with Ψ̅ under the effect of the measure Dμ, by means of the isomorphism of the Hilbert space and its dual ℋ∼ℋ^' associated with the Riesz representation theorem. Such a Gaussian measure is defined in terms of the Kähler structure of the classical field theory, as introduced above. Different choices of complex structure will ultimately lead to unitarily inequivalent Hilbert spaces, although the Hida functions (𝒩) will be a dense common subset to all of them, and they will all be subsets of the Hida distributions (𝒩)^'. This inequivalence is also present in the Fock space representation which can be defined from the Hilbert space fixing the measure and a vacuum phase (see <cit.>).
From the preceding section we must remember that, instead of considering a single one of such structures, our construction requires a ξ-parametric family of inequivalent Hilbert spaces, as the complex structure J_C(ξ) can be ξ-dependent, as in <cit.>, completing the aforementioned Kähler structure differently for each ξ. Thus, the representation of the quantum states (considered from the point of view of probability theory, and thus dependent on the Hilbert space structure) must become dependent on the classical (gravitational) degrees of freedom. This classical dependence of the quantum states constitutes the most striking difference with respect to
other hybrid physical models where quantum and classical
degrees of freedom interact (as in <cit.>). In those models, one could define the classical and quantum submanifolds separately
and construct the hybrid manifold as the Cartesian product.
We are going to borrow from those models the main
ideas to construct a hybrid dynamics as a Hamiltonian
system, but, now, we must define the manifold of quantum
states already in an entangled way with the classical
(gravitational) variables. Therefore, the quantum states already contain hybrid information, and in the following construction, all quantum structures defined (Poisson bracket, scalar product, relation of operators with physical observables) reproduce the usual construction of Hamiltonian quantum mechanics only fiber-wise for a fibration τ_ℱ:ℱ→ℬ which is locally trivialized into subsets of
ℬ×(𝒩)^' with base ℬ:=ℳ_G×ℳ_N .
Note that the fiber is adapted to the usual functional description of quantum states (in particular, we are choosing the space of Hida distributions <cit.>), and thus, for a fixed element in the base ξ:=(h,π,N,N^i)∈ℬ, we would have the ordinary formulation of geometric quantum mechanics <cit.>, particularized to QFT in <cit.>.
In other words, we have all quantum structures defined consistently, but also differently at each point ξ∈ℬ, and we need a way to relate them. To do so, in the following we will reproduce <cit.>, but with a focus on the physical features, relegating most mathematical details, based on a geometric formulation of QFT, to the notes in the supplementary material.
Notice also that (𝒩)' is too big to represent efficiently the physical quantum states; the structures that we present here, such as the Hermitian product, are defined on a dense subset D(⟨,⟩_ξ)⊂ (𝒩)', different for each point of the base and representing the Hilbert space of physical quantum states.
Taking this into account, the representation of the quantum state at a certain geometrical point (ξ_0, Ψ_0) is related to other points in ℱ obtained through a change in ξ. Imagine a curve γ^M(s) in ℬ, and let us consider how a given quantum state transforms along γ^M. In order to do that, we must introduce a mechanism to lift the curve on the base to the fibration ℱ, in such a way that it projects on γ^M by the projection τ_ℱ. Hence, a connection on ℱ is required, so that the notion of horizontal direction is defined for the points of the bundle. Associated with this connection, the notion of horizontal lift is well defined and a (locally) unique curve γ^ℱ on ℱ containing the state Ψ_0 exists. Such a curve satisfies
γ^M(s=0)=ξ_0; γ^ℱ(s=0)=Ψ_0; τ_ℱ (γ^ℱ(s) )=γ^M(s).
In a local description, the connection is associated with a one-form Γ∈Λ^1(ℬ)×Lin(𝒩), which allows us to define the covariant derivative of a section σ:ℬ→ℱ, using the parallel transport defined through the horizontal lift of base curves:
∇_X σ(ξ)=dσ(X)+Γ̂(ξ, X)σ(ξ), X∈𝔛(ℬ)
If we consider those sections which are covariant with respect to the connection
∇_X σ(ξ)=0,
we are defining a trajectory describing the change suffered by the quantum state with initial state σ(ξ_0)=Ψ_0 due solely to the change of the quantum structures under a change of ξ∈ℬ, when no quantum evolution is exerted on it. Thus, this defines the evolution of the ξ-dependent representation of a quantum state due to the evolving geometry, but under a vanishing matter Hamiltonian (i.e., in the geometrodynamical framework, matter supermagnitudes which are identically null). [Had we done the same for the classical case on the classical bundle ℱ_C, the classical geometric evolution would define lines where only the gravitational degrees of freedom change. This would correspond to a trivial connection.]
If we consider a non-trivial Hamiltonian operator Ĥ (or, in particular, the Hamiltonian representation of a certain deformation of the hypersurface Σ), the total evolution of the quantum state will be the sum of the image under a covariant section of the geometric transformation (thus, a horizontal change, with respect to the connection) and the effect of Schrödinger functional equation, which is vertical on the fiber of ℱ.
Following this discussion, in order to lift the infinitesimal changes of the geometry (tangent vectors to curves on the base) to the changes induced on the representation of the quantum state (which lives on the fiber) by the leaf-dependent quantization, it becomes relevant to define the subset ℳ_s of sections on the bundle ℱ which are covariant for all vector fields on the base of the fibration, X∈𝔛(ℬ):
ℳ_s:={σ∈Γ(ℱ)|∇_Xσ=0 ∀ X∈𝔛(ℬ) }
In turn, in terms of these objects (which are illustrated in Figure <ref>) we may consider the horizontal distribution on Tℱ representing the constraints of the transformations of quantum states to be compatible with geometry transformations, given the leaf dependence of the Hilbert space to which they belong. It is defined by the horizontal vector fields on ℱ, or, equivalently, by the directions which are tangent to the former covariant sections taking values at a given point of the fibration f∈ℱ:
Hor^∇(f)={ Tσ_τ(f) (𝔛(ℬ)) | σ∈ℳ_s, σ(τ(f))=f },
where Tσ_τ(f)(𝔛(ℬ)) represents the image of the differential of the mapping σ:ℬ→ℱ evaluated at the point on the base given by the vertical projection τ(f)∈ℬ acting on all vector fields on the base manifold. Obviously, the tangent space on a point of the fibration can be decomposed as:
T_fℱ=V_fℱ⊕Hor^∇(f); ∀ f∈ℱ,
where V_fℱ represents the set of vertical vectors at f.
When the mathematical dust is settled it becomes clear that the definition of Hor^∇(f) provides us with a way of defining tangent vectors to curves on ℱ that are only sensitive to trajectories on the geometrodynamical data, i.e. they are horizontal. This definition encodes the notion of
an infinitesimal change of representation of the quantum state without proper quantum evolution, which would be a vector on the vertical subspace.
In fact, it is clear that tangent elements to physical states should be represented by a fiber bundle such that V_fℱ represents infinitesimal generators of inner transformations of a given Hilbert space associated to the measure Dμ_τ(f), while Hor^∇(f) represents generators of isometries between possibly unitarily inequivalent, infinitesimally close Hilbert spaces. In this sense, under the complete evolution, said representation of quantum states will transform to be outside the original Hilbert space, and inside a new Hilbert space. This justifies the chosen construction of the fibration, where the fiber to which the quantum states belong must be a space that contains all the L^2(𝒩^',Dμ_ξ) through which the quantum state will evolve following the curve over ℬ. Such a space is chosen to be the Hida distributions (𝒩)^', although, locally (for a given ξ and thus a given Hilbert space), it is too large to accommodate solely the physical quantum states and one should restrict (locally) the construction to a suitable subspace of it (see Figure <ref> for a graphical summary of this construction).
It is important to remark that the construction is, of course, not unique. Every choice of a connection for ℱ defines a different notion of covariant derivative. Choosing a different connection implies considering different quantum states as equivalent at different geometric points (i.e., different evolutions of the quantum states because of the changing geometry). Nonetheless, as the construction is geometrical, the same connection would define a horizontal lift to the bundle of complex structures, scalar products, linear operators or any other tensor chosen to define a fiber on ℬ. A natural consistency condition is to choose the same connection for all the tensors. When doing that, it makes sense to consider normalized quantum states for all geometrical points, since the scalar product and the states can be chosen to transform covariantly with respect to the same connection (see below).
Nonetheless, remember that geometrodynamical transformations also affect, in a precise way, the classical matter fields. Thus, for the whole procedure to be consistent, we must find a connection on ℱ such that the ξ-dependent quantization procedure that we build lifts the geometrodynamical classical generators on the space of classical fields to a consistent set of geometrodynamical quantum generators. As we will see below, such a construction is possible if the quantization mapping and the covariant scalar product and linear operators are related in a particular way.
The only extra ingredient in relation to <cit.> is thus that, given the smoothly leaf-dependent quantization inherited by all quantum objects from J_C(ξ), the quantum states become related to the quantum sections over the fibration τ_ℱ.
We know already that geometrodynamics
consists in adapting the vector fields generating the
space-time foliation to the different field-manifolds. In
the particular case of the quantum pure states manifold, quantum
states and operators must be considered as the evaluation of sections
of the fibration over the set of gravitational degrees of
freedom. Thus, the geometrodynamical generators on the
quantum-state manifold must correspond to horizontally lifted vector
fields from the geometrodynamical generators on the space
of gravitational fields with respect to the fibration since they must implement the transformation which represents the change of leaf of the space-time foliation at the level of the quantum fields, which depend on the geometric degrees of freedom.
Having defined the connection, the quantum states are defined as the images of the parallel-transported sections with respect to such connection:
ℳ_Q:={σ_Ψ (ξ) | ∀ξ∈ℬ, ∀σ_Ψ∈ℳ_s}
From Equation (<ref>), we can safely consider that ℳ_Q is identical to the fiber of ℱ; however, presented in this form, as the image space of the covariant sections with respect to the connection, the meaning of the horizontal directions in T_Ψℱ is clearer.
Let us observe that, with an element of the space of covariant sections ℳ_s and an element of the base ℬ characterizing geometry, we can obtain the instantaneous quantum state just by evaluating the former on the latter. This must not be confused with the history of the quantum state, i.e. the solution curve of the dynamics, as part of the evolution is vertical and thus such a curve cannot be covariant. Nevertheless, following the former isomorphism between ℳ_Q and ℬ×ℳ_s, the history could also be represented as a curve over ℳ_s (the vertical part of the evolution will change from one covariant section to another) together with a curve over ℬ (the horizontal part will change the evaluation point for the covariant sections), yielding together the evolution of the quantum state. These properties will be relevant by the end of the following section, and are illustrated in Figure <ref>.
We have generically denoted ∂_ξσ_Ψ(ξ)=-Γ̂(ξ)σ_Ψ(ξ); what we mean by this is that we have a local connection for each local variable in the base:
Once we define a distribution for each one of such variables, defining a global direction in the base given by v_B^x:=(ḣ_ij;x,π̇^ij;x,Ṅ^x,Ṅ^ix), considered with respect to the basis ξ^i; x=(h_ij; x, π^ij;x, N^x, N^ix), we can see how the quantum section is parallelly transported as:
v^i;x_B ∂_ξ^i;xσ_Ψ(ξ)=
-(ḣ_ij;xΓ̂_h_ij;x+π̇^ij; xΓ̂_π^ij;x+Ṅ^xΓ̂_N^x+Ṅ^ixΓ̂_N^ix)σ_Ψ
where the ξ-dependence is not shown for compactness of the notation, and the contraction of i,j indices implies summation over the elements of the basis, while the contraction of the continuous x-indices implies integration over Σ. With this in mind, we will frequently work with the notation ∂_ξσ_Ψ(ξ)=-Γ̂(ξ)σ_Ψ(ξ) to denote the connection associated to any local variable of the base.
Remember, though, that even if the covariant sections are just a tool to define the horizontal distribution Hor^∇ on the bundle ℱ, they represent the transformations of the quantum states under the effect of pure geometrodynamical (and lapse and shift) transformations. Hence, they represent the kinematics of the quantum field theory in the geometrical background where it is defined.
Note that, by definition, each covariant section as defined above, once evaluated on the point of the base characterizing the geometrodynamical data ξ_0∈ℬ on the current Σ_s, yields a certain element of ℳ_Q, isomorphic to an element of the fiber (𝒩)^', which is a valid quantum state.
Note that, at the level of QFT in a given curved spacetime, once the foliation is chosen, the evaluation point ξ_0 is a given time-dependent parameter. In the following section we will promote such evaluation point to be part of the kinematics, extending the phase space to contain the quantum sections and the base point over which they must be evaluated, as the kinematics of hybrid geometrodynamics considers both elements on the same footing. Summarizing: the necessity of considering the leaf dependence of the quantum theory (complex structure, scalar product, states, operators, etc.) to implement geometrodynamical transformations forces us to introduce a fibration of all those quantum ingredients over the manifold of geometric variables, and quantum objects become the images of sections of that fibration. These section objects are of kinematical nature (required to be consistent with the leaf-dependent quantization) and contain more information than just the physical one. True physical objects correspond to the evaluation of those sections on the particular geometric point (gravitational field, momenta, lapse, shift, ...), but the whole section is necessary to encode appropriately the influence of the geometry on them.
Thus, to pose the Cauchy problem we will only need to choose any suitable Hida distribution determining Ψ (plus the geometrodynamical data and in compatibility with the physical constraints introduced later), knowing that the covariant sections of ℱ allow us to consider, with the former definition of ℳ_Q,
∂_ξ_0Ψ=(∂_ξσ_Ψ(ξ))|_ξ_0=-Γ̂(ξ_0)Ψ
This corresponds to the directions at T_Ψℱ which are horizontal with respect to the connection. Obtaining the integral curve from Ψ with that tangent vector, we will define a series of quantum states which are equivalent from the kinematical point of view, since they correspond to changes in the geometry only.
The objective of the following construction is to end up determining such connection Γ̂ in order to have consistent hybrid geometrodynamics, providing compatibility in a certain sense for the ξ-parametric family of quantizations (and resulting quantum structures).
§.§.§ Quantum observables
Regarding observables, through quantization they are mapped to self-adjoint operators over the chosen Hilbert space. Given that such quantization procedure Q is J_C-dependent, the operators must inherit its leaf dependence. Therefore, one can define a bundle with base manifold ℬ and fiber the space of linear operators on the Hilbert space which defines the fiber of ℱ. For such a bundle, we can again consider the sections which are covariant with respect to connections defined on it, providing the corresponding horizontal directions. Of course, among the connections, we may consider again the connection ∇ on the bundle ℱ of the space of states. As we are going to see, the choice of the same connection for both bundles will make problems simpler.
Furthermore, one must realize that the notion of adjointness refers to the leaf-dependent scalar product ⟨,⟩_ξ. More explicitly, let us consider that the adjoint to an operator  is given by Â^†_ξ for that particular scalar product, so that:
⟨Ψ_1|ÂΨ_2⟩_ξ=⟨Â^†_ξΨ_1|Ψ_2⟩_ξ
then, for a different scalar product ⟨,⟩_ξ_2 provided by another element of the ξ-parametric family of Gaussian measures, we find that Â^†_ξ no longer defines the adjoint operator:
⟨Ψ_1|ÂΨ_2⟩_ξ_2≠⟨Â^†_ξΨ_1|Ψ_2⟩_ξ_2
Therefore, the set of self-adjoint operators, i.e. those that fulfill Â=Â^†, must acquire a different representation for each Hilbert space in the ξ-parametric family.
In the functional representation, this can easily be appreciated by considering that, in particular, the functional derivatives representing the quantization of the field momenta are chosen to be self-adjoint with respect to a leaf-dependent measure Dμ_ξ
[As an example one can consider geometric quantization to illustrate this dependence. Given that the symplectic potential θ is chosen relative to J_C and thus inherits its leaf dependence, the quantization of any element of the classical field Poisson algebra f∈ C^∞(ℳ_F×ℳ_G×ℳ_N), given by the prequantization-type prescription Q(f)=-iħ X_f-θ(ξ)(X_f)+f, with X_f={·,f}_c (reconstructed here in the standard Kostant-Souriau convention), acquires extra dependences on ξ, in addition to the dependence that f may have already presented on ξ. Note that this dependence is possible even if we only quantize the matter fields (for example, the Hamiltonian in the classical field theory depended on lapse, shift and 3-metric h) and, in fact, is the only one that may appear if J_C was chosen ξ-independently, as happens in the more trivial case considered in <cit.>.
] (and w.r.t. the vacuum phase, which adds an imaginary multiplicative term, which is also leaf dependent <cit.>).
The necessity for this sectional nature of the quantum states can be further justified through the prism of C^⋆-algebras. For the observables under consideration, we may think of the leaf dependence of the former structures as a ξ-parametric family of representations of such a C^⋆-algebra: both operators and the scalar product (from which the statistical notion of expectation value is taken) depend on ξ and, as a natural consequence, if we identify a certain abstract quantum state with the set of probabilities assigned to possible results of physical measurements of observables, such an object acquires a ξ-parametric family of functional representations σ_ψ:ℬ→ (𝒩)^'.
This leads us to the definition of the algebra of observables 𝒜_Q within this geometrical formulation. As in <cit.> (and in its hybrid case, <cit.>), it is given by the set of functions f:ℳ_Q→ℝ defined as expectation values of linear operators over ℳ_Q which are Hermitian for the scalar product under consideration ⟨,⟩_ξ. We denote such space of operators as H_⟨,⟩(ℳ_Q), and thus we define:
𝒜_Q:={ f_A(Ψ;ξ):=⟨Ψ|Â(ξ)|Ψ⟩_ξ | ∀Â∈ H_⟨,⟩_ξ(ℳ_Q)}
Notice that the elements of such algebra acquire, due to the quantization procedure, a threefold ξ-dependence. The first one arises from the sectional nature of the states, so the simplified notation in (<ref>) means Ψ=σ_Ψ(ξ). The second one comes from the scalar product itself, explicitly denoted by ⟨,⟩_ξ, associated to the leaf dependent Gaussian measure Dμ_ξ (and ultimately to J_C(ξ), or to the leaf dependent vacuum section). Lastly, the operators are also ξ-dependent (as the notion of self-adjointness is referred to ⟨,⟩_ξ), so we must consider Â=Q_ξ(A) where Q_ξ(A) is a certain ξ-dependent quantization (see <cit.>,<cit.>) of the classical field function A∈𝒜_C adapted to the vacuum phase and measure, to make it appropriately self-adjoint. As the whole construction is tensorial, all three dependencies must be consistent with each other, as we will discuss below.
There is a fourth leaf dependence of the elements of the algebra, but this one is natural of the classical field magnitudes before quantization (denoted by A∈𝒜_C above), as they are assumed to have already been constructed in conjunction with geometrodynamics. For example, the classical field theoretical Hamiltonian in (<ref>) was lapse, shift and 3-metric dependent. Once quantized, it maintains such dependence and acquires the three former ones.
This algebra can be endowed with a Quantum Poisson Bracket (QPB) {,}_Q that reproduces the Lie bracket of operators <cit.>, and thus, for any two elements f_A,f_B∈𝒜_Q, defined as f_A:=⟨Ψ|Â|Ψ⟩ and f_B:=⟨Ψ|B̂|Ψ⟩, their QPB is given by:
{ f_A,f_B}_Q=-iħ^-1⟨Ψ| [Â,B̂]|Ψ⟩=-ħ^-1f_i[Â,B̂] .
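As an elementary consistency check: taking Â=𝕀, the norm function f_𝕀=⟨Ψ|Ψ⟩ satisfies
{ f_𝕀,f_B}_Q=-ħ^-1f_i[𝕀,B̂]=0 ∀ f_B∈𝒜_Q ,
so the norm is a Casimir of the QPB and is preserved by any Hamiltonian quantum evolution, a fact echoed below when discussing the preservation of the norm.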
For future use, we define the algebra resulting under the completion of 𝒜_Q under the ordinary product of functions:
𝒜̅_Q={∏_i=1^n f_A_i| f_A_i∈𝒜_Q ∀ i∈[1,n];∀ n∈ℕ}
Given this definition for the PB, the Jacobi identity and antisymmetry are inherited from the commutator structure of linear operators. Bilinearity is inherited because the Hermitian product is bilinear and so are the operators.
Over such algebra, the Leibniz rule can be imposed for the ordinary product of functions, defined for the product of two elements of 𝒜_Q as:
{ f_A,f_Bf_C}_Q={ f_A,f_B}_Q f_C+f_B{ f_A,f_C}_Q
and generalized to higher orders trivially.
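Explicitly, the n-fold version reads:
{ f_A,∏_i=1^n f_B_i}_Q=∑_j=1^n(∏_i≠ j f_B_i){ f_A,f_B_j}_Q ,
which is the form needed for the completion 𝒜̅_Q defined in (<ref>).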
It can also be seen from the perspective of a geometric formulation (see appendix, based on a generalization of <cit.>) that this property is in fact inherited from the definition of the PB in terms of the symplectic form ω_Q over ℳ_Q, { fg,h}_Q:=ω_Q(X_fg,X_h)=ι_X_hd(fg)=f{ g,h}_Q+{ f,h}_Q g, where one only needs to consider the distributive property of the exterior derivative. From this relation with a symplectic form, this property can also be seen considering that the Poisson bracket will be the usual bilinear derivative operator, written in some second-quantized Darboux coordinates (see <cit.>) as:
{ f,g }_Q=∫_𝒩Dβ(ϕ)∂_[Φ(ϕ)f∂_Π(ϕ)]g
with an auxiliary measure Dβ to contract the derivatives.
Note that to define such derivatives one needs to perform (Fréchet) differential geometry <cit.>; one would like to restrict the states to the common subspace of all the L^2(𝒩^'), given by the Hida functions (𝒩), which is why the rigged Hilbert space (<ref>) is relevant: it densely extends the notion of derivative from the nuclear space to its dual.
Even though the elements of 𝒜_Q are leaf dependent, we claim that the Poisson bracket itself must be leaf independent, in the sense that:
∂_ξ{ f,g}_Q={∂_ξ f,g}_Q+{ f,∂_ξ g}_Q ∀ξ∈[h,π,N,N^i]
In the following section, we argue that the main reason for geometric independence of the QPB is to make it compatible with the geometrodynamical Poisson Bracket, {,}_G, so the sum of both is an appropriate Poisson bracket (fulfilling Jacobi identity) over the functions on the hybrid manifold that form the Poisson algebra of hybrid observables, allowing us to find hybrid generating functions representing the generators of hypersurface deformations. Nevertheless, two further justifications can already be given for the leaf independence of the quantum Poisson bracket.
The first one regards the (h,π_h) independence. Given our Hamiltonian formalism, we must claim that hypersurface deformations must be symplectomorphisms (for a symplectic structure, weak inverse of the Poisson tensor under consideration; for further inquiry check the supplementary material). Nevertheless, these deformations also modify the values of the geometric variables, and so, if the Poisson bracket (weak inverse of a quantum symplectic structure) depended on them, we would not be able to construct a symplectomorphic representation.
The second one concerns the dependence on the lapse and shift functions. One would argue that the kinematical structure should be leaf independent in order to implement path independence of the theory, reproducing the algebra of generators of symmetries as functions over Cauchy data on each leaf, without additional dependencies on the particular choice of foliation, given by E. Otherwise, the closing relations of local generators of hypersurface deformations would become lapse- and shift-dependent, and thus such functions could not be considered Lagrange multipliers for the Hamiltonian and momenta constraints.
In fact, even if regarded as a design choice, in this construction we aim to build the Hamiltonian kinematical relations before the choice of dynamics, and the lapse and shift functions are of dynamical nature, as they characterize the evolution vector field E. Therefore, the kinematics must be leaf independent, as expressed in (<ref>).
Condition (<ref>) implies a particular relation between the connection chosen for the states and the ξ-dependence of the operators and the scalar product. For simplicity, in this proof we will use the bra-ket notation. As the construction is tensorial, we can consider the covariant derivatives of different types of sections (of different bundles):
* For the quantum states, given by (1,0) tensors (associated with the evaluation on a point ξ∈ℬ of sections in ℳ_s), their covariant definition implies
∂_ξ|Ψ⟩=-Γ̂_h|Ψ⟩ .
where the subindex h states that this connection, which depicts the covariant transformation of the vectors, is relative to the ξ-dependent Hermitian product. Here we are assuming that the connection is adapted in such a way that it does not mix the bra-ket representation.
* For their duals ⟨Ψ| in the Hilbert space, we may derive its dependence. Now, horizontal directions correspond to tangents to sections of the bundle ℱ^*, the dual bundle of ℱ. We have introduced around (<ref>) that the notion of adjoint is leaf dependent as it is relative to the leaf dependent scalar product. We define the adjunction C_ξ for a given ξ∈ℬ as C_ξ(|Ψ⟩_ξ):=⟨Ψ|_ξ, and analogously, C_ξ(Â|Ψ⟩)=⟨Ψ|Â^†_ξ. Notice that C_ξ mixes maximally the bra-ket representation and thus we must proceed with care, as it is a bundle isomorphism C:ℱ→ℱ^*. With these considerations:
∂_ξ⟨Ψ|_ξ=∂_ξ C_ξ(|Ψ⟩_ξ)=
C_ξ(∂_ξ|Ψ⟩_ξ)+⟨Ψ|_ξ (C_ξ^-1∂_ξ C_ξ)
where we have defined T̂(ξ):=(C_ξ^-1∂_ξ C_ξ) as the object of adjointness transport, and we make use of eq. (<ref>) to realize that
∂_ξ⟨Ψ|_ξ=-⟨Ψ|Γ̂_h^†_ξ+⟨Ψ|_ξT̂=-⟨Ψ|Γ̂_h^T
where we have defined Γ̂_h^T:=Γ̂_h^†_ξ-T̂.
* Finally, linear operators, denoted by Â, are (1,1) tensors whose construction from classical magnitudes via the quantization mapping also depends on ξ. Therefore, we can write the derivative of the expectation values of operators as:
∂_ξ f_A=⟨Ψ|Â^'|Ψ⟩
where Â^':=(-ÂΓ̂_h-Γ̂_h^TÂ+∂_ξ(Â)).
We will now show that these connections must be almost anti-self-adjoint in order to fulfill (<ref>). Thus, with an analogous definition as above for the primed operators, the left-hand side of (<ref>) yields:
-ħ^-1∂_ξ f_i[Â,B̂]=-ħ^-1⟨Ψ| (i[Â,B̂])^'|Ψ⟩
nevertheless, the right hand side yields:
{∂_ξ f_A,f_B}_Q+{ f_A,∂_ξ f_B}_Q=
-ħ^-1⟨Ψ| i([Â^',B̂]+[Â,B̂^'])|Ψ⟩
and one can easily check that
[Â^',B̂]+[Â,B̂^']-([Â,B̂])^'=0⇔Γ̂^T_h=-Γ̂_h
that is, this connection for the bra-ket notation of the sections representing the quantum states must be anti-self-adjoint (with the term T̂ correcting for the leaf-dependent notion of adjointness) in order to have a leaf-independent Quantum Poisson bracket. In <cit.> this result is obtained from the construction of the quantum symplectic form in a way compatible with a time-dependent quantization.
Note that this constraint for the connection precisely implies the covariant derivative of the operator  in eq. (<ref>), as, under Γ̂^†_h-T=-Γ̂_h, we obtain Â^':=∂_ξ(Â)+[Γ,Â]=∇_ξÂ. This is natural, from the geometrical point of view, since we are considering the covariant derivative of a (1,1)–tensor for the fiber of ℱ. Our conclusion is that the condition holds if all the leaf dependencies inherited from the leaf dependent quantization of the quantum objects correspond to the parallel transport of the same connection ∇.
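For completeness, the sufficiency of this condition can be checked in one line: with Γ̂^T_h=-Γ̂_h, the primed derivative becomes the commutator derivation Â^'=∂_ξÂ+[Γ̂,Â], and the Jacobi identity for the commutator gives
([Â,B̂])^'=∂_ξ[Â,B̂]+[Γ̂,[Â,B̂]]=[Â^',B̂]+[Â,B̂^'] ,
which is precisely the Leibniz-type condition required by (<ref>).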
Let us now exemplify this in the functional notation, where the connection Γ̂ appears as a linear operator over the quantum states, defined as in (<ref>). In order to do so, we resort to our former example of functions of complex domain, but the construction is analogous in the real case. We proceed to check what this condition implies in the functional notation for the states and the scalar product:
∂_ξ f_A=∫_𝒩^'Dμ_ξ(ϕ)(Ψ̅Â(-Γ̂Ψ)+Ψ̅∂_ξ(Â)Ψ+\overline{(-Γ̂Ψ)}ÂΨ+F(ϕ,ϕ̅)Ψ̅ÂΨ)
where the last term F:=∂_ξ(Dμ_ξ)/Dμ_ξ accounts for the derivative w.r.t. ξ of the leaf-dependent Gaussian measure (strictly defined in terms of a Radon-Nikodym derivative <cit.>). To illustrate the shape of this term, in the particular case of holomorphic quantization of a scalar field considered in <cit.>, for a Gaussian measure with field covariance Δ^xy(ξ) and inverse of the covariance K_xy(ξ), one can consider F(ϕ,ϕ̅)=-ϕ^x(∂_ξ K_xy)ϕ̅^y-Tr(∂_ξΔ^xy K_yz)
[Although this expression must be taken with care, always together with the measure acting over the functions, given the distributional nature of the derivative of the metric, and regarding its definition through a projective limit through finite-dimensional spaces <cit.>.
].
On the one hand, through a total derivative, we may find the operator Γ̂^† that fulfills
∫_𝒩Dμ_ξ(ϕ)\overline{(Γ̂Ψ_1)}Ψ_2=∫_𝒩Dμ_ξ(ϕ)Ψ̅_1(Γ̂^†Ψ_2) ,
which defines the adjointness for such scalar product when considered ∀Ψ_1,Ψ_2∈ L^2(𝒩^',Dμ_ξ).
On the other hand, taking into account the multiplicative nature of F and identifying T:=F(ϕ,ϕ̅), we can make use of the former definition Γ̂^T:=Γ̂^†-T
to write:
∂_ξ f_A=∫_𝒩^'Dμ_ξ(ϕ)(-Ψ̅ÂΓ̂Ψ+Ψ̅∂_ξ(Â)Ψ-Ψ̅Γ̂^TÂΨ) .
Following the same procedure in this notation as in eqs. (<ref>),(<ref>) for the bracket notation, we arrive to the conclusion that
Γ̂^T=-Γ̂ .
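Unpacking the definition Γ̂^T:=Γ̂^†-T with T=F(ϕ,ϕ̅), this condition can be equivalently stated as a balance between the self-adjoint part of the connection and the logarithmic derivative of the measure:
Γ̂+Γ̂^†=F(ϕ,ϕ̅) ,
so that the failure of Γ̂ to be anti-self-adjoint in the naive sense is exactly compensated by the ξ-dependence of Dμ_ξ.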
Expanding on the previous example, for the holomorphic case (for the given F above) it can be shown that this equation is fulfilled (although it is not the only solution) when the connection is given by
Γ̂Ψ(ϕ):=(1/2)ϕ^x(∂_ξ K_xy)Δ^yz∂_ϕ^zΨ(ϕ)
which is the connection for time-dependent quantization reached in <cit.> following a different argument, not based (solely) on the physical requirement of leaf independence of the kinematics. Beyond the choice of the holomorphic picture as an example, the procedure is analogous in any other picture, and its relation with the connection for the Schrödinger case (real field polarization) is straightforward taking into account the Segal-Bargmann transform relating both pictures, as described in <cit.>.
As a general result we may conclude that, in order to fulfill the leaf independence of the kinematics encoded in (<ref>), any connection defining the quantum states and the ξ-parametric family of Gaussian measures that fulfills the constraint defined in eq. (<ref>) (relating the derivative of the ξ-parametric family of Gaussian measures, the connection and its adjoint) is valid.
A more geometric approach, associating the scalar product to a Kähler structure, and the properties of the compatible connection with its relations with the symplectic and complex structures, can be found in the supplementary material.
§.§.§ Preservation of the norm
From this constraint for the connection, together with the condition of not mixing bras and kets (or in the functional example given, the fact that it preserves the holomorphic nature of Ψ(ϕ)), we obtain a crucial physical implication: the scalar product of two quantum states, both being sections and with the scalar product being also ξ-dependent, does not change along the curves over the base of the bundle, i.e.:
∂_ξ⟨Ψ_1|Ψ_2⟩=0 ∀ξ∈[h,π,N,N^i].
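Explicitly, combining ∂_ξ|Ψ_2⟩=-Γ̂_h|Ψ_2⟩ and ∂_ξ⟨Ψ_1|=-⟨Ψ_1|Γ̂_h^T with the condition Γ̂_h^T=-Γ̂_h, the one-line check reads:
∂_ξ⟨Ψ_1|Ψ_2⟩=-⟨Ψ_1|(Γ̂_h^T+Γ̂_h)|Ψ_2⟩=0 .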
Although this result is derived from physical claims, under a geometrical perspective this is a natural consequence of our tensorial construction. If we choose the same connection for all the quantum tensors, and we restrict ourselves to considering kinematical transformations, which are tangent to covariant sections of the different bundles involved, scalar magnitudes such as the norm, must be constant.
In particular, this leads to the conservation of norm of the quantum state (⟨Ψ|Ψ⟩) under changes of geometric variables, lapse and shift:
∂_ξ⟨Ψ|Ψ⟩=0 ∀ξ∈[h,π,N,N^i]
which stands in striking contrast with the usual 3+1 picture of QFT in curved spacetime, where norm loss is ubiquitous, making room for the phenomenon of quantum completeness <cit.>.
As a summary, the main difficulty with a Hamiltonian picture of QFT in generic dynamical spacetimes is that the quantization procedure, or the choice of complex structure J_C over the field manifold, is most generally constructed in a leaf dependent way (this is also the case of the most physically sensible choice of J_C to our knowledge, the restriction to a spatial hypersurface of the one proposed in <cit.> and also used in <cit.>).
This implies that, for each ξ∈ℳ_G×ℳ_N,
* the states acquire a different functional representation as a Hida distribution σ_Ψ(ξ)=Ψ∈(𝒩)^',
* the scalar product will be associated to a different Gaussian measure Dμ_ξ over the space of fields and, in turn, the states are to be seen as belonging to a different Hilbert space, L^2(𝒩^',Dμ_ξ),
* the operators acquire a different representation to be self-adjoint for such measure (and vacuum phase).
Thus, the usual notions of Fock spaces, creation and annihilation operators, number operators, etc. are constructed in an a priori foliation-dependent way, which does not come as a surprise to the reader familiar with QFT in curved spacetime. While this may spoil the particle interpretation of the quantum states, this is not relevant for our purpose; with the aim of constructing consistent hybrid geometrodynamics compatible with an equivalence principle, the object of interest is the Poisson structure of QFT. Following the most sensible physical choice in our judgement, this structure has been made invariant in order to properly reproduce kinematical relations of symmetry generators and the foliation independence principle of geometrodynamics. This postulate is mathematically captured in (<ref>). The consequence is that the quantum states, constructed as sections, or more plainly put, smooth Hida-distribution-valued functions over the geometric manifold, σ:ℳ_G×ℳ_N→(𝒩)^', must fulfill eq. (<ref>) for a connection that fulfills (<ref>).
In order to separate clearly the leaf dependent (associated to physical interpretation) and leaf independent (associated to the symplectic) geometric structures, in the supplementary notes we abandon the usual functional picture of Hamiltonian QFT, in order to borrow the Poisson structure for QFT from the more abstract Geometric Formalism of Quantum Mechanics developed in <cit.> where the symplectic structure is readily available, and the Poisson tensor will be associated to its (weak) inverse. This Kähler formalism was adapted in <cit.> to QFT for a time dependent quantization, and the resulting connection already fulfilled (<ref>), but instead of claiming (<ref>), an equivalent condition is claimed for the time dependence of geometric structures.
§.§ Physical observables, generating functions of symmetries and supermagnitudes.
Armed with this Poisson algebra 𝒜_Q, we are able to represent the infinitesimal generators of symmetries, and in particular of hypersurface deformations, in a Hamiltonian language for QFT.
The association of physical magnitudes to operators is usually elucidated through the quantization procedure Q(f) of the functions representing such magnitudes at the classical level. As we are interested in the quantization of the classical geometrodynamical generators, we consider now the generating functional A∈𝒜_F of a certain symmetry a:ℳ_F→ℳ_F over the classical space of matter fields. This functional is mapped through the quantization Q to a linear operator Â:=Q(A) over the quantum states, and, in turn, it is mapped to a quantum generating function f_Â(Ψ)=⟨Ψ|Â|Ψ⟩ in the quantum algebra 𝒜_Q. Regarding the Hamiltonian fields, at the quantum level we try to reproduce the usual Hamiltonian dynamics over the classical fields, where the infinitesimal generator of the symmetry acting on any other element B of the classical algebra is given by the associated Hamiltonian field X_AB={ B,A}_C. At the quantum level, the infinitesimal generator of the symmetry is analogously given by the quantum Hamiltonian field X_f_A, whose action over any other element of the quantum algebra f_B∈𝒜_Q is given by:
ℒ_X_f_Af_B=ω_Q(X_f_A,X_f_B)={ f_B,f_A}_Q=-ħ^-1f_i[B̂,Â]
Note that this construction does not ensure the preservation of the subalgebra of constraints or generators under quantization, as the generating functions of two symmetries at the classical level and at the quantum level may not have the same closing relations under their respective Poisson brackets. We will return to this issue later.
Let us consider now the generating functions of hypersurface deformations for QFT. With N^xℋ_x^M(ϕ,π) being the generating function over classical fields of a normal deformation characterized by N^x, it is mapped through the quantization to a linear operator over the quantum states, Q(N^xℋ_x^M)=N^xQ(ℋ_x^M):=N^xℋ̂_x, and thus the associated element of the quantum Poisson algebra is given by
N^xf_ℋ̂_x=N^x⟨Ψ|ℋ̂_x|Ψ⟩∈𝒜_Q
Thus, we define the quantum superhamiltonian as the infinitesimal local generator of a normal deformation (still to be contracted with a distribution, N^x) in analogy with the classical case:
ℋ^Q(x):=⟨Ψ|ℋ̂_x|Ψ⟩
Analogously, the generating function N^ixℋ_ix at the classical field theory level of a tangential stretching of the hypersurface characterized by the vector distribution N^ix, is mapped through the quantization to the quantum generating function:
N^ixf_ℋ̂_ix=N^ix⟨Ψ|ℋ̂_ix|Ψ⟩ .
Thus, the associated local generators define the quantum supermomenta:
ℋ_i^Q(x)=⟨Ψ|ℋ̂_ix|Ψ⟩
It is interesting to define at this point the quantum Hamiltonian operator Ĥ, given by the quantization of the classical field matter Hamiltonian N^xℋ^M_x+N^ixℋ_ix^M:
Ĥ:=N^xℋ̂_x+N^ixℋ̂_ix
which of course has acquired further dependences on ξ∈ℬ in order to be self-adjoint for the given scalar product at each ξ.
On the other hand, if the compatibility of the symmetries is to be preserved under the quantization procedure, it would appear as a desirable property for the quantization Q that the generators of the symmetries of the system commute the same way they did in the classical field theory before constraints were imposed. In particular, they should commute with the generating function of the dynamics to be a proper (time-preserved) physical symmetry.
However, it is well known from Groenewold's no-go theorem<cit.> that Dirac's quantization relations:
Q({ f,g}_c)=-iħ^-1[Q(f),Q(g)]
do not hold for arbitrary functions f,g∈𝒜_c over the classical phase space, and thus the Poisson algebra structure of the functions in classical field theory is not mapped to the algebra of linear operators over quantum states equipped with its commutator, 𝒜_Q. Nevertheless, a crucial result from Groenewold's no-go theorem is that, if either f or g is a quadratic polynomial, then the equality is satisfied. Besides, if { f,g}_c=0, the equality is also trivially fulfilled.
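The simplest illustration, trivially within the quadratic exception, is given by the linear field observables themselves: from {ϕ^x,π_y}_c=δ^x_y and [Q(ϕ^x),Q(π_y)]=iħδ^x_y𝕀, one checks
Q({ϕ^x,π_y}_c)=δ^x_y𝕀=-iħ^-1[Q(ϕ^x),Q(π_y)] ,
in agreement with (<ref>).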
Note that, in particular, the classical field-theoretical supermagnitudes are at most quadratic in momenta. As explained in <cit.>, the matter supermomenta ℋ_ix must be linear in the matter momenta (quadratic in all field variables) as a consequence of the representation postulate. It ensures that the transformation on the fields is properly associated to a stretching of their domain, which is equivalent to a spatial diffeomorphism intrinsic to the leaf; thus, if higher powers of the momenta appeared in its generating function, such deformation would acquire an incoherent extrinsic nature.
On the other hand, in <cit.> the matter superhamiltonian is restricted, as an axiom, to be at most quadratic in momenta (sensible for the usual Hamiltonian construction, even more so regarding quantization), in order to have the usual canonicity relations between fields and momenta in relation with the generated dynamics.
As a result we know, from Groenewold's no-go theorem, that Dirac's quantization relations hold for these elements, and the corresponding elements of the quantum Poisson algebra reproduce the closing relations of the classical algebra:
{ N^μ xf_ℋ̂_μ x,N^νx^'f_ℋ̂_νx^'}_Q=N^μ xN^νx^'f_Q({ℋ_μ x,ℋ_νx^'}_c)
where μ,ν run over the indices 0,⋯,3, the non-zero values being associated to a basis of tangential stretchings and the zeroth to normal deformations. This result will be crucial to show that the quantum supermagnitudes fulfill the appropriate closing relations for a matter theory.
We must add that being at most quadratic in the field momenta becomes, in this construction, a requisite for the theory to be suitable for quantization in a natural way, exploiting this exception to Groenewold's theorem. Otherwise, higher orders in momenta would lead to closing relations for the quantized generators different from the ones fulfilled by the classical theory, which already represented the Lie-bracket closing relations of Dirac's generators; thus, the quantized supermagnitudes would not represent the closing relations of Dirac's algebra, which is a requisite for consistent matter theories <cit.>.
One must not forget a fundamental difference of this whole construction, in the case of Quantum Geometrodynamics, from the case of ordinary QM (even when coupled to classical variables as in <cit.>): in the current case the quantization procedure Q is leaf dependent. As explained before, the dependence of the operator on the leaf variables will be different from that of the classical function, ∂_ξ Q(f)≠ Q(∂_ξ f), as the quantization procedure itself is an additional source of leaf dependence.
We recall that this dependence arises as follows. Once a scalar product defining a Hilbert space and a vacuum state are chosen (or, equivalently, a complex structure over the classical phase space of fields <cit.>), the representation of the quantized (linear) operators is defined accordingly to be self-adjoint and consistent with the vacuum phase. The extension to higher-order operators is given by the choice of ordering (Wick's and Weyl's are both considered in <cit.> in a language adapted to this formalism). Therefore, all this procedure denoted by the quantization Q ends up being leaf dependent, as so were the scalar product and vacuum phase (or J_C), and while it serves us to map the functions over the classical field phase space to the space of linear operators over quantum states, it is done in a ξ-dependent fashion, properly adapted so that the operators are internal, at each ξ, to the ξ-dependent Hilbert space defined:
Q_ξ:C^∞(ℳ_F)→ H_⟨,⟩_ξ (ℳ_Q)
Along this line, it is relevant to consider that, given the condition (<ref>) for the connection, the elements of 𝒜_Q depend on ξ as:
∂_ξ f_Â=⟨Ψ|(∂_ξ Q(A)+[Γ̂,Q(A)])|Ψ⟩
In the following section it will be shown that when the observable A in (<ref>) is a supermagnitude, such closing relations in the whole hybrid theory will lead to consistency requirements between the leaf dependence of the quantization and the commutator with the connection. Let us proceed.
§ HYBRID GEOMETRODYNAMICS.
The ultimate goal of the preceding construction is the definition of a Hybrid Phase Space constructed from classical geometrodynamical variables and quantum wave functionals of the quantized matter fields, equipped with a Hybrid Poisson Bracket. In the previous section, we have proved that if we consider the bundle ℱ to represent the classical and the quantum states, the leaf-dependent tensorial structures on the set of quantum states can replace their classical analogues for classical matter fields, building the geometrodynamical generators as Hamiltonian vector fields with respect to the canonical quantum Poisson bracket { , }_Q. The next and final step is to combine the quantum supermagnitudes with the pure geometrodynamical ones to construct hybrid supermagnitudes with physical meaning that fulfill the total closing relations and satisfy the Momenta and Hamiltonian Constraints on shell, being therefore conserved by the dynamics. Thus, in this section we promote the gravitational degrees of freedom of the former section from a given background (h_ij(x,s),π^ij_h(x,s)) with parametric space-time dependencies to kinematical variables, their dynamics being no longer externally given, but coupled with the quantum matter fields.
Therefore, we proceed now to substitute the classical matter of Section 2 as sources of gravitation in a geometrodynamical framework, by the Quantum Field Theory described as in Section 3.
§.§ Hybrid phase space.
The construction of the hybrid phase space ℳ_H in <cit.> is made from the Cartesian product of the quantum and classical submanifolds, given that they are constructed independently of one another. In the case of hybrid geometrodynamics this is built analogously, considering as the manifold of hybrid states the bundle ℱ introduced in the previous section:
ℳ_H=ℱ.
Any point in the bundle defines a pair of a classical state (determined by the values of h_ij, π^ij, N, N^i) and a quantum state Ψ. Nonetheless, the quantum states depend on a connection ∇ defined on the bundle, which allows us to decompose the tangent space at any point of ℱ as in Equation (<ref>), the horizontal directions being tangent to the covariant sections of the bundle. Physically, these directions represent the set of quantum states which are related by a re-arrangement in the geometry only, without any quantum dynamics.
In contrast with the case in the previous section, now the purely geometrodynamical variables are not associated to a background spacetime, but are promoted to be part of the kinematical variables. Thus, we are now going to consider geometrodynamical transformations acting on both types of states, analogously to what happened in the case of the geometrodynamical construction for classical matter fields analyzed in section 2. We are going to prove that the quantum tools introduced in the previous section will replace their classical analogues and define a consistent implementation of geometrodynamics for the hybrid setting. Thus, the quantum Poisson bracket will be combined with the classical PB to create a well-defined hybrid Poisson bracket on the set of functions of ℱ, which will replace the functions of the classical manifold ℳ_G×ℳ_F. Note that the manifold of lapse and shift functions, ℳ_N, is present in the base of the bundle due to the parametric family of quantizations (representation of quantum states, operators and scalar product) which depend on them and need a way to be related (through the horizontal bundle defined above also for such variables), but they will not affect the kinematics (definition of Poisson tensors and closing relations for the local generators of hypersurface deformations) of either ℳ_G or ℳ_Q.
Note, however, that the quantum manifold is already dependent on the geometric degrees of freedom, the quantum states being given by the local evaluation of the quantum sections defined in (<ref>). Remember that the motivation to introduce the sections of quantum objects was to implement the kinematical structure that allows us to define derivatives of quantum objects w.r.t. ξ∈ℬ, such as ∇_ξσ_Ψ=0. Now, we encounter the same problem but, besides, the geometric degrees of freedom become an ingredient of the model and not a parameter external to the theory. Hence, geometrodynamical transformations must act on both sets. Therefore, as happened with the quantum degrees of freedom, the representation of objects becomes rather large but, at the same time, simple to understand. Sections will not directly represent physical states, but both the state and its kinematical information in relation with the ξ-dependence of the quantization.
Thus, a physical hybrid state is characterized by the geometrical variables and the Hida distribution for the quantum state given by the section at such point of the base, (h_ij,π^ij_h,N,N^i,Ψ=σ(h,π_h,N,N^i))∈ℳ_H-states and, intuitively, it encodes all the physical information for each spatial leaf Σ_s. Note that this expression for the hybrid state (which also includes the lapse and shift functions for completeness, although they are not kinematical variables) is the representation of the section σ_Ψ as the local trivialization of the fibration on a given point of ℬ.
One must realize that, while such evaluation point was externally given in the former section as a parameter, in this section it will be a dynamical variable, and thus we must consider the manifold of evaluation points (which is precisely ℬ) as part of the kinematical information. In this sense, the kinematical hybrid state can be regarded as (h_ij,π^ij_h,N,N^i,σ_Ψ)∈ℳ_H-kin, given by such element of the base and the quantum section, which, once the latter is evaluated on the former, will yield the physical hybrid state.
Therefore, the kinematical hybrid phase space can be identified with the Cartesian product of the base (from which the particular geometry of the leaf is chosen) and the space of quantum sections (seen as applications yet to be evaluated on the geometry data to yield the quantum state), each point of it providing us with the geometry of the leaf h,π_h, the lapse and shift functions N,N^i, and a quantum section σ_ψ. Nevertheless, this phase space is too large to accommodate the physical phase space of the theory, although it inherits trivially the necessary kinematical structures from ℳ_Q. To recover the physical states we must constrain the system to a submanifold ℳ_H-states that takes the quantum state picked up from ℳ_Q, evaluated on the purely geometrodynamical state picked from the first element in the Cartesian product, ℬ. The constraint from the kinematical phase space to the physical one is defined by the evaluation projection, from which it follows that ℳ_H-states= ⋃_σ∈ℳ_QG(σ), with G(σ) being the graph of the section.
This is why, regarding the section as such a graph, we need this somewhat duplicated structure for the hybrid phase space (as the base appears in ℳ_Q for the sections, but also in the Cartesian product defining ℳ_H), to pick up firstly the evaluation point, and secondly the graph that (once evaluated on such point) yields the quantum state. [Given this apparent reduplication, one could be tempted to define the hybrid phase space as the quantum sections alone, as we already had geometrical data on the base of the fibration. Nevertheless, from an element of ℳ_Q defining the section, we could not define the geometry of Σ_s, but instead all possible quantum states for all possible geometries, and we would be lacking the physical information that should be encoded in a point of the configuration space.
]
Note that, this difference between physical state and kinematical state is precisely the same difference between the quantum sections belonging to ℳ_Q and the quantum states as introduced in the previous section, but now, inherited to the hybrid case.
Note also that, as happened in the previous section, the dynamics of lapse and shift are not included in the Hamiltonian description, but are a result of the choice of foliation of spacetime. Thus, such dynamics are externally given and, in fact, the choice of any particular dynamics for them should not have any physical impact, as they are associated with a symmetry of the formalism, each dynamics defining a path among possible hypersurfaces reproducing a certain foliation, which is at the core of the path independence principle as stated in <cit.>.
Therefore, armed with a hybrid state providing the whole information on a Cauchy surface of initial data, and with the reconstruction of the evolution field E from the lapse and shift at each foliation label, we can generate the physical information for the whole spacetime. Consequently, the whole quantum matter and geometrical content of the Universe is equivalent through E to the initial conditions on Σ_0, i.e. the intrinsic 3-metric h_ij(0) on Σ_0, its conjugate momenta π^ij_h(0), and the quantum state Ψ=σ(h(0),π_h(0),N(0),N^i(0)) for such initial leaf data that populates the quantum fields all over Σ_0.
The connection for the quantum section will allow us to determine the change on the quantum state due to the change of the geometric variables, gluing together the unitarily inequivalent Hilbert spaces defined for infinitesimally neighbouring hypersurfaces.
§.§ Algebra of observables.
As was the case in the hybrid systems studied in <cit.>, the hybrid algebra of observables can be represented by the expectation values of self-adjoint linear operators over the quantum manifold ℳ_Q which have an infinitely differentiable dependence on the classical variables. In this sense, in terms of abstract C^⋆-algebras one can consider that the hybrid algebra of observables is given by 𝒜_H:=𝒜_G⊗𝒜_Q <cit.>. In such a framework, the operators given by a classical function multiplying the identity were considered purely classical observables, while the ones independent of the classical degrees of freedom were thought of as purely quantum. This differentiation was based on the fact that, given that the norm of the quantum state was conserved, the expectation value of the identity was a conserved quantity, fixed to unity.
In this current geometrodynamical case the spirit is the same, but we have already taken one step ahead as 𝒜_Q was defined as the expectation value of leaf dependent operators, for hermitian product and states that were also leaf dependent, so the quantum observables are already dependent on the classical variables, which are now of kinematical nature, no longer an external parameter. Thus, in our framework the hybrid algebra of observables is precisely identified with the quantum one
𝒜_H:=𝒜̅_Q ,
where 𝒜_Q is defined as in Equation (<ref>), noticing that now its dependence on ξ∈ℬ is not regarded as an external parametric dependence. Notice, nonetheless, that the resulting set is isomorphic to the completion under the product of 𝒜_G⊗𝒜_Q.
We are already considering its completion as defined in (<ref>) for reasons that will become clear later. We will now illustrate how these hybrid observables are constructed and how they represent physical magnitudes, either material or purely geometrical.
Consider first a classical field-theoretical magnitude A∈ C^∞(ℳ_F) that depends only on the matter fields, not on the geometric ones. It can be expressed as a polynomial in the matter field distributions ϕ^x⃗ and their associated momenta π^y⃗ as:
A=∑_ij a_ijx⃗,y⃗ϕ^ix⃗π^jy⃗
where the coefficient functions a_ijx⃗,y⃗ are leaf independent. The element of 𝒜_Q given by ⟨Ψ| Q(A)|Ψ⟩ would have been a purely material observable at the classical level, but acquires a hybrid nature due to the leaf dependence of the quantization, states, and scalar product.
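As a purely illustrative instance (our own example, not drawn from the references), consider the linearly smeared field A=f^x⃗ϕ^x⃗ for a fixed, leaf-independent test distribution f. Its hybrid observable reads
⟨Ψ| Q_ξ(A)|Ψ⟩_ξ=f^x⃗⟨Ψ|ϕ̂_ξ^x⃗|Ψ⟩_ξ ,
which varies from leaf to leaf through the ξ-dependence of the representation ϕ̂_ξ, of the states and of the scalar product, even though A itself carries no geometric content.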
Let us now consider a function B over the classical matter fields whose coefficients already depended on the geometric variables at the classical level (for example, this is the case for the superhamiltonian of a classical scalar field), so:
B:=∑_ij b_ijx⃗,y⃗(h,π_h) ϕ^ix⃗π^jy⃗,
the element of 𝒜_Q given by ⟨Ψ| Q(B)|Ψ⟩ is a mixed matter-geometrical observable both before and after quantization, but acquires an additional leaf dependence through the leaf dependence of the quantization, states, and scalar product.
Lastly, if we consider a function solely of the geometric variables, g(h,π)∈𝒜_G, and perform a quantization procedure for the matter fields, given its independence of such fields we get Q(g)=g𝕀. Thus, the hybrid observable representing this purely geometrical observable is given by f_g𝕀=g(h,π)⟨Ψ|𝕀|Ψ⟩∈𝒜_H. Given the norm conservation under changes of leaf variables derived from (<ref>), and provided the total hybrid dynamics preserves the norm of the quantum state, under the initial constraint of unit norm the observable in the geometric algebra of observables and its hybrid counterpart are equal at all times and under all symmetry transformations, f_g𝕀(h,π,Ψ)=g(h,π)
[
If the theory had been constructed to have norm loss for the quantum states, then one would have defined explicitly 𝒜_H:=𝒜_G∪𝒜̅_Q, to be able to include the purely gravitational observables independently of the norm of the quantum state.].
Summarizing: in our framework, the hybrid algebra of observables consists of functions (and products thereof) defined as the expectation value, with respect to the leaf-dependent quantum state Ψ∈ℳ_Q and the leaf-dependent scalar product ⟨,⟩_ξ, of the Hermitian operator Q_ξ(f) resulting from the (again leaf-dependent) quantization procedure for the matter fields applied to functions f∈ C^∞(ℳ_G×ℳ_F) over the space of classical matter fields and geometric variables.
Given the construction of the abstract 𝒜_H equivalent to 𝒜_G⊗𝒜_Q and the Poisson structures present in the representations as functions over each phase space of 𝒜_G and 𝒜_Q, we may endow the representation of 𝒜_H with a bilinear operator given by the sum of both of them:
{,}_H={,}_G+{,}_Q ,
which can be shown to be a Poisson bracket over the set of observables contained in 𝒜_H. Leibniz, antisymmetry and bilinearity are immediately inherited from the properties of {,}_G and {,}_Q over their respective algebras. Nevertheless, to fulfill the Jacobi identity (and thus be a proper Poisson bracket) a compatibility condition must be enforced, which is precisely eq. (<ref>), justifying all the more the leaf independence of the quantum PB.
It is trivial to check that 𝒜_H forms a Poisson algebra under {,}_H, while 𝒜_Q is not a proper subalgebra, given that
{ f_A,f_B}_G=f_∂_h A+[Γ_h,Â] f_∂_π B+[Γ_π,B̂]-(A↔ B) ,
where we have made use of (<ref>), compacted in the subindex notation. Hence the definition of 𝒜_H as the quantum algebra completed under the ordinary product, 𝒜̅_Q.
Over this hybrid Poisson algebra, the infinitesimal generators X_A of any Hamiltonian transformation a:ℳ_H→ℳ_H acquire a Hamiltonian representation over 𝒜_H through its generating function, f_A, such that, for any hybrid observable F:
ℒ_X_A F={ F,f_A}_H ∀ F∈𝒜_H
In particular, the generating functions representing the local generators of hypersurface deformations, the hybrid superhamiltonian ℋ_H and supermomenta ℋ_iH, are
constructed invoking the equivalence principle as in <cit.>: the hybrid supermagnitudes must be built as the sum of the pure geometrodynamical (classical) supermagnitude and the matter (quantum) one:
ℋ_H:=ℋ(h,π;x)⟨Ψ|Ψ⟩+ℋ_Q(h,Ψ,Ψ̅;x) ,
ℋ_iH:=ℋ_i(h,π)⟨Ψ|Ψ⟩+ℋ_iQ(h,Ψ,Ψ̅) .
Note, however, that the gravitational supermagnitude appears multiplied by the norm of the quantum state just so that the supermagnitudes strictly belong to 𝒜_H; in our framework ⟨Ψ|Ψ⟩ may simply be a constant, which can be set to 1.
It is thoroughly argued in <cit.> that, in the classical field case, the matter supermomenta were independent of the gravitational variables, and the matter superhamiltonian was only local in h (i.e. independent of the derivatives of the metric and of the gravitational momenta). Nevertheless, in this hybrid case the quantization procedure (together with the states and scalar product) adds non-trivial dependences on the geometric variables, lapse and shift, which may even be of derivative nature. For example, the complex structure J_C in <cit.> adapted to the foliation presents spatial derivatives of the metric and of N, N^i, and so will the quantum connection.
However, we will restrict ourselves to the case where the quantization does not depend on the geometrodynamical momenta, and thus the sections representing the states are constant in π_h:
∂_π_hσ_Ψ(ξ)=0, Γ_π_h=0 and
∂_π_h⟨Ψ| Q(f)|Ψ⟩=⟨Ψ| Q(∂_π_hf)|Ψ⟩
where the last equality means that we allow the observables to depend on π_h as they did before quantization, but they do not acquire further dependences on it from the quantization procedure, scalar product or states. The dependence on the 3-metric, lapse and shift remains fully general.
We study this case because, firstly, it includes the case considered in <cit.>, and secondly, we judge it to be not only the simplest but also the most physical choice, which can be argued as follows.
Given the geometrodynamical principle of equivalence as stated in <cit.>, the classical field geometrical structures and generating functions must be regarded as independent of π_h (they may depend on the intrinsic geometry of the hypersurface, as is h, but not on extrinsic information). If the complex structure J_C over the field manifold used to define the quantization is chosen in reference to the Hamiltonian field over such space (as is the case of <cit.>, following some physical criterion such as positive energy flux along E), then there is no source of π_h dependence in the quantization procedure. In any case, the discussion for the general case with arbitrary dependence on all geometric variables is easily (though cumbersomely) generalizable from the following one.
Returning to the hybrid supermagnitudes, we will now see how they fare regarding the closing relations appropriate for Dirac's generators, (<ref>),(<ref>) and (<ref>).
Let us make two observations to simplify the problem. Firstly, for the purely geometrodynamical supermagnitudes we know that: i) under the gravitational bracket they already fulfilled all closing relations, and ii) their quantum Poisson bracket with any other function in the algebra is null, as the identity commutes with all operators.
Secondly, the gravitational bracket of any two quantum supermagnitudes is null, {ℋ_μ Q(x),ℋ_ν Q(x^')}_G=0, given the restriction to π_h-independent quantization and the independence of the classical field supermagnitudes of it, as argued in <cit.>.
Thereby, for μ,ν=0,⋯,3, with index 0 denoting the superhamiltonian and nonzero indices the corresponding supermomenta, the hybrid Poisson bracket of any pair of hybrid supermagnitudes yields:
{ℋ_μ H(x),ℋ_ν H(x^')}_H={ℋ_μ(x),ℋ_ν(x^')}_G+
{ℋ_μ Q(x),ℋ_ν Q(x^')}_Q+{ℋ_μ(x),ℋ_ν Q(x^')}_G+
{ℋ_μ Q(x),ℋ_ν(x^')}_G
On the quantum side, we must remember that the classical field supermagnitudes are at most quadratic in momenta, and we have made use of the exception for (at most) quadratic polynomials in Groenewold's no-go theorem, obtaining that the quantum Poisson bracket of quantum supermagnitudes is the expectation value of the quantization of the classical field Poisson bracket of the classical field supermagnitudes, summarized in eq. (<ref>).
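Stated compactly in our notation (a restatement of the property just invoked, not an additional assumption):
{⟨Ψ| Q(A)|Ψ⟩,⟨Ψ| Q(B)|Ψ⟩}_Q=⟨Ψ|(1/iħ)[Q(A),Q(B)]|Ψ⟩=⟨Ψ| Q({ A,B}_F)|Ψ⟩ ,
valid for A,B at most quadratic in the field momenta, where {,}_F denotes the classical field Poisson bracket.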
At the level of local generators, this implies that the quantum supermagnitudes ℋ_Q and ℋ_iQ already fulfill (<ref>) and (<ref>) for the quantum Poisson bracket, as did their classical field counterparts.
The supermomenta-superhamiltonian PB is, however, a bit trickier. For such a quantum Poisson bracket, we obtain the expectation value of the quantization of (<ref>):
{ℋ_iQ(x),ℋ_Q(x^')}_Q=
-2h_iα(x)D_x^j⟨Ψ| Q(δℋ^M (x^')/δ h_α j(x))|Ψ⟩+ℋ_Q(x)∂_x^iδ(x,x^') .
However, the crossed gravitational terms yield, as in (<ref>):
{ℋ_i(x),ℋ_Q(x^')}_G=2 h_iαD_x^jδℋ_Q(x^')/δ h_α j(x)
and
{ℋ_iQ(x),ℋ(x^')}_G=G_ijkl(x^')π^ij(x^')δℋ_iQ(x)/δ h_α j(x^')
Therefore, in order to fulfill (<ref>) for the total hybrid supermagnitudes, we must have:
G_ijkl(x^')π^ij(x^')δℋ_iQ(x)/δ h_α j(x^')+2 h_iαD_x^jδℋ_Q(x^')/δ h_α j(x)
-2h_iα(x)D_x^j⟨Ψ| Q(δℋ^M (x^')/δ h_α j(x))|Ψ⟩=0 .
Given that neither the quantization procedure nor the classical field supermagnitudes depend on π_h, from Equations (<ref>) and (<ref>), we obtain that the first term of equation (<ref>) must be null on its own, as the rest of the terms do not depend on π_h:
δℋ_iQ(x)/δ h_α j(x^')=0 ∀ Ψ,h,N,N^i⇒
δ Q(ℋ_iM(x))/δ h_α j(x^')+[Γ̂_h,Q(ℋ_iM(x))]=0
obtaining a new constraint for the quantum connection. In order for the two remaining terms to cancel, we must fulfill:
h_iαD_x^jδℋ_Q(x^')/δ h_α j(x)-h_iα(x)D_x^j⟨Ψ| Q(δℋ^M (x^')/δ h_α j(x))|Ψ⟩=0 .
which, using eq. (<ref>), implies one last constraint on the quantum connection:
δ Q(ℋ^M(x))/δ h_α j(x^')+[Γ̂_h,Q(ℋ^M(x))]=Q(δℋ^M (x^')/δ h_α j(x))
Moving on to the other two closing relations: given that the first two brackets on the right side of equation (<ref>) already fulfill (<ref>) and (<ref>) for their respective Poisson structures, the sum of the remaining two, given by the gravitational brackets crossing gravitational and quantum matter supermagnitudes, must be null in order to fulfill such closing relations for the hybrid supermagnitudes:
{ℋ_μ(x),ℋ_ν Q(x^')}_G+{ℋ_μ Q(x),ℋ_ν(x^')}_G=0
for μ=ν=0 and μ≠ 0,ν≠ 0.
For the supermomenta-supermomenta PB, these terms are null because of equation (<ref>).
In the classical case, these terms for the superhamiltonian-superhamiltonian PB cancel because the matter superhamiltonian depends only ultralocally on h and thus does not depend on π_h, while the purely geometrodynamical superhamiltonian depends locally on π_h; hence the antisymmetry of the PB yields the desired cancellation. In the quantum case, making use of eq. (<ref>), one directly inherits this property from the classical theory.
At this point we can claim that we have successfully reproduced the closing relations for the generators of Dirac's group of hypersurface deformations on a hybrid geometrodynamical phase space for QFT and classical geometry, at the cost of a quantum connection for the quantum states, which must fulfill three constraints given by eqs. (<ref>,<ref>,<ref>).
Note that all these conditions are just analogous to claiming that the quantum supermagnitudes behave as their classical field counterparts regarding their dependence on h, π_h, i.e.:
⟨Ψ| Q(δℋ_AC (x^')/δ h_α j(x))|Ψ⟩=δ/δh_α j(x)ℋ_AQ (x^')
Generalizing this result, it would appear desirable that the quantum connection allows us to extend this property to the generating functions of any symmetry of the system and, ideally, to the whole algebra of operators:
∂_ξ Q(A)+[Γ̂,Q(A)]=Q(∂_ξ A)
Notice that this expression represents the compatibility condition between the quantization mapping (which is not tensorial) and the behavior of the quantum connection ∇. To what extent (if any) it can be achieved for arbitrary observables is beyond the scope of this paper and will be the subject of future investigation.
Therefore, we can summarize this section by stating that we have found the hybrid generating functions of hypersurface deformations, given by (<ref>) and (<ref>), which appropriately reproduce the closing relations of Dirac's group for the hybrid Poisson bracket (<ref>), under the consistency requirement given by (<ref>), which implies constraints on the quantum connection.
Note that all this construction depends on the generating function at the quantum level being given by eq. (<ref>).
This is the simplest case; a more general one is considered in the appendix, where the generating functions can be shifted by the expectation value of an operator proportional to the identity, but with a priori non-trivial dependences on h,π_h. In such a case, eq. (<ref>) can be softened in an appropriate manner to allow less restrictive quantizations or choices of J_C. This is possible because the quantum states contain the same physical information if they belong to the same projective ray.
§.§ Constraints.
Lastly, we must enforce the path independence criterion: the physical Cauchy data (h(0),π(0),Ψ(0)) defined on an initial hypersurface Σ_0 should yield under evolution the same Cauchy data (h(t),π(t),Ψ(t)) on a final hypersurface Σ_t, independently of the path chosen from one leaf to another, i.e. of the sheaf of intermediate hypersurfaces Σ_τ ∀τ∈(0,t) that constitutes the foliation of that region of spacetime.
Given that, from one leaf, the infinitesimally following leaf in the foliation can be generated through a combination of normal and tangential deformations, path independence implies, at the infinitesimal level, that any two arbitrary deformations should provide the same physical data independently of the order in which they are applied. This means that, when evaluated on physically relevant matter and metric distributions, i.e. a hybrid state (h,π_h,Ψ), the application of the deformations should commute, and thus the closing relations should be null on shell (i.e., over the submanifold defined by the constraints). Consequently, as in the classical case, the Hamiltonian and momenta first-class constraints must be enforced on hybrid geometrodynamics. Thus, one can only consider physical situations (initial data) where the following equations are fulfilled:
ℋ_H(h,π,Ψ;x)≃ 0
and
ℋ_iH(h,π,Ψ;x)≃ 0 .
As a technical note, one may consider that the nullity required in these constraints for hybrid supermagnitudes involving the expectation value of local quantum operators must be understood as the nullity of all expectation values of hybrid operators constructed as contractions of any distribution with the supermagnitudes, f^xℋ_Hx=0 ∀ f^x∈ D'(Σ).
These constraints, together with the closing relations (which are linear in the supermagnitudes and, thus, null on shell), ensure that we have successfully enforced the 3+1 equivalent of the General Covariance of Einstein's gravity in the hybrid theory, representing Dirac's group of hypersurface deformations together with the first-class constraints that ensure path independence and, therefore, foliation invariance.
Besides, in relation to the dynamics that we will see in the following section, this implies that, on physical data, the total Hamiltonian function f_H (geometric and quantum parts, the latter containing the expectation value) is always null for the whole hybrid universe, similarly to the Wheeler-DeWitt equation; but in this case this nullity applies at each leaf, and the 3-metric and its momentum are still classical.
§.§ Dynamics for generic hybrid observables and preservation of first class constraints.
For a generic element of the hybrid algebra of observables, F∈𝒜_H, the effect of the hypersurface deformation characterized by the evolution field E_s, with a normal deformation of size N^x and tangential deformations given by N^ix, should reproduce (<ref>). The Hamiltonian representation of Dirac's algebra allows us to identify:
ℒ_E^⋆ F={ F,f_H}_H
with the Hamiltonian function
f_H:=N^xℋ^H_x+N^ixℋ^H_ix.
Let us consider that such an F is now an appropriate function over hybrid phase space that is also lapse, shift and s-label dependent, constituting an (N,N^i,s)-parametric family of hybrid observables. This constitutes the most general case of a dynamical quantity in the Hamiltonian framework. The dynamics of such a functional F will be given by:
d_s F={ F,f_H}_H+(Ṅ^x∂_N^x+Ṅ^ix∂_N^ix+∂_s)F
where we take into account that the lapse and shift functions carry their own dynamics, given by Ṅ^x=∂_s N^x and Ṅ^ix=∂_s N^ix, known beforehand and determining the choice of foliation (or a certain path along possible hypersurfaces).
Let us now examine the case of the Hamiltonian and momenta constraints. If we consider the effect of the Hamiltonian field over the supermagnitudes, given that the closing relations (<ref>,<ref>,<ref>) are linearly proportional to the constraints, and therefore, null on shell, we obtain:
{ℋ^H_μ x,f_H}_H≃ 0 ∀μ=0,⋯,3
Thus, the only contribution to the dynamics of the supermagnitudes when the constraints are enforced is their lapse and shift dependence. Such dependence was not present in the classical field-theoretical case, but is acquired through the quantization procedure and the sectional nature of quantum states. Nevertheless, such dependence must be made null, given that the constraints must be preserved during the dynamics in order to maintain the path independence principle.
Given that this must be true for all foliations or paths between hypersurfaces, it must be true for any choice of evolution for lapse and shift. Consequently, we must enforce that:
δ/δN^μ(x^')ℋ_ν(x)=
δ/δN^μ(x^')ℋ_ν^Q(x)= 0 ∀ μ,ν=0,⋯ ,3
which implies, at the level of the quantum connection,
δ/δN^μ(x^')Q(ℋ^M_ν(x))+[Γ̂_N^μ(x^'),Q(ℋ^M_ν(x))]=0
Note, therefore, that if eq. (<ref>) is not fulfilled, even though the constraints are conserved under the Hamiltonian dynamics given by {,f_H}_H, they would not be conserved along the curve the lapse and shift functions follow. Thus, at some step in the evolution the hybrid states would abandon the submanifold of null supermagnitudes, and path independence would be lost from that point onward, falling into the Hamiltonian analogue of losing general covariance.
Note that this is equivalent to extending equation (<ref>) to derivatives not only with respect to the purely geometrodynamical variables h,π_h but also with respect to lapse and shift (as the classical field magnitudes did not depend on them). This points to a general trend: symmetry generating functions at the quantum level must present the same kinematical relations and leaf dependence as in the classical field theory to properly close with the pure geometrodynamics, and the quantum connection is chosen to provide such compatibility (associated with the leaf-dependent family of vacua invariant under the symmetries).
Furthermore, it could be argued that the generating functions of symmetries are to be considered as kinematical objects, and as such, even if built in terms of leaf dependent objects through this leaf dependent quantization, they must be intrinsically independent of lapse and shift and not acquire further dependences on h due to the quantization procedure.
Lastly, based on the representations over the hybrid phase space of hypersurface deformations, we can evolve a general hybrid functional over hybrid phase space, F(Ψ,Ψ̅,h_ij,π^ij):ℳ_H→ℂ, through the whole foliation. Infinitesimally, a general deformation of the hypersurface acts on F through its phase space representation as:
δ F/δ s={ F,ℋ_Hx}_H N^x_s+{ F,ℋ_iHx}_H N^ix_s={ F,f_H}_H .
To write this in a more compact notation, we have defined the lapse-shift-dependent Hybrid Hamiltonian function as:
f_H(N,N^i):ℳ_H→ℝ
f_H(h,π,Ψ,Ψ̅;N,N^i):=ℋ_xN^x+ℋ_ixN^ix+⟨Ψ|Ĥ_Q|Ψ⟩ .
§.§ Hybrid equations of motion and properties of the dynamics.
Having introduced above the evolution equations for any hybrid observable, we consider it illustrative to write down explicitly the equations of motion governing the dynamics of the 3-metric, its associated momenta and the quantum states. Since the change of the 3-metric from a hypersurface to a neighbouring one can only be due to the extrinsic curvature and spatial diffeomorphisms, its differential equation is as in ordinary geometrodynamics, matter independent:
d/ds h_ij(x)={ h_ij,f_H}_G
=2N(x)G_ijkl(x)π^kl(x)+2(D_iN^k(x))h_kj(x)
On the other hand, the geometric momenta do couple with matter:
d/dsπ^ij(x)={π^ij,f_H}_G=
-∂_h_ij(x)H_G-N^x^'⟨Ψ| Q(∂_h^ij;xℋ^M_x^')|Ψ⟩
where the first term is the usual one from pure geometrodynamics and the second is the coupling with quantum matter, which, thanks to the constraints on the quantum connection, has no contribution from the quantum supermomenta, given (<ref>), while the derivative w.r.t. h of the quantum superhamiltonian fulfills (<ref>). We identify this last term as the backreaction of quantum matter on the gravitational dynamics. Lastly, the quantum state evolves as:
d/dsΨ={Ψ,f_H}_Q+{Ψ,f_H}_G+Ṅ^x∂_N^xΨ+Ṅ^ix∂_N^ixΨ
=(-(i/ħ)Ĥ-{ h_ij;x,f_H}_GΓ̂_h_ij;x-Ṅ^xΓ̂_N^x-Ṅ^ixΓ̂_N^ix)Ψ
where we remind that such quantum connections must fulfill eqs. (<ref>,<ref>,<ref>,<ref>), and the notation for continuous indices implies the contraction through integration over Σ of the local connections with the s-derivative of their associated local variables ({ h_ij;x,f_H}_GΓ̂_h_ij;x=∫_Σ d^3x∂_π^ij(x)f_HΓ̂_h_ij(x), and equivalently for lapse and shift s-derivatives and local connection). Besides, we remind that the quantum Hamiltonian operator Ĥ and the connections are (h,N,N^i)-dependent.
This system of equations defines the hybrid dynamics. Note that it cannot be unitary, given the non-linearity arising from the mutual backreaction. Nevertheless, norm conservation is assured by the quantum connection, even if the definition of the scalar product (and thus of the Hilbert space) is leaf dependent, as already happened in the Ehrenfest equations for finite-dimensional hybrid systems <cit.>.
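To make the structure of this system concrete, the following purely illustrative Python sketch integrates a finite-dimensional truncation of the hybrid equations with one explicit Euler step (ħ=1; the function names, the truncation of the fields to flat arrays, and the placeholder dynamics are our assumptions, not part of the formalism):

import numpy as np

def hybrid_euler_step(h, pi, psi, ds, dh_ds, dpi_ds, H_op, Gamma):
    # One explicit Euler step of the hybrid dynamics (toy truncation).
    dh = dh_ds(h, pi) * ds                       # geometry: dh/ds
    dpi = dpi_ds(h, pi, psi) * ds                # includes the quantum backreaction
    # Vertical (Schroedinger-like) part plus horizontal (connection) part;
    # Gamma(h, dh) is already contracted with the step dh of the leaf variables.
    dpsi = -1j * (H_op(h) @ psi) * ds - Gamma(h, dh) @ psi
    return h + dh, pi + dpi, psi + dpsi

# Trivial usage: 3 geometric modes, a 2-level quantum truncation, zero connection.
h0, pi0 = np.zeros(3), np.array([0.1, 0.0, 0.0])
psi0 = np.array([1.0, 0.0], dtype=complex)
h1, pi1, psi1 = hybrid_euler_step(
    h0, pi0, psi0, 1e-3,
    dh_ds=lambda h, pi: pi,                      # placeholder for {h, f_H}_G
    dpi_ds=lambda h, pi, psi: -h,                # placeholder backreacted {pi, f_H}_G
    H_op=lambda h: np.diag([0.0, 1.0 + h.sum()]),
    Gamma=lambda h, dh: np.zeros((2, 2)),        # trivial quantum connection
)

In a faithful implementation, dpi_ds would carry the expectation-value backreaction term of the equation above, and Gamma would fulfill the constraints on the quantum connection discussed in this section.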
Note, on the other hand, that the connections appear acting as operators on Ψ, but they are not self-adjoint for the scalar product over such a leaf, ⟨,⟩_ξ. Thus, for the scalar product associated with a certain ξ∈ℬ content of the leaf, they would also induce a non-unitarity. In fact, the main property of such a connection is that it always maps the quantum state out of the Hilbert space to which it originally belonged (given by ⟨,⟩_ξ), in order to fit it into the infinitesimally neighbouring Hilbert space along the path in the section of scalar products ⟨,⟩_ξ+δξ, given by the infinitesimal change δξ of (h,π_h,N,N^i). Nevertheless, the quantum connection compensates for the change of the scalar product itself under changes of ξ, as seen in eq. (<ref>), and for such a leaf-dependent definition of the scalar product we achieve norm conservation. The proof is immediate by applying eq. (<ref>) to the norm:
d/ds⟨Ψ|Ψ⟩_ξ={⟨Ψ|Ψ⟩_ξ,f_H}_G+Ṅ^μ x∂_N^μ x⟨Ψ|Ψ⟩_ξ
+(i/ħ)⟨Ψ|[Ĥ,𝕀]|Ψ⟩_ξ=0
where the first two terms are null because the image through horizontal sections of the tangent fields on ℬ conserves the ξ-dependent scalar product, while the quantum Poisson bracket is null because it involves the commutator of the (self-adjoint for the given ⟨,⟩_ξ) Hamiltonian operator with the identity.
Contrary to the case of the horizontal lift of the geometric transformation, the term with the quantum Hamiltonian is Hilbert space preserving, accounting not for changes of the base, but only for the vertical evolution on the leaf.
Therefore, one may decompose the hybrid dynamics of the quantum state as the sum of a vertical vector field on the fibration and a horizontal one. We must remember that the quantum state as an element of ℳ_Q is defined as σ_Ψ|_ξ, the evaluation of a covariant section σ_Ψ∈ℳ_s at a given point ξ∈ℬ. In this context, the vertical vector, given by the quantum Poisson bracket that provides the Schrödinger-like term, provides the change within the space ℳ_s of covariant sections, but keeps the evaluation point ξ∈ℬ static. On the other hand, the horizontal one, given by the image through σ_Ψ of the gravitational Poisson bracket and the tangent vector to the curve defining the lapse and shift along the foliation, provides the change of the evaluation point in ℬ, without changing the covariant section σ_Ψ. Therefore, together they yield the total evolution of the quantum state, taking into account both the change of Hilbert space due to the change of ξ and the change of the quantum state within the original Hilbert space, as illustrated in Figure <ref>.
§ DISCUSSION.
We have constructed a hybrid geometrodynamical framework with a classical 3-metric and associated momenta and quantum field-theoretical matter. In order to successfully represent the infinitesimal generators of Dirac's group of hypersurface deformations in a Hamiltonian way over this hybrid manifold, the introduction of the quantum connection has proven crucial. Such a quantum connection is related to a notion which is widespread in the literature: the time dependence of the vacuum state in quantum field theory in curved spacetime, which leads to time-dependent Fock spaces or, analogously, field measures defining a Hilbert space. Depending on the interpretation, this usually leads to a pandemonium of phenomena, from norm loss of quantum states, of difficult physical understanding from the probabilistic point of view, to dubious particle interpretations when Lorentz invariance was missing in the first place. Under this formalism there is no need for such interpretations, as norm loss is entirely avoided and the variety of Hilbert spaces is clear from the beginning.
In particular, using the mathematical construction of <cit.>, we have introduced the concept of a ξ-parametric family of quantizations (equivalent to the time-dependent vacuum), each of them characterized by a measure, a representation of operators, and states. Such quantizations are unitarily inequivalent, contrary to the finite-dimensional case. Therefore, we introduce a non-unitary operator to relate them, which does not preserve the Hilbert space structure it acts on, but instead transports the state to the new Hilbert space resulting from the change of ξ. Such an operator functionally represents the connection 1-form. From the requirements for consistent hybrid geometrodynamics, we obtain that it must fulfill certain constraints, which in the auxiliary mathematical notes are related to the covariance of Kibble's Kähler structure (<cit.>). It has remarkable physical implications: it leads to norm conservation throughout the evolution, ensures that the quantum supermagnitudes (generating functions of hypersurface deformations) reproduce Dirac's algebra, and helps preserve the hybrid geometrodynamical constraints throughout the foliation.
On the other hand, the reader might have noticed that the process that has led to the identification of the quantum supermagnitudes is slightly misaligned with the spirit of geometrodynamics <cit.>. Instead of invoking the representation postulate to find the supermomenta as purely spatial diffeomorphisms on the leaf, ℒ^⋆_N^iΨ={Ψ,ℋ_i^Q}_Q, we have resorted to the quantization of the appropriate generating functions over the classical field theory to identify them. In the end, the procedure is analogous, for a consistent quantization procedure of the observables yields the appropriate operators representing the transformation for the functions adapted to a certain L^2, taking into account that the momentum operator is represented as a derivative corrected by the measure and vacuum phase in order to be self-adjoint.
The geometrodynamical principle of equivalence is a key notion in <cit.> for the inclusion of classical matter sources in geometrodynamics. We keep this notion, but apply it only to the classical field theory considered for quantization and, given that the choice of quantization is made with reference to the classical field structures as in <cit.>, the quantized version does not acquire extra dependencies on the geometric momenta. Nevertheless, we do allow it to acquire, through the quantization procedure, new (non-ultralocal) dependencies on the derivatives of the 3-metric.
As discussed in the introduction, invoking the geometrodynamical equivalence principle for the classical matter fields as in <cit.> bans non-minimally coupled theories from this framework, and thus we have only studied, for the material subsystem, theories resulting from the quantization of classical supermagnitudes that did not depend on derivatives of h nor on π_h. The extension of hybrid geometrodynamics to non-minimally coupled cases and its implementation regarding the GD equivalence principle will be explored in future works. On the other hand, one may speculate that, to some extent, some sort of non-minimally coupled field theories could still emerge as effective theories for matter test fields obtained from the current theory, where matter fields are sources. The idea would be that the effect of the backreaction of matter on gravity, and its subsequent effect on the propagation of the fields due to the change of metric, amounts to a gravity-mediated self-interaction. Therefore, if the perturbation of the geometry is small, it could be “integrated out", i.e. absorbed into the matter theory as a new dynamical pole in the propagator of the fields coupled to a background gravity, considering for this new effective theory the geometry unaffected by matter “test fields".
Furthermore, N,N^i have a dynamical origin, not a kinematical one. Therefore, the fact that the quantization procedure depends on them seems rather unnatural, as it renders the quantum kinematical structures (such as the scalar product and the space of states) N,N^i-dependent, obscuring the difference between kinematics and dynamics. Nevertheless, thanks to eq. (<ref>), at least the generating functions of the symmetries are forced to be independent of them. In any case, a choice of complex structure for quantization that, together with the connection, fulfills all the constraints but is not lapse and shift dependent would seem desirable in this sense, although, a priori, it would differ from the case considered in <cit.>.
In order to heal this unnatural dependence, a possible path to follow would be to consider the whole Kähler structure dependent on both h and π_h and try to recover <cit.> by choosing a submanifold whose constraints allowed us to rewrite π_h in terms of h, lapse and shift. This procedure is, nonetheless, very difficult to attain in practice because of the non-polynomial nature of the geometrodynamical constraints. The reader might be familiar with this kind of problem from the study of the Wheeler-DeWitt equation. There is another possibility that we can borrow from the history of quantum gravity. In the Hamiltonian description of gravity in terms of the Ashtekar-Barbero variables, the ones used in Loop Quantum Gravity for canonical quantization, the constraints become much more tractable, and thus the dependence of the Kähler structure might be easier to elucidate. We refer to <cit.> for further details. This programme would be called hybrid connection dynamics and will be the subject of future work.
In addition to the former discussion, it is relevant to recognize that the notion of an equivalence principle for Quantum Field Theory has been questioned in the literature, although in our construction its enforcement is not explicitly required at the quantum level. However, one of the implications of hybrid geometrodynamics sheds some light on this issue and goes as follows. In the same sense that Ehrenfest's theorem almost reproduces Newton's equations for the expectation values, we can argue that the expectation values of the operators resulting from quantizing classical matter supermagnitudes almost obey this hybrid geometrodynamical equivalence principle. In this way we find an interpretation, in the light of the equivalence principle, for the particular leaf dependence of hybrid observables,
∂_ξ⟨Ψ| Q(A)|Ψ⟩=⟨Ψ| Q(∂_ξ A)|Ψ⟩
associated with any classical symmetry generating function A. Note that, in the case of leaf-dependent quantization, this is only possible thanks to the inclusion of the quantum connection. Therefore, “purely material" physical observables, at least at the level of expectation values (not of measurement results), obey their local laws with the appropriate dependence on the geometry of spacetime, since the classical magnitude A fulfilled the geometrodynamical principle of equivalence.
Another important principle in the construction of geometrodynamics is path independence, achieved under the hybrid constraints ℋ_μ H(x)=0 (their preservation being ensured through (<ref>) and their first-class nature for the Hamiltonian {,f_H}_H). These constraints might have important phenomenological consequences, as we have hybrid conserved magnitudes for the whole universe. For example, starting from non-divergent initial data for matter and geometry, if at any point of the evolution the pure geometrodynamical part becomes divergent (formation of a singularity, for example in a hybrid dust collapse model), it must be accompanied by a same-sized divergent expectation value of the quantum operator, as we know that the Hamiltonian constraint is preserved. To what extent this might prevent the dynamical formation of singularities when matter sources are quantum will be the subject of future work.
In this line, one realizes that, if the phenomenon of quantum completeness as stated in <cit.>,<cit.> had a backreaction of quantum matter on gravity as in this work, the quantum matter fields would not be able to act as sources of divergent geometry. Such a phenomenon is based on the loss of norm of the quantum states while approaching geometric singularities. In that sense, singularity formation starting from non-singular initial data would be to some extent precluded in hybrid geometrodynamics if norm loss were allowed, as the material sources would become weaker the closer the system gets to forming the singularity.
Nevertheless, in our framework norm loss does not take place, even if the norm is computed inside different Hilbert spaces with different scalar products. The usual argument of a time-dependent vacuum state is still valid, given by the leaf dependence of such a vacuum state, where the leaf variables are dynamical. Precisely, in our work this is the source of the consideration of inequivalent Hilbert spaces, as it provides the associated measure, albeit such measure and the non-vacuum states are always properly normalized. To what extent the phenomenology of quantum completeness could find its way into our formalism, given that norm loss is healed by the quantum connection relating the ξ-parametric family of scalar products, is still a matter of discussion.
We conclude this work by quoting Robert M. Wald, who has guided the intertwining of Quantum Field Theory and curved spacetime for almost half a century. The quote is an extract from <cit.> on the consistency of a stress-energy tensor for quantum fields as a source for classical gravity:
“But if the five axioms [of QFT in Curved Spacetime] are inconsistent in a nontrivial manner, then unless one can somehow evade the arguments of Section II [leading to the five axioms] one would be forced to conclude that “back reaction" effects cannot be treated within the context of the semiclassical approximation."
In this context, a natural question that we will need to answer in future investigations is whether or not something has been gained regarding this issue in our depiction of the backreaction in terms of the Hamiltonian framework for Dirac's spatial hypersurface deformations, instead of the four-dimensional covariant one based on the quantum stress-energy tensor.
Lastly, the coupling of quantum matter with classical gravitational degrees of freedom is still a matter of discussion, as the backreaction is given by expectation values of local operators and, therefore, we must regard the theory as an effective one, which may approximate well enough the phenomena of more fundamental theories (such as quantum gravity) in a certain range of application. Outside such a range of application, some apparently spurious phenomenology could arise. An example is the consideration of very massive, highly delocalized wave functions, such as two non-spatially-overlapping entangled wave packets far away from each other. Such a state would yield an apparent mass at its midpoint acting as a gravity source, where no matter is actually present. Nevertheless, given the hierarchy of scales set by the gravitational constant κ, this would require very high masses with a relatively high spatial variance of the wave packet for its effect to be measurable as a gravitational perturbation, and such quantum states are definitely unlikely and most probably beyond the range of validity of the theory.
There are still some other caveats even within its range of application, such as the measurement problem for the quantum matter. If measurement is described by an instantaneous collapse of the wave function into the eigenstate associated with the result obtained, it leads to an abrupt change of the sources for the metric, which may violate the Hamiltonian and momenta constraints, leading to the subsequent loss of path independence and, ultimately, of the general covariance of the solutions. Thus, we present the current theory as a way to evolve from initial conditions until just before the measurement takes place; then we find new compatible initial conditions taking into account the results of the measurement, and resume the geometrodynamical evolution along the foliation. Another solution could be dispensing with wave function collapse altogether, describing the measurement process by a fast dynamical interaction with a material measurement device that quickly decoheres the quantum state <cit.>, including its description as a new matter source in the geometrodynamical description.
§ ACKNOWLEDGEMENTS.
We express our sincere gratitude to Prof. J.L. Cortés for his insightful comments. We also extend our appreciation to Jan Głowacki for engaging discussions and sharing his extensive knowledge on geometrodynamics. Furthermore, we would like to thank M. Schneider for stimulating conversations that sparked our curiosity in the phenomenology of quantum completeness in relation with our framework.
The authors acknowledge partial financial support of Grant PID2021-123251NB-I00
funded by MCIN/AEI/10.13039/501100011033 and by the European Union, and of
Grant E48-23R funded by Gobierno de Aragón. C.B-M
and D.M-C acknowledge financial support by Gobierno de Aragón through the grants
defined in ORDEN IIU/1408/2018 and ORDEN CUS/581/2020 respectively.
|
http://arxiv.org/abs/2307.02452v2
|
20230705172342
|
LLCaps: Learning to Illuminate Low-Light Capsule Endoscopy with Curved Wavelet Attention and Reverse Diffusion
|
[
"Long Bai",
"Tong Chen",
"Yanan Wu",
"An Wang",
"Mobarakol Islam",
"Hongliang Ren"
] |
eess.IV
|
[
"eess.IV",
"cs.CV",
"cs.RO"
] |
LLCaps: Learning to Illuminate Low-Light Capsule Endoscopy
L. Bai et al.
Department of Electronic Engineering, The Chinese University of Hong Kong (CUHK), Hong Kong SAR, China
The University of Sydney, Sydney, NSW, Australia
Northeastern University, Shenyang, China
Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
Shun Hing Institute of Advanced Engineering, CUHK, Hong Kong SAR, China
b.long@link.cuhk.edu.hk, tche2095@uni.sydney.edu.au, yananwu@cuhk.edu.hk, wa09@link.cuhk.edu.hk, mobarakol.islam@ucl.ac.uk, hlren@ee.cuhk.edu.hk
LLCaps: Learning to Illuminate Low-Light Capsule Endoscopy with Curved Wavelet Attention and Reverse Diffusion
Long Bai1 ⋆
Tong Chen2
Long Bai and Tong Chen are co-first authors.
Yanan Wu1,3
An Wang1
Mobarakol Islam4
Hongliang Ren1,5
Corresponding author.
August 1, 2023
==========================================================================================================================================================
Wireless capsule endoscopy (WCE) is a painless and non-invasive diagnostic tool for gastrointestinal (GI) diseases. However, due to GI anatomical constraints and hardware manufacturing limitations, WCE vision signals may suffer from insufficient illumination, leading to a complicated screening and examination procedure. Deep learning-based low-light image enhancement (LLIE) in the medical field is gradually attracting researchers. Given the exuberant development of the denoising diffusion probabilistic model (DDPM) in computer vision, we introduce a WCE LLIE framework based on a multi-scale convolutional neural network (CNN) and the reverse diffusion process. The multi-scale design allows models to preserve high-resolution representations and context information from low resolution, while the curved wavelet attention (CWA) block is proposed for high-frequency and local feature learning. Moreover, we combine the reverse diffusion procedure to further optimize the shallow output and generate images highly approximate to real ones. The proposed method is compared with eleven state-of-the-art (SOTA) LLIE methods and significantly outperforms them quantitatively and qualitatively. The superior performance on GI disease segmentation further demonstrates the clinical potential of our proposed model. Our code is publicly accessible at
https://github.com/longbai1006/LLCapsgithub.com/longbai1006/LLCaps.
§ INTRODUCTION
Currently, the golden standard of gastrointestinal (GI) examination is endoscope screening, which can provide direct vision signals for diagnosis and analysis. Benefiting from being non-invasive, painless, and of low physical burden, wireless capsule endoscopy (WCE) has the potential to overcome the shortcomings of conventional endoscopy <cit.>. However, due to the anatomical complexity, insufficient illumination, and limited performance of the camera, low-quality images may hinder the diagnosis process <cit.>. Blood vessels and lesions with minor color changes in the early stages can be hard to screen out <cit.>. Fig. <ref> shows WCE images with low illumination and contrast. The disease features clearly visible in the normal image become challenging to find in the low-light images. Therefore, it is necessary to develop a low-light image enhancement framework for WCE to assist clinical diagnosis.
Many traditional algorithms (e.g., intensity transformation <cit.>, histogram equalization <cit.>, and Retinex theory <cit.>) have been proposed for low-light image enhancement (LLIE). For WCE, Long et al. <cit.> discussed adaptive fraction-power transformation for image enhancement. However, traditional methods usually require an ideal assumption or an effective prior, limiting their wider applications. Deep learning (DL) provides novel avenues to solve LLIE problems <cit.>. Some DL-based LLIE schemes for medical endoscopy have been proposed <cit.>. Gomez et al. <cit.> offered a solution for laryngoscope low-light enhancement, and Ma et al. <cit.> proposed a medical image enhancement model with unpaired training data.
Recently, the denoising diffusion probabilistic model (DDPM) <cit.> has become one of the most popular topics in image generation and has achieved success in various applications. Due to its unique regression process, DDPM enjoys a stable training process and excellent output results, but also suffers from an expensive sampling procedure and a lack of low-dimensional representation <cit.>. It has been shown that DDPM can be combined with other existing DL techniques to speed up the sampling process <cit.>. In our work, we introduce the reverse diffusion process of DDPM into our end-to-end LLIE pipeline, which can preserve image details without introducing excessive computational costs. Our contributions to this work can be summarized as three-fold:
–We design a Low-Light image enhancement framework for Capsule endoscopy (LLCaps). Subsequent to the feature learning and preliminary shallow image reconstruction by the convolutional neural network (CNN), the reverse diffusion process is employed to further promote image reconstruction, preserve image details, and approach the optimization target.
–Our proposed curved wavelet attention (CWA) block can efficiently extract high-frequency detail features via wavelet transform, and conduct local representation learning with the curved attention layer.
–Extensive experiments on two publicly accessible datasets demonstrate the excellent performance of our proposed model and components. The high-level lesion segmentation tasks further show the potential power of LLCaps on clinical applications.
§ METHODOLOGY
§.§ Preliminaries
§.§.§ Multi-scale Residual Block
The Multi-scale Residual Block (MSRB) <cit.> constructs a multi-scale neuronal receptive field, which allows the network to learn multi-scale spatial information in the same layer. Therefore, the network can acquire contextual information from the low-resolution features while preserving high-resolution representations. We establish our CNN branch with six stacked multi-scale residual blocks (MSRBs), where every two MSRBs are followed by a 2D convolutional layer (Conv2D). Besides, each MSRB comprises feature learning and multi-scale feature aggregation modules. Specifically, we propose our curved wavelet attention (CWA) module to conduct multi-scale feature learning, and employ selective kernel feature fusion (SKFF) <cit.> to combine multi-scale features, as shown in Fig. <ref> (a).
§.§.§ Denoising Diffusion Probabilistic Models
Denoising Diffusion Probabilistic Models (DDPMs) <cit.> can be summarised as models consisting of a forward noise-addition process q(i_1:T|i_0) and a reverse denoising process p(i_0:T), both parameterized Markov chains. The forward diffusion process gradually adds noise to the input image until the original input is destroyed. Correspondingly, the reverse process uses a neural network to model the Gaussian transitions and achieves image generation through gradual sampling and denoising.
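Concretely, in the standard DDPM notation (the variance schedule β_t is our notation, not spelled out above), each forward step is the Gaussian transition
q(i_t| i_t-1)=𝒩(i_t ; √(1-β_t) i_t-1, β_t 𝕀) ,
so that i_T approaches pure noise for a sufficiently long chain.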
§.§ Proposed Methodology
§.§.§ Curved Wavelet Attention
The Curved Wavelet Attention (CWA) block is the core component of our CNN branch, constructed via a curved dual attention mechanism and the wavelet transform, as shown in Fig. <ref> (b). Firstly, the input feature map F_in is divided into an identity feature F_identity and a processing feature F_p. Medical LLIE requires high image detail. Hence, we transform F_p into the wavelet domain F_w to extract high-frequency detail information via the discrete wavelet transform. F_w is then propagated through the feature selector and the dual attention module for deep representation learning. Finally, we conduct the inverse wavelet transform (IWT) to get F'_p, and concatenate it with F_identity before the final output convolution layer.
We construct our curved dual attention module with parallel spatial and curved attention blocks. The spatial attention (SA) layer exploits the inter-spatial dependencies of convolutional features <cit.>. The SA layer performs average pooling and max pooling on the input features, and concatenates the outputs F_w(mean) and F_w(max) to get F_cat. The concatenated map is then dimensionally reduced and passed through the activation function.
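A minimal PyTorch sketch of such an SA layer, following the common CBAM-style construction (the module name and kernel size are our assumptions, not the authors' released code):

import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # Fuse the two pooled maps (2 channels) into one attention map.
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, f_w: torch.Tensor) -> torch.Tensor:
        f_mean = torch.mean(f_w, dim=1, keepdim=True)    # F_w(mean)
        f_max, _ = torch.max(f_w, dim=1, keepdim=True)   # F_w(max)
        f_cat = torch.cat([f_mean, f_max], dim=1)        # F_cat, reduced to 1 channel
        return f_w * self.sigmoid(self.conv(f_cat))      # spatially re-weighted features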
However, the literature <cit.> has discussed the problem of local illumination in LLIE. If we simply use a global computation such as the SA layer, the model may not be able to effectively capture local illumination or its absence. Therefore, to complement the SA layer, we design the Curved Attention (CurveA) layer, which models the high-order curve of the input features. Letting IL_n(c) denote the curve function, c the feature location coordinates, and Curve_n-1 the pixel-wise curve parameter, we obtain the curve estimation equation:
IL_n(c)/IL_n-1(c)= Curve_n-1(1-IL_n-1(c))
The detailed CurveA layer is presented at the top of Fig. <ref> (b), and Equ. (<ref>) corresponds to the white area. The Curve Parameter Estimation module consists of a Sigmoid activation and several Conv2D layers, and estimates the pixel-wise curve parameter at each order. The Feature Rescaling module rescales the input feature into [0, 1] to learn the concave-down curves. By applying the CurveA layer to the channels of the feature map, the CWA block can better estimate local areas with different illumination.
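A sketch of how such a CurveA layer could be realized in PyTorch, reading the update literally from Equ. (<ref>) (the iteration count, the rescaling, and the module names are our assumptions):

import torch
import torch.nn as nn

class CurveAttention(nn.Module):
    def __init__(self, channels: int, n_iter: int = 4):
        super().__init__()
        self.n_iter = n_iter
        # Curve Parameter Estimation: Conv2D layer(s) plus a Sigmoid activation.
        self.param_net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Feature Rescaling: map the input feature into [0, 1].
        x_min = x.amin(dim=(2, 3), keepdim=True)
        x_max = x.amax(dim=(2, 3), keepdim=True)
        il = (x - x_min) / (x_max - x_min + 1e-6)
        for _ in range(self.n_iter):
            curve = self.param_net(il)           # pixel-wise Curve_{n-1}
            il = curve * il * (1.0 - il)         # literal reading of Equ. (1)
        return il                                # curve-adjusted features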
§.§.§ Reverse Diffusion Process
Some works <cit.> have discussed combining diffusion models with other DL-based methods to reduce training costs and serve downstream applications. In our work, we combine the reverse diffusion process of DDPM in a simple yet effective way, using it to optimize the shallow output of the CNN branch. Extensive experiments demonstrate the effectiveness of our design in improving image quality and assisting clinical applications.
In our formulation, we assume that i_0 is the learning target Y^* and i_T is the output shallow image from the CNN branch. Therefore, we only need to engage the reverse process in our LLIE task. The reverse process is modeled using a Markov chain:
p_θ(i_0: T)=p(i_T) ∏_t=1^T p_θ(i_t-1| i_t)
p_θ(i_t-1| i_t)=𝒩(i_t-1 ; μ_θ(i_t, t), Σ_θ(i_t, t))
p_θ(i_t-1| i_t) are parameterized Gaussian distributions whose mean μ_θ(i_t, t) and variance Σ_θ(i_t, t) are given by the trained network. Meanwhile, we simplify the network and directly include the reverse diffusion process in the end-to-end training of the entire network. The shallow output is therefore optimized by the reverse diffusion branch to obtain the predicted image Y. We further simplify the optimization function and only employ a pixel-level loss on the final output image, which also improves the training and convergence efficiency.
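For concreteness, one reverse step with the standard DDPM parameterization might look as follows (the noise-prediction network eps_model and the schedules alphas, alphas_bar, sigmas are assumed to be given; this is our sketch, not the released implementation):

import torch

def reverse_step(i_t, t, eps_model, alphas, alphas_bar, sigmas):
    # Predict the noise and form the posterior mean mu_theta(i_t, t).
    eps = eps_model(i_t, t)
    mean = (i_t - (1 - alphas[t]) / torch.sqrt(1 - alphas_bar[t]) * eps) \
           / torch.sqrt(alphas[t])
    # Sample i_{t-1}; no noise is added at the final step.
    noise = torch.randn_like(i_t) if t > 0 else torch.zeros_like(i_t)
    return mean + sigmas[t] * noise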
§.§.§ Overall Network Architecture
An overview of our framework can be found in Fig. <ref>. Our LLCaps contains a CNN branch (including a shallow feature extractor (SFE), multi-scale residual blocks (MSRBs), and an output module (OPM)) and the reverse diffusion process. The SFE is a Conv2D layer that maps the input image into the high-dimensional feature representation F_SFE∈ℝ^C × W × H <cit.>. Stacked MSRBs conduct deep feature extraction and learning. The OPM is a Conv2D layer that recovers the feature space into image pixels. A residual connection is employed here to ease the end-to-end training and convergence process. Hence, given a low-light image x∈ℝ^3 × W × H, where W and H represent the width and height, the CNN branch can be formulated as:
F_SFE=H_SFE(x)
F_MSRBs=H_MSRBs(F_SFE),
F_OPM=H_OPM(F_MSRBs)+x
The shallow output F_OPM∈ℝ^3 × W × H is further propagated through the reverse diffusion process to obtain the final enhanced image Y∈ℝ^3 × W × H. The whole network is constructed in an end-to-end mode and optimized by the Charbonnier loss <cit.>, where ε is set to 10^-3 empirically:
ℒ(Y, Y^*)=√(‖ Y - Y^*‖^2+ε^2)
in which Y and Y^* denote the predicted and ground truth images, respectively.
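The loss itself is a one-liner; a possible PyTorch rendering (our sketch, averaging over all pixels):

import torch

def charbonnier_loss(y: torch.Tensor, y_star: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    # L(Y, Y*) = sqrt(||Y - Y*||^2 + eps^2), computed per element and averaged.
    return torch.sqrt((y - y_star) ** 2 + eps ** 2).mean()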
§ EXPERIMENTS
§.§ Dataset
We conduct our experiments on two publicly accessible WCE datasets, the Kvasir-Capsule <cit.> and the Red Lesion Endoscopy (RLE) dataset <cit.>.
Kvasir-Capsule dataset <cit.> is a WCE classification dataset with three anatomy classes and eleven luminal finding classes. By following <cit.>, we randomly select 2400 images from the Kvasir-Capsule dataset, of which 2000 are used for training and 400 for testing. To create low-light images, we adopt random Gamma correction and illumination reduction following <cit.>. Furthermore, to evaluate the performance on real data, we add an external validation on 100 real images selected from the Kvasir-Capsule dataset. These images have low brightness and are not included in our original experiments.
Red Lesion Endoscopy dataset <cit.> (RLE) is a WCE dataset for red lesion segmentation tasks (e.g., angioectasias, angiodysplasias, and bleeding). We randomly choose 1283 images, of which 946 images are used for training and 337 for testing. We adopt the same method as for the Kvasir-Capsule dataset to generate low-light images. Furthermore, we conduct a segmentation task on the RLE test set to investigate the effectiveness of the LLIE models in clinical applications.
§.§ Implementation Details
We compare the performance of our LLCaps against the following state-of-the-art (SOTA) LLIE methodologies: LIME <cit.>, DUAL <cit.>, Zero-DCE <cit.>, EnlightenGAN <cit.>, LLFlow <cit.>, HWMNet <cit.>, MIRNet <cit.>, SNR-Aware <cit.>, StillGAN <cit.>, MIRNetv2 <cit.>, and DDPM <cit.>. Our models are trained using Adam optimizer for 200 epochs with a batch size of 4 and a learning rate of 1 × 10^-4. For evaluation, we adopt three commonly used image quality assessment metrics: Peak Signal-to-Noise Ratio (PSNR) <cit.>, Structural Similarity Index (SSIM) <cit.>, and Learned Perceptual Image Patch Similarity (LPIPS) <cit.>.
For the external validation set, we evaluate with no-reference metrics LPIPS <cit.> and Perception-based Image Quality Evaluator (PIQE) <cit.> due to the lack of ground truth images. To verify the usefulness of the LLIE methods for downstream medical tasks, we conduct red lesion segmentation on the RLE test set and evaluate the performance via mean Intersection over Union (mIoU), Dice similarity coefficient (Dice), and Hausdorff Distance (HD). We train UNet <cit.> using Adam optimizer for 20 epochs. The batch size and learning rate are set to 4 and 1 × 10^-4, respectively. All experiments are implemented by Python PyTorch and conducted on NVIDIA RTX 3090 GPU. Results are the average of 3-fold cross-validation.
§.§ Results
We compare the performance of our LLCaps to the existing approaches, as demonstrated quantitatively and qualitatively in Table <ref> and Fig. <ref>. Compared with other methods, our proposed method achieves the best performance among all metrics. Specifically, our method surpasses MIRNetv2 <cit.> by 3.57 dB on the Kvasir-Capsule dataset and 0.33 dB on the RLE dataset. The SSIM of our method improves to 96.34% on the Kvasir-Capsule dataset and 93.34% on the RLE dataset. Besides, our method also performs best on the perceptual metric LPIPS. The qualitative results of the comparison methods and our method on the Kvasir-Capsule and RLE datasets are visualized in Fig. <ref> with the corresponding heat maps. Firstly, we can see that directly performing LLIE training on DDPM <cit.> cannot obtain good image restoration, and the original structures of the DDPM images are largely damaged. EnlightenGAN <cit.> also does not perform satisfactorily in structure restoration. Our method successfully surpasses LLFlow <cit.> and MIRNetv2 <cit.> in illumination restoration. The error heat maps further reflect the superior performance of our method in recovering the illumination and structure from low-light images. Moreover, our solution yields the best results on the real low-light dataset in the external validation, proving its superior performance in real-world applications.
Furthermore, a downstream red lesion segmentation task is conducted to investigate the usefulness of our LLCaps on clinical applications.
As illustrated in Table <ref>, LLCaps achieves the best lesion segmentation results. In particular, LLCaps surpasses all SOTA methods in HD, showing that segmentation boundaries are best preserved on LLCaps images and suggesting that our method possesses better image reconstruction and edge retention ability.
Additionally, an ablation study is conducted on the Kvasir-Capsule dataset to demonstrate the effectiveness of our design and network components, as shown in Table <ref>. To observe and compare the performance changes, we (i) remove the wavelet transform in the CWA blocks, (ii) reduce the curved attention (CurveA) layer in the CWA block to a simple channel attention layer <cit.>, and (iii) remove the reverse diffusion branch. The results demonstrate that removing any component causes a clear performance degradation, further attesting to the effectiveness of each element of our design.
§ CONCLUSION
We present LLCaps, an end-to-end capsule endoscopy LLIE framework with a multi-scale CNN and a reverse diffusion process. The CNN branch is constructed from stacked MSRB modules, in which the core CWA block extracts high-frequency detail information through the wavelet transform and learns the local representation of the image via the Curved Attention layer. The reverse diffusion process further optimizes the shallow output, achieving a closer approximation to the real image. Comparison and ablation studies show that our method and design yield superior image quality. Further medical image segmentation experiments demonstrate the reliability of our method in clinical applications. Potential future work includes extending our model to various medical scenarios (e.g., surgical robotics, endoscopic navigation, augmented reality for surgery) and clinical deep learning model deployment.
§.§.§ Acknowledgements.
This work was supported by Hong Kong RGC CRF C4063-18G, CRF C4026-21GF, RIF R4020-22, GRF 14216022, GRF 14211420, NSFC/RGC JRS N_CUHK420/22; Shenzhen-Hong Kong-Macau Technology Research Programme (Type C 202108233000303); GBABF #2021B1515120035.
§ SUPPLEMENTARY MATERIALS FOR “LLCAPS: LEARNING TO ILLUMINATE LOW-LIGHT CAPSULE ENDOSCOPY WITH CURVED WAVELET ATTENTION AND REVERSE DIFFUSION”
|
http://arxiv.org/abs/2307.01265v1
|
20230703180006
|
Inferring reionization and galaxy properties from the patchy kinetic Sunyaev-Zel'dovich signal
|
[
"Ivan Nikolić",
"Andrei Mesinger",
"Yuxiang Qin",
"Adélie Gorce"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"astro-ph.GA"
] |
The patchy kinetic Sunyaev-Zel’dovich (kSZ) signal is an integral probe of the timing and morphology of the epoch of reionization (EoR). Recent observations have claimed a low signal-to-noise (S/N) measurement, with a dramatic increase in S/N expected in the near future. In this work, we quantify what we can learn about the EoR from the kSZ signal. We perform Bayesian inference by sampling galaxy properties and using forward-models of the kSZ as well as other EoR and galaxy observations in the likelihood. Including the recent kSZ measurement obtained by the South Pole Telescope (𝒟_3000^pkSZ = 1.1_-0.7^+1.0μK^2) shifts the posterior distribution in favor of faster and later reionization models, resulting in lower values of the optical depth to the CMB: τ_e = 0.052_-0.008^+0.009 with a 68% confidence interval (C.I.).
The combined EoR and UV luminosity function observations also
imply a typical ionizing escape fraction of 0.04_-0.03^+0.05 (95% C.I.), without a strong dependence on halo mass.
We show how the patchy kSZ power from our posterior depends on the commonly-used parameters of reionization. For a given midpoint and duration, the EoR morphology only has a few percent impact on the patchy kSZ power in our posterior. However, a physical model is needed to obtain tight constraints from the current low S/N patchy kSZ measurement, as it allows us to take advantage of complementary high-z observations.
Future high S/N detections of the patchy kSZ should decrease the current uncertainties on the timing of the EoR by factors of ∼2 – 3.
cosmology: cosmic background radiation – dark ages, reionization, first stars – diffuse radiation – large-scale structure of Universe – early Universe – galaxies: high-redshift
§ INTRODUCTION
The epoch of reionization (EoR) is a major milestone in the Universe's evolution. Although many questions remain, recent years have seen a dramatic increase in the volume of data available to probe the cosmological frontier. These include: (i) high-redshift QSO spectra <cit.>; (ii) Lyman alpha emitting galaxies <cit.>; (iii) the optical depth to the CMB <cit.>; (iv) UV luminosity functions <cit.>; (v) preliminary upper limits on the 21 cm power spectrum <cit.>. This trend is set to culminate in the coming decade with 21-cm maps of the first billion years from the Square Kilometre Array (SKA)[<https://www.skatelescope.org/>].
A complementary probe that has arguably seen less attention is provided by the patchy kinetic Sunyaev-Zel'dovich (kSZ) signal. The kSZ is sourced by the Doppler shifting of CMB photons that scatter off of free electrons, resulting in secondary temperature anisotropies. It is typically separated into post-EoR (or homogeneous) and EoR (or patchy) contributions. The patchy kSZ is determined by the timing, duration and morphology of the EoR. Thus, measuring its shape and amplitude could inform us about the evolution of this cosmic milestone as well as the galaxies that sourced it <cit.>.
Measurements of the patchy kSZ have historically focused on the angular multipole l=3000 (roughly corresponding to 4 arcmin, or a comoving scale of 20 Mpc during the EoR). At lower multipoles the primary CMB anisotropies are increasingly dominant, while at higher multipoles systematics such as the cross-correlation between the thermal Sunyaev-Zel'dovich (tSZ) and dusty galaxies become even more challenging. The two telescopes actively targeting the kSZ, the Atacama Cosmology Telescope (ACT) and the South Pole Telescope (SPT), have until recently only published upper limits <cit.>. Strong foregrounds, including bright extragalactic sources,
as well as modelling uncertainties remain very challenging. However, the SPT collaboration recently claimed a low signal to noise (S/N) measurement of the patchy kSZ signal: 𝒟_3000^pkSZ = 1.1_-0.7^+1.0 μK^2 <cit.>.
A new analysis of these data, which introduces a consistent treatment of cosmology and reionization across the multipole range, sets an upper limit of 𝒟_3000^pkSZ < 1.58 μK^2 <cit.>.
These relatively low values qualitatively point to a much later and more rapid EoR compared to original estimates <cit.>. Future telescopes, such as the Simons Observatory[<https://simonsobservatory.org/>] <cit.>, CMB-Stage 4[<https://cmb-s4.org/>] <cit.> and CMB-HD[<https://cmb-hd.org/>] <cit.>, should help better characterize the CMB foregrounds and related systematics to narrow down error bars.
However, interpreting a tentative detection of the kSZ is difficult. Firstly, one needs to statistically separate the homogeneous and patchy contributions from the total kSZ power. Secondly, the patchy kSZ power is an integral measurement of the EoR, and as such is prone to strong astrophysical parameter degeneracies. Robust interpretation therefore must rely on additional, complementary observations of the EoR and high-redshift galaxies.
Here we quantify what we can learn from the recent kSZ measurement using a fully Bayesian framework. Unlike previous works, we directly sample empirical properties of galaxies that drive the EoR, creating 3D lightcones on-the-fly. This allows us to: (i) self-consistently sample different EoR morphologies when comparing against kSZ observations (instead of the common approach of fixing the morphology and empirically varying the midpoint and duration of the EoR); (ii) combine independent high-z galaxy and EoR observations when computing the posterior; and (iii) set physically-meaningful priors.
This paper is organized as follows. In Sec. <ref> we discuss how we compute the patchy kSZ signal. Our Bayesian framework, combining the kSZ with complementary observations, is summarized in Sec. <ref>. We present and discuss our results in Sec. <ref>. In Sec. <ref> we quantify how accurately the midpoint and duration of reionization can predict the patchy kSZ at l=3000. Finally, we conclude in Sec. <ref>.
Throughout this work, we assume standard ΛCDM cosmological parameters (Ω_m, Ω_b, Ω_Λ, h, σ_8, n_s = 0.321, 0.049, 0.679, 0.67, 0.81, 0.963), consistent with the latest estimates from <cit.>. Unless stated otherwise, we quote all quantities in comoving units.
§ THE PATCHY KINETIC SUNYAEV-ZEL'DOVICH SIGNAL
The secondary temperature anisotropy of the CMB due to the kinetic Sunyaev-Zel'dovich effect in the line of sight (LoS) direction û can be written as:
δ T_kSZ = Δ T/T(û)
= σ_ T∫dz (dt/dz) e^-τ_e(z) n_e û·𝐯
= σ_ T∫dz (dt/dz) e^-τ_e(z) x_e n_b û·𝐯.
Here, σ_T is the Thomson scattering cross-section, n_e is the number density of electrons[Here we assume helium is doubly ionized at z<3, and singly ionized at the same fraction as hydrogen during the EoR.] which can be expanded as the product of the ionized fraction (x_e) and baryon density (n_b), 𝐯 is the velocity of electrons, and τ_e is the optical depth of CMB photons up to redshift z:
τ_e (z) = σ_ T∫_0^z dz' c dt/dz' n_e .
The redshift integral in equation (<ref>) is generally separated into a post-reionization (or homogeneous) component and one due to patchy reionization. Observations measure the total kSZ power spectrum[The power spectrum is defined as 𝒟_l^kSZ = l(l+1)/2πC_l^kSZ, where C_l^kSZ = T_CMB^2 |δ T_kSZ (k)|^2, T_CMB is the mean CMB temperature and δ T_kSZ (k) is the Fourier transform of δ T_kSZ.], which is the sum coming from these two respective components: 𝒟_l^kSZ = 𝒟_l^hkSZ + 𝒟_l^pkSZ. The post-reionization kSZ power spectrum, 𝒟_l^hkSZ, is dominated by fluctuations in n_b during the era of cluster formation at z≲1 (e.g. ), while the patchy kSZ power spectrum, 𝒟_l^pkSZ, is dominated by order unity fluctuations in x_e during the EoR at z≳5 (e.g. ). Constraining the patchy kSZ thus requires statistically accounting for the post-EoR (homogeneous) kSZ signal; we summarize how this was done for recent observations in Sec. <ref>. Because we do not know a priori the reionization redshift, here we define the patchy kSZ component as the contribution to equation (<ref>) of redshifts above z ≥ 5. We note that this is a lower value compared to some previous choices in the literature. It is motivated by recent Lyman alpha forest data whose interpretation requires a late reionization, ending at 5.3 ≲ z ≲ 5.6 (, Qin et al. in prep).
The kSZ power is typically measured at l=3000; smaller multipoles are increasingly dominated by primary CMB anisotropies, while larger multipoles become swamped by other foregrounds such as dusty galaxies <cit.>. During the EoR, l=3000 roughly corresponds to comoving scales of ∼20 cMpc. Therefore, measurements of the patchy kSZ at this multipole are sensitive to the EoR morphology on these scales, as well as the timing and duration of the corresponding epochs. Simulation box sizes larger than about 300 cMpc are sufficient to capture the ionization power spectra on those scales <cit.>. Unfortunately, the kSZ is determined to leading order by the velocity-ionization cross power, and much larger scales (above 1 cGpc) are required to capture the fluctuations in the velocity field and corresponding velocity-ionization cross-power at l∼3000 <cit.>. Given that radiative transfer simulations on such large scales are computationally prohibitive, more approximate schemes are required to calculate the patchy kSZ signal.
The patchy kSZ power is sometimes computed analytically (with some terms calibrated to smaller numerical simulations; e.g. ) but at the price of neglecting the contribution of higher order correlations (above two points) which can represent up to 10% of the total patchy power <cit.>. More importantly, it is difficult to associate prior probabilities on the "effective" parameters of such models; priors are important for inference from a low S/N detection whose likelihood is not strongly constraining.
Instead, in this work, we choose to compute the patchy kSZ signal by ray-tracing through large 3D lightcone simulations with approximate radiative transfer (so-called semi-numerical simulations; e.g. ). Our self-consistent approach allows us to incorporate multi-frequency observations of the EoR and high-z galaxies in the likelihood. We discuss how this is done in the following section.
§.§ Computing the patchy kSZ from galaxy-driven EoR simulations
In this work we extend the public simulation package 21cmFAST[<https://github.com/21cmfast/21cmFAST>] (e.g. ) to forward-model the patchy kSZ signal together with other observables. 21cmFAST is a semi-numerical code used for generating cosmological simulations of the early Universe. It computes the evolved density and velocity fields using second-order Lagrangian perturbation theory (e.g. ). The ionization field is generated from the density field by comparing the cumulative number of ionizing photons produced by galaxies to the number of hydrogen atoms plus cumulative number of IGM recombinations, in spherical regions with decreasing radii, R <cit.>. Specifically, a cell is marked as ionized if at any radius:
n_ion≥ (1 + n_rec )(1 - x_e ),
where n_rec is the cumulative number of recombinations per baryon computed according to the sub-grid scheme of <cit.>, x_e accounts for pre-ionization by X-ray photons, and n_ion is the cumulative number of ionizing photons per baryon, with quantities averaged over the sphere of radius R:
n_ion = 1/ρ_b∫_0^∞d M_ h dn (M_ h, z | R, δ_R)/d M_ h f_duty M_∗ f_esc N_γ/b.
Here dn/dM_ h is the conditional halo mass function, N_γ/b is the number of ionizing photons per stellar baryon, f_esc is the escape fraction of ionizing photons, and f_duty corresponds to the fraction of halos that host star forming galaxies.
Here we adopt the flexible parameterization from <cit.>. Specifically, f_duty decreases exponentially below a characteristic mass scale, M_turn, due to inefficient gas cooling and/or feedback (e.g. ):
f_duty(M_ h) = exp(-M_ h/M_turn).
The ionizing escape fraction f_esc and stellar mass M_∗ are taken to be power law functions of halo mass:
f_esc (M_ h) = f_esc, 10(M_ h/10^10 M_⊙)^α_esc,
M_∗ (M_ h) = f_∗, 10 (M_ h/10^10 M_⊙)^α_∗( Ω_b/Ω_m) M_ h .
Here, f_esc,10 is the ionizing photon escape fraction normalized to the value in halos of mass 10^10 M_⊙, f_∗,10 is the fraction of galactic gas in stars also normalized to the value in halos of mass 10^10 M_⊙, and α_esc and α_∗ are the corresponding power law indices. Both f_ esc and f_∗≡ f_∗,10(M_ h/10^10 M_⊙)^α_∗ have a physical upper limit of 1. This model also assumes that the star formation rate can be expressed on average as the stellar mass divided by some characteristic time scale:
Ṁ_∗ (M_ h, z) = M_∗/H^-1(z) t_∗,
where H(z) is the Hubble parameter and t_∗ is the characteristic time-scale for star formation (with this definition, its value varies from zero to unity).
This six-parameter galaxy model (f_∗, 10, α_∗, f_esc, 10, α_ esc, M_turn, t_∗) is able to capture the average properties of the faint galaxies that dominate the ionizing photon budget, both from theoretical models and observations (e.g. ). Further details about the code and the parametrization can be found in <cit.> and <cit.>.
For a given combination of astrophysical parameters, 21cmFAST outputs 3D lightcones of the relevant cosmological fields. We thus compute the patchy kSZ signal by ray-tracing through the ionization, density and LoS velocity lightcones, directly calculating the integral in equation (<ref>), accounting also for the angular evolution of û <cit.>.
In Fig. <ref> we show an example of this procedure using a simulation that is 1.5 Gpc on a side. The astrophysical parameters of this simulation are taken from the posterior distribution of (discussed further below), specifically: [log_10(f_∗,10), α_∗, log_10(f_esc,10), α_esc, log_10(M_turn), t_∗] = (-1.42, 0.614, -1.78, 0.474, 8.62, 0.392 ). The midpoint of EoR is at z_r = 6.1, while the neutral fraction drops to zero at z_ end = 4.9. The duration of the EoR, defined throughout as Δ_z ≡ z(x_ HI =0.75)- z(x_ HI=0.25), is Δ_z = 0.76 and the CMB optical depth for the simulation is τ_e = 0.042. In the top panel we show a 2D slice (with a thickness of 1.4 Mpc) through the neutral fraction lightcone. In the bottom panels, we show the map of the patchy kSZ signal and the corresponding angular power spectrum. While this model was chosen to have patchy kSZ power that agrees with the median estimate reported by , complementary EoR and galaxy observations pull the posterior towards larger values of the l=3000 kSZ power, as we quantify further below.
§.§ Observations of the patchy kSZ
Observing the kSZ power spectrum is very challenging due to the presence of strong foregrounds as well as the primary CMB anisotropies. Deep integration over multiple frequencies is essential in separating these different components of the power spectra.
Over the past decade, ACT and SPT have published increasingly tighter upper limits on the cosmic kSZ signal <cit.>. Using SPT-SZ and SPTpol measurements at 95, 150 and 220 GHz, combined with a prior on the CIB-tSZ foregound from <cit.>, <cit.> recently claimed a 3σ measurement of the total kSZ power: 𝒟_3000^ kSZ = 3.0 ± 1.0 μK^2 (68% C.I.).
To isolate the patchy contribution to this total kSZ power, the authors subtracted an estimate of the z<5.5 homogeneous component
based on the simulations of <cit.>: 𝒟_3000^ hkSZ = 1.65 μK^2. The uncertainty around this value is bracketed
by rescaling the best guess by a factor of 0.75 and 1.25.
Doing so and using the bispectrum prior on tSZ, <cit.> estimate the patchy kSZ power at l=3000 to be 𝒟_3000^pkSZ = 1.1 ^+1.0_-0.7 μK^2 (68% C.I.).
Since our choice of lower bound in this work is z=5.0 instead of z=5.5, we add to the patchy kSZ estimate from <cit.> the contribution of the homogeneous component over the redshift interval 5<z<5.5. We estimate this be approximately 0.1 μK^2 (e.g. fig. 6 in ; fig. 5 in ).
Therefore we use the following observational constraint when performing inference in Section <ref>: 𝒟_3000^pkSZ = 1.2 ^+1.0_-0.7 μK^2 (68% C.I.).
A more robust foreground model and a consistent analysis across scales can improve constraints, as demonstrated in <cit.> where the authors give an upper limit of 𝒟_3000^pkSZ < 1.58 μK^2 (95% C.L.) using the same data as <cit.>.[As this project was started before the publication of <cit.>, here we use the original patchy kSZ estimate by <cit.>. The estimate in <cit.> would imply an even later reionization than shown here, also consistent with the newest analysis of the Lyman alpha forest spectra (Qin et al. in prep) as well as the forest dark fraction (; Campo et al. in prep).
We aim to revisit this in future work when more of these new constraints become public.] Reducing the uncertainties on the total kSZ require deeper integration, lower noise levels, and more frequency channels to better characterize foregrounds and systematics, which future telescopes such as CMB-S4 and the Simons Observatory (e.g. ) are expected to achieve. Furthermore, robustly isolating the patchy component of the total kSZ signal requires exhaustively sampling models of galaxy clusters in order to better characterize the post-reionization (homogeneous) component. Motivated by upcoming data and improved analysis, we also perform a forecast run from a mock measurement with error bars corresponding to the uncertainty expected from future experiments. This is presented in <ref>.
§ COMPLEMENTARY EOR AND GALAXY OBSERVATIONS
We now have several, independent observational probes of the EoR which can help constrain astrophysical parameters <cit.>.
Here we follow <cit.>, who used the same galaxy parametrization as we do, and use the following observational data:
* Lyman α forest opacity distributions – the 5.4 ≤ z ≤ 6.0 probability density functions (PDFs) of the forest effective optical depth, τ_ eff≡ - ln⟨ f ⟩_ 50 Mpc, computed from the mean normalized flux, f, of the QSO sample in <cit.>. <cit.> showed that this data require reionization to end late, z ≤ 5.6 (see also ).
* Dark fraction in the Lyα and Lyβ forests – the fraction of QSO spectral pixels that are dark (zero transmission) in both Lyman alpha and Lyman beta from the sample in <cit.>. This so-called dark fraction provides a model-independent upper limit on the neutral hydrogen fraction, with the value at z∼5.9 corresponding to x̅_ HI < 0.06 + 0.05 (1σ). This dataset favors earlier reionization models.
* High-redshift galaxy UV luminosity functions (UV LFs) – the 1500 Å restframe UV LFs at z=6-10, estimated by <cit.>. To constrain our models, we assume a conversion factor between the star formation rate (SFR) and UV luminosity, Ṁ_∗ = 𝒦_ UV L_ UV, and take 𝒦_ UV = 1.15 · 10^-28 M_⊙yr^-1erg^-1 s Hz, following <cit.>[This value was obtained assuming a stellar metallicity of Z_∗ = 10^-0.15zZ_⊙ and a Salpeter initial mass function (see also <cit.>).
]. UV luminosities are then related to magnitudes using the AB magnitude relation <cit.>: log_10(L_ UV/ergs^-1Hz^-1) = 0.4 × (51.63 - M_1500). UV LFs are very useful in anchoring our SFR relations (i.e. the ratio f_∗/t_∗), using the more massive reionization-era galaxies bright enough to be observed directly with the Hubble (and eventually JWST) telescope.
* The CMB optical depth – the Thomson scattering optical depth of CMB photons as computed by <cit.>, τ_e = 0.0561 ± 0.0071 (1σ). Although it is more accurate to directly forward model the CMB EE power spectra, <cit.> show that computing the likelihood from τ_e (a compressed summary statistic of the CMB power spectra) does not notably impact the resulting posterior for the astrophysical model used here.
These four complementary datasets are used in all of our inferences, each contributing a factor in the final likelihood. For further details, we refer the interested reader to <cit.>.
§ WHAT DO WE LEARN FROM THE PATCHY KSZ SIGNAL?
We now explore what astrophysical constraints can be obtained from reionization observations, including the recent kSZ measurement <cit.>. We first discuss our Bayesian sampler and the setup of our forward models, before showing results using current and future kSZ measurements.
§.§ Inference set-up
To perform Bayesian inference, we use 21cmMC[Available at <https://github.com/21cmfast/21CMMC>.] <cit.>, a public Monte Carlo sampler of 21cmFAST. For each set of model parameters (see section <ref>), 21cmMC computes a 3D lightcone realization of cosmological fields, comparing the model to the observations (see sections <ref> and <ref>). Here we use the MultiNest <cit.> sampler, which is fully implemented in 21cmMC and scales well to high-dimensional inference <cit.>. We use 1000 live points, an evidence tolerance of 0.5 and a sampling efficiency of 0.8. We checked for convergence by launching a run with 2000 live points and found no significant difference in the inferred posterior distributions. Our fiducial posterior converges after ∼ 45k samples, taking ∼ 260k core hours.
Unfortunately, due to computational limitations, we cannot use ultra-large simulations (e.g. Fig. <ref>) when forward modeling. Instead we use smaller boxes, calibrating their output to account for the missing large-scale modes in the kSZ signal (see also ). Specifically, we use simulations of (500 Mpc)^3 on a 256^3 grid. When constructing the lightcones, we rotate the coeval boxes to minimize duplication of structures due to periodic boundary conditions (e.g. ). We account for the missing large-scale power by sampling several realizations (different cosmic seeds) of 500 Mpc boxes, and comparing their power spectra to those from 1.5 Gpc boxes, constructed using the same astrophysical parameters. We compute the mean ratio of the missing power, f_0.5Gpc^1.5Gpc≡𝒟_3000^pkSZ, 1.5Gpc / 𝒟_3000^pkSZ, 500Mpc, adjusting our forward models by this factor and including the corresponding variance in the denominator of the likelihood. We obtain f_0.5Gpc^1.5Gpc = 1.27 ± 0.19. Further details on this calibration procedure can be found in Appendix <ref>.
§.§ Inference results using the recent SPT measurement
We compute two posteriors:
* without kSZ – this corresponds to the posterior based on the observational data (i)–(iv) from the previous section, i.e. large-scale Lyα forest opacity PDFs, the forest dark fraction, UV LFs, and the CMB optical depth.[Even though we used the same parametrization and observational data as <cit.>, our without kSZ posterior distribution is slightly different. This is because here we use the ionizing photon conservation correction from <cit.>, which results in roughly a shift of 0.2 in the recovered α_ esc (as also shown in ). When computing the Lyman alpha forest we use a harder UV background (with energy index β_ uv=-2 instead of -5) and a higher post-ionization front temperature (T_ re = 2.0×10^4K instead of 1.0×10^4K), motivated by recent estimates from hydrodynamic simulations (e.g. ). The harder UV background shifts the end of reionization to slightly earlier times, compared with <cit.>.]
* with kSZ – this is the same as without kSZ, but including an additional factor in the likelihood (see Appendix <ref> for details) corresponding to the patchy kSZ measurement by <cit.>, adjusted for the slightly different lower redshift bound as discussed above: 𝒟_3000^pkSZ = 1.2 ^+1.0_-0.7 μK^2 (68% C.I.).
Comparing the without kSZ and with kSZ posteriors, we quantify the additional constraining power provided by the patchy kSZ. We begin by showing the constraints on the fundamental galaxy parameters, before discussing the corresponding derived quantities such as the EoR history and the halo-galaxy connection.
§.§.§ Galaxy parameters and EoR history
In the bottom left of Fig. <ref>, we show the resulting two- and one-dimensional posteriors without kSZ (blue) and with kSZ (red). We also show the model posteriors together with two of the observational data used in the likelihood: the l=3000 patchy kSZ power (top center), and the UV LFs at z=6, 8, 10 (top right).
From the 𝒟_3000^pkSZ PDFs shown in the top center panel, we see that the recent measurement by <cit.> is in mild tension with the without kSZ posterior: the kSZ data favor the low amplitude tail of the without kSZ posterior, corresponding to late reionization models.
Indeed, by including the kSZ measurement, the distribution is shifted in favor of smaller kSZ power. Most of the with kSZ posterior is still above the mean estimate of the kSZ power by <cit.>, though perfectly consistent given the large observational uncertainty.
This is consistent with the Lyman α forest data, but in slight (≲ 1 σ) tension with the CMB optical depth τ_e as well as the QSO dark pixel fraction. We note that an updated estimate of the QSO dark pixel fraction using more recent, much larger QSO samples from <cit.> results in weaker upper limits on the neutral fraction at z∼6, making them perfectly consistent with later EoR models (Campo et al. in prep). This would leave the CMB τ_e as the only dataset preferring a slightly earlier EoR. Such a mild tension between the two CMB datasets could come from calibration or analysis inconsistencies between large- and small-scale data, that is between the SPT and Planck data <cit.>.
The biggest difference between the two galaxy parameter posteriors is in the recovered ionizing escape fraction, parametrized in our model with f_esc,10 and α_esc (see Eq. <ref>). The SPT measurement favors slightly lower values of f_esc, 10 and higher values of α_ esc. As a result, the inferred ionizing efficiency slightly increases in more massive, late-appearing galaxies (discussed further in the following section), so that the EoR occurs later and more rapidly, as can be seen from the EoR histories shown in Fig. <ref>. Including the relatively low patchy kSZ amplitude claimed by <cit.> disfavors the more extended EoR histories present in the without kSZ posterior. While the end of the EoR remains fairly unchanged, constrained by Lyman alpha forest observations (e.g. ), the middle and early stages are shifted to later times with the addition of kSZ data. This translates into lower CMB optical depths as seen in the inset of Fig. <ref>: τ_e = 0.052_-0.008^+0.009 for the with kSZ posterior compared to τ_e = 0.055_-0.009^+0.012 for the without kSZ case.
On the other hand, constraints on parameters governing the star formation rates and stellar-to-halo mass relations (i.e. f_∗, 10, α_∗, t_∗, M_ turn) are fairly unchanged when including kSZ data. As already shown in , observed high-z UV LFs constrain the stellar-to-halo mass relation (f_∗, 10, α_∗) and place an upper limit on a faint end turnover (M_ turn). Therefore, these parameters have only limited freedom to impact the timing of reionization while still being consistent with the UV LFs data.
§.§.§ Scaling relations of EoR galaxies
To gain further insight into the implications of our results, we show the corresponding galaxy scaling relations in Fig. <ref>. In the top panel, we plot the inferred stellar-to-halo mass relation (SHMR), defined as the average stellar mass inside a halo of mass M_ h (including the f_ duty occupation fraction term from Eq. <ref>). The redshift-independent median relation from the with kSZ posterior is denoted with the red solid line, corresponding to:
M̅_∗/M_h = 0.011_-0.002^+0.003( M_ h/10^10M_⊙)^0.50_-0.06^+0.06 (68%C.I.) ,
while the 95% C.I. is shown with the purple shading.[We note that the inferred median scaling of the SHMR ∝ M_h^0.5 is close to the value expected by simply assuming SNe feedback scales with the gravitational potential of the host halo, SHMR ∝ M_h^0.67 <cit.>. As discussed further below, any mass dependence of 𝒦_ UV (here assumed to be constant) would also impact the inferred scaling.] We also show the corresponding 95% C.I. of the without kSZ posterior in blue. The two posteriors overlap in this space, again illustrating that the SHMR for our model is determined by the UV LFs, and is unaffected by kSZ data.
We also include some other estimates from the literature, which show sizable scatter for the high-redshift, small-mass
regime that is relevant for the EoR. Our inferred relation is roughly consistent with current estimates, given their large scatter. It is unsurprising that, despite the fairly large scatter, the slopes of the SHMRs shown in this panel are roughly similar. This is because in most cases the observed UV LFs are used either directly or indirectly to calibrate the models. The slope of the UV LFs combined with the slope of the HMF, both power-laws in this range, sets the slope of the SHMR, with the normalization being more sensitive to the star formation – L_1500 conversion[
We caution that our SHMRs are likely overconstrained, because we do not include any uncertainty in the SFR – L_1500 conversion (i.e. we fix 𝒦_ UV from Sec. <ref> to be a constant). This conversion depends on the IMF and the duration of recent star formation, with different assumptions changing 𝒦_ UV by factors of ∼ 2 (e.g. ).].
We caution that we do not include an uncertainty, or a possible mass dependence, in our SFR – M_∗ relation. Our models are informed by UV LFs, which directly constrain the SFR at the more massive end of this halo mass range; passing to the stellar mass requires a conversion that depends on the stellar population. It is likely that less massive galaxies have a more top-heavy IMF, which increases 𝒦_ UV from Sec. <ref>. If we included 𝒦_ UV in our inference with a physically-motivated prior that it increases for smaller, more metal-poor galaxies, this would effectively steepen the inferred M_∗ - M_ h relation and extend the recovered range by a factor of ∼ 2 (e.g. <cit.>). As a result, our inferred SHMR is likely overconstrained and possibly somewhat too flat.
Similarly, in the bottom panel of Fig. <ref>, we show the ionizing escape fraction to halo mass relation. Our redshift-independent median relation for with kSZ is denoted with a solid red line:
f_ esc = 0.038_-0.017^+0.021( M_ h/10^10M_⊙)^0.25_-0.19^+0.19 (68%C.I.; with kSZ),
Also shown is the result for without kSZ in blue:
f_ esc = 0.060_-0.028^+0.038( M_ h/10^10M_⊙)^0.07_-0.33^+0.23 (68%C.I.; without kSZ).
As in the panel above, the corresponding shaded areas demarcate the 95% C.I. The green dashed lines denote the range of our prior in this space, uniform over log_10 f_ esc, 10∈ [-3, 0], α_ esc∈ [-1, 0.5]; the fact that our posterior is tighter than the prior illustrates the constraining power of current observations and that our results are not sensitive to our choice of prior.
Again for illustrative purposes, we show some theoretical estimates from the literature. Compared to the SHMR in the top panel, there is far less consensus on the ionizing escape fraction. This is because the relevant small scales are impossible to resolve in cosmological simulations; therefore results are sensitive to the resolution/sub-grid prescriptions. Indeed, some simulations suggest an increasing trend with halo mass while others suggest a decreasing trend.
In contrast, Bayesian inference allows the observations to inform us about the (mean) f_ esc(M_ h) relation. By comparing the blue and red shaded regions we see that the addition of kSZ data favors a slight increase in the mean escape fraction towards more massive halos. While the uncertainties are still large at the small mass end, the ionizing escape fraction for galaxies hosted by ∼ 10^10 – 10^11 M_⊙ halos is reasonably well constrained to be a few percent. Interestingly, strong evolution with halo mass is disfavored.
In Fig. <ref> we also demarcate the posterior-averaged mean of M_50 (M_90), defined as the halo mass below which galaxies source 50% (90%) of the ionizing emissivity at z=7, i.e. M_50 is calculated by solving: ∫_0^ M_50dM_ h d n_ ion/ d M_ h = 1/2∫_0^∞dM_ h d n_ ion/ d M_ h. We see that over half of the ionizing photons are sourced by galaxies that are below current detection limits.
The small positive dependence of f_ esc on halo mass that we infer could have a physical explanation: low-mass galaxies can open low-absorption channels through which ionizing photons escape, but only for short periods, spending most of their time in an obscured phase with few ionizing photons escaping to the IGM; in more massive galaxies, star formation could be more stable and sustain persistent escape channels. Indeed, high-resolution galaxy simulations find that the escape fraction of individual galaxies fluctuates strongly on short time-scales (e.g. ), while other works find a positive scaling of the escape fraction over the mass range most relevant for reionization <cit.>. Higher values of M_50 indicate that more massive galaxies dominate the ionizing output; the confidence intervals shown in Fig. <ref> correspond to stellar masses comparable to those inferred for galaxies already observed at z∼ 7 <cit.>.
To further quantify which galaxies are driving reionization, in Fig. <ref> we plot the posterior-averaged fractional contribution of galaxies with a given mass to the ionizing emissivity at z= 6, 7, 10. The curves are obtained by sampling the probability density function of galaxies contributing to the total emissivity (see Eq. <ref>), for each sample from the posterior of Fig. <ref>, and then computing the cumulative distribution function (CDF). This is done for both the without kSZ and with kSZ posteriors. On the bottom axes of the left panels we show the host halo mass, while on the top axes we show the corresponding stellar mass, using the median SHMR (for the with kSZ case, see the top panel of Fig. <ref>).
We see that for the with kSZ posterior, galaxies in the range log_10(M_ h/ M_⊙) = [ 10.00, 11.16 ] ([ 9.18, 11.82 ]) contribute 50% (90%) of the ionizing photon budget at z=7. Using our median M_∗-M_ h relation, this translates to the ranges log_10(M_∗/M_⊙) = [8.02, 9.76 ] ([6.64, 10.77 ]). The ranges for redshifts 6, 7 and 10 are found in Tables <ref> and <ref>.
Additionally, in the right panels we show the posterior-averaged fractional contribution of galaxies of a given UV magnitude to the ionizing emissivity, for redshifts z= 6, 7, 10. Also shown are the magnitude limits at z=7 for the Hubble Space Telescope (HST; roughly +31.5) and the James Webb Space Telescope (JWST; expected to reach roughly 34th magnitude in the infrared) <cit.>. While HST misses more than 50% of the galaxies responsible for reionization at z=7, JWST will be able to see most of the ionizing sources.
We caution that our inference constraints could depend on our choice of galaxy parametrization. This is however easily testable by comparing the Bayesian evidences of models of varying complexities and/or doing Bayesian model averaging to determine scaling relations that the models have in common (e.g. SHMR, f_ esc(M_h)). We defer this exploration to future work. We expect however that the six parameter model we use is physically-motivated and flexible enough to span a wide enough range of EoR histories and UV LFs, making it unlikely that current data would prefer a more sophisticated model. For example, adding a redshift dependence to f_ esc(M_ h) would not only be fairly ad-hoc, but would require an extreme evolution over a narrow redshift interval to dramatically shift our results given the fairly rapid EoR histories favored by the data.
§.§ Forecast assuming future kSZ measurments
The improved precision of future experiments, as well as their larger sky coverage, will allow for lower noise levels and decreased sample variance. With CMB-S4, we expect the errors on the measurement of the amplitude of the CMB temperature power spectrum at l = 3000 to decrease by a factor of 5 to 10, depending on the bandpower <cit.>. Improved foreground modelling should also help reduce the uncertainty on the kSZ amplitude by roughly 30 % <cit.>. On the theoretical side, suites of simulations could better characterize the contribution of the homogeneous kSZ to the total power.
To quantify the corresponding improvement in parameter constraints, we repeat the with kSZ inference in Sec. <ref>, but using a mock future kSZ measurement instead of <cit.>.
We assume 𝒟_3000^pkSZ = 2.0 ± 0.1 μK^2. The mean value corresponds to the maximum a posteriori (MAP) model from the with kSZ posterior in the previous section, while the choice of uncertainty is (very roughly) motivated by the arguments above.
In Figure <ref> we show constraints on the EoR history using the mock kSZ observation (green shaded region), together with the current constraints (red shaded region).
We see that if the kSZ error bars could be reduced by a factor of ∼ 10, it would result in a dramatic improvement on the recovered EoR history, with the
midpoint of reionization being constrained to an r.m.s. uncertainty of σ_z_r = 0.16, compared to 0.4 for the current with kSZ posterior in red. A similar improvement is also obtained for the duration of EoR: Δ_z ≡ z(x_ H =0.75)- z(x_ H=0.25). Using the mock kSZ observation we recover Δ_z = 1.09_-0.09^+0.12, which compared to the current with kSZ constraints of Δ_z = 1.16_-0.19^+0.24, reduces the uncertainty by a factor of ∼ 2.
It is interesting to note that the change in the recovered history is primarily in delaying reionization; the duration decreases only marginally.
Because galaxies sit inside halos, the duration of reionization cannot be arbitrarily short; it will be limited by the growth of the HMF. The most rapid EoR models are those dominated by the rare, bright galaxies hosted by massive halos in the exponential tail of the HMF. Their fractional abundance increases more rapidly compared to that of the more common, smaller halos. However, the observed UV LFs set a lower limit on Δ_z because we actually see galaxies down to M_1500∼ -13, and the rare bright galaxies cannot have f_ esc > 1. Since we cannot physically decrease Δ_z to values below unity, the only physically plausible way of decreasing the kSZ amplitude to agree with the mock observation is to lower the redshift of reionization. This is seen in the figure, and it causes the CMB τ_e PDF to pile up in the lower ∼ 30% C.I. inferred from Planck alone. Interestingly, this later EoR history is in very good agreement with the latest, independent estimates coming from the Lyman alpha forest opacity fluctuations, which imply that the EoR finishes at z ≈ 5.3 (Qin et al., in prep).
The largest difference with respect to Fig. <ref> is in the inferred escape fraction parameters, f_esc,10 and α_esc. The 1D f_esc,10 distribution is shifted to even lower values compared to Fig. <ref>, while the α_esc distribution shifts entirely to a single peak at positive values. This behavior is in line with the discussion in Sec. <ref>: a low patchy kSZ amplitude prefers a late and rapid EoR history dominated by rare, bright sources, i.e. with high α_esc. The posterior now shows only a single peak; the lower peak, previously attributed to the CMB optical depth constraint, is no longer present. This points to the high constraining power of a future kSZ measurement.
The narrowness of the posterior is also evident in the distribution of the patchy kSZ signal in the upper middle panel, and the EoR history distribution again points to a late and rapid end of reionization. The M_turn distribution also narrows with the addition of the forecast kSZ likelihood: very low values of M_turn imply that low-mass halos host star-forming galaxies, which is disfavored by a late and rapid end to the EoR.
§ DO WE NEED SELF CONSISTENT FORWARD MODELS OF THE KSZ?
When interpreting kSZ observations, it is common to vary the amplitude of the patchy kSZ power but with a fixed power spectrum shape (e.g. ). Generally the power at l=3000, 𝒟^ pkSZ_3000, is related to empirical parameters characterizing the EoR history, such as its midpoint and duration (with several definitions found in the literature). This is in contrast to our approach in which the patchy kSZ power spectra are self-consistently forward-modeled directly from galaxy properties.
Using only empirical parameters for the EoR history has two important drawbacks: (i) for a given EoR history, the patchy kSZ power can also vary due to the EoR morphology <cit.>; and (ii) it is more difficult to physically-motivate priors for derived EoR history parameters, than it is for the fundamental galaxy parameters <cit.>. The choice of priors is especially important when the likelihood is not overly constraining (e.g. ).[In the previous section, we showed that our likelihood was indeed quite constraining, and therefore our posterior was not sensitive to our choice of priors. This is because we use several complementary EoR and galaxy observations to construct the likelihood. However, when only using the kSZ observation and ignoring for example the UV LFs, the likelihood is not overly constraining and the posterior can strongly depend on the choice of priors over the EoR history parameters <cit.>.]
Here we briefly explore the impact of (i). We sample our with kSZ posterior from Section <ref>, computing for each sample the midpoint of the EoR, z_r, and its duration, Δ_z. In Fig. <ref> we plot the mean (left panel) 𝒟̅^ pkSZ_3000 and normalized r.m.s. (right panel) of the l=3000 patchy kSZ, as a function of z_r and Δ_z. We leave blank under-sampled bins of (z_r, Δ_z), defined as those for which the variance of the mean is larger than the mean of the variance.
We note that our estimates of 𝒟̅^ pkSZ_3000 are tens of percent higher compared to some recent estimates, for a given combination of z_r and Δ_z <cit.>. This might be in part due to different sampling of EoR morphologies, or to differences in how the patchy kSZ power is defined. Indeed, we estimate that a difference of Δ z=1 in the lower redshift bound of the integral in equation <ref> results in a ∼ 0.2 μK^2 difference in the patchy kSZ amplitude. Another potential source of disagreement could stem from using the Limber approximation to compute the patchy kSZ spectrum from the power spectrum of the density-weighted peculiar velocity field of the free electrons (e.g. ), rather than ray-tracing the signal. Defining the specific free-electron momentum 𝐪≡ x_i 𝐯 (1+δ), the Limber approximation gives
𝒞_l = (σ_T n_e,0/c)^2 ∫ds/s^2 a^4 e^-2 τ (z)P_q ⊥ (k =l/s, s)/2,
where s is the comoving distance, a is the scale factor, and P_q ⊥ is the power spectrum of the transverse mode 𝐪̃_⊥ (𝐤) = 𝐪̃ ( 𝐤) - k̂ [ 𝐪̃ (𝐤) ·k̂], defined through (2π)^3 P_q ⊥ (k) δ^D(𝐤 - 𝐤') ≡⟨𝐪̃_⊥ (𝐤) ·𝐪̃_⊥ (𝐤') ⟩. However, we find a good agreement between the two approaches on the scales of interest (l ≳ 1000). In this approximation, only Fourier modes perpendicular to the line of sight, or modes that fluctuate on scales larger than the Hubble length (about 400 Mpc at z=6), contribute to the kSZ power <cit.>. The latter have low amplitude and are often discarded <cit.>, but could explain part of the additional power we find in this work.
We also compute the slope of the 𝒟̅^ pkSZ_3000 - Δ_z relation using the full posterior sample, finding values that are roughly 15 % larger than <cit.> and <cit.>. A more detailed comparison with other analyses is not possible given the differences in modelling, definitions, and EoR parameters, and would require a dedicated study.
In the left panel we also show the marginalized 1D PDFs of z_r (top) and Δ_z (right). Our with kSZ posterior corresponds to the following constraints on the EoR history parameters: z_r = 7.12_-0.41^+0.44 and Δ_z = 1.16_-0.19^+0.24 (68% C.I.). As noted in previous studies, there is a strong degeneracy between z_r and Δ_z, as either a later or a shorter EoR decreases the patchy kSZ power. Our median recovered values of Δ_z are consistent with those from other recent analyses of the SPT observation, including <cit.> who found Δ_z = 1.1^+1.6_-0.7 (68% C.I.) and <cit.> who found Δ_z = 1.30^+0.19_-0.60. Our limits are however ∼ 3 times tighter compared to <cit.> since we use additional, complementary observations in the likelihood.
In the right panel of Fig. <ref> we quantify the scatter in the l=3000 patchy kSZ power, at fixed values of z_r and Δ_z. We see that the r.m.s. scatter in the power is generally at the level of a few percent. Thus modelling the l=3000 kSZ power amplitude as a function of only z_r and Δ_z, without considering the EoR morphology, cannot yield an accuracy on 𝒟̅^ pkSZ_3000 better than a few percent.
We stress also that this is a conservative estimate, since we only compute the scatter in the kSZ power for our relatively narrow with kSZ posterior. Studies that do not consider complementary EoR and galaxy observations in the likelihood would result in broader posteriors with correspondingly larger scatter in the mean power. Indeed, by sampling a broader range of models, <cit.> find a larger r.m.s. scatter, of order ∼ 0.4 μK^2 for the kSZ power at a fixed z_r and Δ_z.
It is important to note that without using complementary observations in the likelihood, both the distributions of (z_r, Δ_z) seen in the left panel and the scatter in the kSZ power at a fixed EoR history seen in the right panel would be considerably broader. As noted earlier, the current SPT detection is low S/N and by itself is not very constraining. Only in combination with complementary observations can we obtain tight constraints on the EoR history and not be sensitive to our choice of priors.
§ CONCLUSIONS
The patchy kSZ signal is an integral probe of the timing and morphology of the EoR. Recently, <cit.> have claimed a detection of the patchy kSZ signal (𝒟_3000^pkSZ = 1.1_-0.7^+1.0 μK^2). In the future, we expect a dramatic increase in S/N from telescopes such as CMB-S4 and Simons Observatory enhancing the potential of using kSZ measurements for EoR science.
In this work we quantify what we can learn about the EoR from the patchy kSZ signal. We modify the public 21cmFAST code to produce forward-models of the patchy kSZ signal. We then perform Bayesian inference by sampling galaxy properties and using the recent kSZ measurement together with other observations in the likelihood. These include: (i) high-z UV LFs; (ii) Lyα forest opacity distributions; (iii) the Lyman forest pixel dark fraction and (iv) CMB optical depth.
In order to quantify the additional constraining power of the patchy kSZ we computed two posteriors: one based on <cit.> (using datasets (i)–(iv); without kSZ) and one with an additional likelihood term for the recent measurement of the patchy kSZ cited above (with kSZ). We found that the addition of the kSZ measurement shifts the posterior distribution in favor of faster and later reionization models (Fig. <ref>). This results in a lower optical depth to the CMB: τ_e = 0.052_-0.008^+0.009 (68% C.I.).
The shift to later and more rapid EoR implies a lower ionizing escape fraction with a very weak positive scaling with halo mass.
The average f_ esc of typical galaxies driving the EoR is a few percent. We disfavor a strong evolution of f_ esc with galaxy mass.
We also present constraints on common empirical parameters characterizing the midpoint and duration of reionization, respectively z_r = 7.12_-0.41^+0.44 and Δ_z = 1.16_-0.19^+0.24 (68% C.I.), consistent with other recent results <cit.>.
We show that the scatter in the patchy kSZ power at l=3000, at a fixed z_r and Δ_z, is of order a few percent. Thus the interpretation of current kSZ data can be done using only these two summary statistics. However, without a physical model it would be difficult to assign prior probabilities or use complementary observations in the likelihood.
Future observations should further improve the measurement of the patchy kSZ signal <cit.>.
To forecast the resulting improvement in parameter constraints, we also create a mock observation with the measurement error reduced to 0.1 μK^2, centered on the MAP model from our inference. Such a futuristic observation can reduce the uncertainties on the recovered EoR history by factors of ∼2 – 3. However, if the patchy kSZ power is confirmed to be low (𝒟_3000^pkSZ≲ 2 μK^2), it would result in a mild tension with the CMB τ_e inferred from primary CMB anisotropies.
§ ACKNOWLEDGEMENTS
We thank C. Mason and A. Ferrara for insightful comments on a draft version of this manuscript. We gratefully acknowledge computational resources of the Center for High Performance Computing (CHPC) at Scuola Normale Superiore (SNS). A.M. acknowledges support from the Ministry of Universities and Research (MUR) through the PNRR project "Centro Nazionale di Ricerca in High Performance Computing, Big Data e Quantum Computing" and the PRO3 project "Data Science methods for Multi-Messenger Astrophysics and Cosmology". Y.Q. acknowledges that part of this work was supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project #CE170100013. A.G.'s work is supported by the McGill Astrophysics Fellowship funded by the Trottier Chair in Astrophysics, as well as the Canadian Institute for Advanced Research (CIFAR) Azrieli Global Scholars program and the Canada 150 Programme.
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author. The code will be made public by merging into the main 21cmFAST branch, after the manuscript is accepted for publication.
mnras
§ CALIBRATING SIMULATIONS TO ACCOUNT FOR MISSING LARGE-SCALE KSZ POWER
As discussed in Section <ref>, large boxes are required to accurately simulate the patchy kSZ signal.
<cit.> find that a simulation box of side length 100 h^-1Mpc would miss about 60 % of the kSZ power.
However, using large simulations which also resolve small-scale physics in forward modeling is computationally impractical. Although one could account for missing large-scale power analytically <cit.>, such perturbative approaches are approximate and have only been tested with a few models. Instead, here we compute the kSZ signal directly from multiple, smaller-box realizations of the signal and statistically characterize the missing power by comparing it to a large-box realization <cit.>.
We pick a random sample from the posterior distribution of <cit.>, corresponding to the without kSZ posterior. For this set of astrophysical parameters, we compute the patchy kSZ power using a 1.5 Gpc simulation, run on a 1050^3 grid. We then generate 20 realizations of the smaller-box simulations used in our inference (500 Mpc on a 256^3 grid), using the same astrophysical parameters but varying the initial random seed. When constructing the lightcones from the 500 Mpc simulations, we rotate the coeval boxes to minimize duplication of structures due to periodic boundary conditions <cit.>. The resulting histogram of 𝒟^ pkSZ_3000 from the 500 Mpc simulations is shown in Fig. <ref>, together with the value from the 1.5 Gpc simulation (blue vertical line). We compute the ratio of the missing power as f_0.5Gpc^1.5Gpc = 𝒟_3000^pkSZ-1.5Gpc / 𝒟_3000^pkSZ-500Mpc. Using these 20 realizations, we find f_0.5Gpc^1.5Gpc = 1.27 ± 0.19. We include this scaling factor and the associated uncertainty in the likelihood when performing inference:
lnℒ_ kSZ = -1/2(𝒟_3000^kSZ,SPT - 𝒟_3000^kSZ,model/σ_a + σ_b (𝒟_3000^kSZ,SPT - 𝒟_3000^kSZ,mock))^2,
with σ_a = 2σ_uσ_l/(σ_u + σ_l), σ_b = (σ_u - σ_l)/(σ_u + σ_l). Here, σ_u and σ_l are the upper and lower 68% C.I. limits of the measurement. Since we add the scaling factor uncertainty in quadrature, the final expressions for σ_u and σ_l are σ_u = √(σ_u, SPT^2 + σ_f_0.5Gpc^1.5Gpc^2) and σ_l = √(σ_l, SPT^2 + σ_f_0.5Gpc^1.5Gpc^2), where the measurement is expressed as (𝒟_3000^kSZ,SPT)_-σ_l,SPT^+σ_u,SPT = 1.1_-0.7^+1.0μK^2. The log-likelihood in equation (<ref>) is a Gaussian whose width depends on the parameter value; this form accounts for the asymmetric statistical errors of the measurement <cit.>.
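To make this likelihood concrete, the sketch below evaluates it in Python for the SPT measurement quoted above. It is a minimal illustration rather than the inference code: the residual is used in both the numerator and the parameter-dependent width (the standard asymmetric-error form), and the scaling-factor uncertainty sig_f is an illustrative placeholder value in μK^2.

```python
import numpy as np

def ksz_loglike(d_model, d_meas=1.1, sig_u_meas=1.0, sig_l_meas=0.7, sig_f=0.24):
    """Gaussian log-likelihood with a parameter-dependent width for the
    asymmetric SPT measurement 1.1 -0.7/+1.0 muK^2; sig_f (illustrative)
    is the missing-power scaling uncertainty, added in quadrature."""
    sig_u = np.sqrt(sig_u_meas**2 + sig_f**2)   # inflated upper error bar
    sig_l = np.sqrt(sig_l_meas**2 + sig_f**2)   # inflated lower error bar
    sig_a = 2.0 * sig_u * sig_l / (sig_u + sig_l)
    sig_b = (sig_u - sig_l) / (sig_u + sig_l)
    r = d_meas - d_model
    return -0.5 * (r / (sig_a + sig_b * r))**2

# Models with more patchy kSZ power than measured are disfavoured:
print(ksz_loglike(1.1), ksz_loglike(2.5))
```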
Note that since we are varying the seed, the variance in f_0.5Gpc^1.5Gpc also includes the Poisson uncertainty on the mean power, stemming from the fact that the power is estimated from a finite number of wavemodes. The latter is illustrated as a solid black segment in Fig. <ref>, and has a subdominant contribution to the scatter in 𝒟^pkSZ_3000 from the 500 Mpc simulations.
How much does the scaling factor, f_0.5Gpc^1.5Gpc, depend on the choice of astrophysical parameters? Unfortunately, it would be computationally impractical to repeat the above calibration procedure over our entire 6D astrophysical parameter space. Instead, we sample four different astrophysical parameter sets from the without kSZ posterior, and compute f_0.5Gpc^1.5Gpc using a single 1.5 Gpc and 500 Mpc simulation for each parameter set. The parameters and corresponding scaling factors are listed in Table <ref>. Reassuringly, the scaling factors vary by only ∼ 3 % between the four parameter combinations. This is much smaller than the ∼ 0.7 μK^2 measurement uncertainty, justifying our assumption of a constant f_0.5Gpc^1.5Gpc.
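The calibration itself reduces to taking the ratio of the large-box power to an ensemble of small-box realizations. A minimal sketch with placeholder numbers (the real inputs are the 𝒟^pkSZ_3000 values from the 1.5 Gpc run and the twenty 500 Mpc runs):

```python
import numpy as np

def missing_power_factor(d_large, d_small_realizations):
    """f = D_3000(large box) / D_3000(small box), with its scatter
    estimated over the small-box ensemble (seed + Poisson variance)."""
    ratios = d_large / np.asarray(d_small_realizations)
    return ratios.mean(), ratios.std(ddof=1)

rng = np.random.default_rng(0)
d_small = rng.normal(1.0, 0.15, size=20)   # placeholder realizations [muK^2]
f, sigma_f = missing_power_factor(1.27, d_small)
print(f"f = {f:.2f} +/- {sigma_f:.2f}")
```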
entry_id: http://arxiv.org/abs/2307.02633v1
published: 20230705200617
title: Hybrid Ground-State Quantum Algorithms based on Neural Schrödinger Forging
authors: Paulin de Schoulepnikoff, Oriel Kiss, Sofia Vallecorsa, Giuseppe Carleo, Michele Grossi
primary_category: quant-ph
categories: quant-ph, cond-mat.stat-mech, cs.LG
paulin.deschoulepnikoff@epfl.ch
Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
European Organization for Nuclear Research (CERN), Geneva 1211, Switzerland
oriel.kiss@cern.ch
European Organization for Nuclear Research (CERN), Geneva 1211, Switzerland
Department of Nuclear and Particle Physics, University of Geneva, Geneva 1211, Switzerland
European Organization for Nuclear Research (CERN), Geneva 1211, Switzerland
Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
michele.grossi@cern.ch
European Organization for Nuclear Research (CERN), Geneva 1211, Switzerland
Entanglement-forging-based variational algorithms leverage the bipartition of quantum systems for addressing ground-state problems. The primary limitation of these approaches lies in the exponential summation required over the numerous potential basis states, or bitstrings, when performing the Schmidt decomposition of the whole system. To overcome this challenge, we propose a new method for entanglement forging employing generative neural networks to identify the most pertinent bitstrings, eliminating the need for the exponential sum. Through empirical demonstrations on systems of increasing complexity, we show that the proposed algorithm achieves comparable or superior performance compared to the existing standard implementation of entanglement forging. Moreover, by controlling the amount of required resources, this scheme can be applied to larger systems, as well as to non-permutation-invariant ones, the latter constraint being associated with the Heisenberg forging procedure. We substantiate our findings through numerical simulations conducted on spin models exhibiting one-dimensional ring and two-dimensional triangular lattice topologies, as well as nuclear shell model configurations.
Hybrid Ground-State Quantum Algorithms based on Neural Schrödinger Forging
Michele Grossi (0000-0003-1718-1314)
August 1, 2023
==========================================================================
§ INTRODUCTION
In recent years, significant advances have been made in simulating the static and dynamical properties of many-body quantum systems using variational algorithms. For instance, DMRG methods based on matrix-product states <cit.>, neural-network quantum states <cit.>, equivariant neural networks <cit.> or kernel methods <cit.> have accurately computed the ground state energy of spin systems <cit.>, as well as of fermionic systems such as molecules <cit.> or nuclei <cit.>. While neural-network quantum states represent wave functions using classical representations, we can also consider their quantum counterpart, where the wave function ansatz takes the form of a parameterized quantum circuit <cit.>, such as in the popular variational quantum eigensolver (VQE) <cit.>. Even if VQE has been successfully applied in various areas, such as chemistry <cit.>, spin chains <cit.> or nuclei <cit.>, it is still unclear whether VQE is a scalable algorithm. Indeed, the optimization procedure becomes increasingly difficult with the system size <cit.> because of the presence of barren plateaus in the loss landscape <cit.>. It is consequently desirable to conceive variational quantum algorithms acting on a minimal number of qubits.
Although the VQE is already a hybrid algorithm, in the sense that it relies on classical resources to perform the optimization, we take a step further and design an algorithm relying on both neural networks and quantum circuits. More specifically, we consider a VQE based on entanglement forging (EF) <cit.>, a circuit-knitting strategy that effectively performs a Schmidt decomposition of the variational quantum state, optimizes the two subsystems separately, and then reconstructs the entanglement classically. This procedure has the desirable property of reducing the number of qubits while still reproducing the ground-state energy with high accuracy. It is similar in spirit to quantum-embedded DFT <cit.>, where quantum resources are only used for the most challenging parts.
Besides computing ground state energies, EF also allows practical heuristic simulations, notably in analyzing bipartite entanglement. This concept is fundamental in quantum mechanics, as its measurement provides an understanding of the behavior of strongly correlated systems <cit.>. For instance, bipartite entanglement has been used in condensed matter physics to study phenomena such as quantum phase transitions, topological order, and many-body localization <cit.>. Advances in experimental techniques have made it possible to measure entanglement entropy in a variety of condensed matter systems over the past few years, revealing insights into their underlying quantum properties <cit.>.
The main contribution of this paper is a Schrödinger forging procedure using an autoregressive neural network (ARNN) <cit.>. This method combines the versatility of Schrödinger forging with control over the required computational resources via the introduction of a cutoff. Generative neural networks have already been proposed for EF <cit.>, but only in the context of Heisenberg forging, which requires permutation symmetry between the two subsystems. Our method, however, does not require permutation symmetry between the two subsystems, making it a more versatile approach to solving ground-state problems using quantum computers. Moreover, our algorithm naturally includes a cutoff in the number of basis states, limiting the required number of quantum circuits.
This paper is structured as follows. We first introduce EF in Section <ref>, as well as two ways to tackle its scalability issue based on Monte Carlo sampling and neural networks. The main contribution of this paper is then proposed in Sec. <ref> as a third option.
We conclude our work with numerical simulations in Sec. <ref> testing our hybrid architecture on various physical models, such as one-dimensional spin chains, spins on a triangular lattice with a random external field, and the nuclear shell model.
§ METHODS
The general strategy of variational algorithms for ground state problems is to prepare a wave function ansatz |ψ⟩, and using the variational principle,
E_0 ≤⟨ψ|H|ψ⟩/⟨ψ|ψ⟩,
to approximate the ground state energy E_0 of the Hamiltonian of interest H. The ansatz can take the form of e.g. a neural network <cit.> or a quantum circuit <cit.>, while the variational parameters are usually optimized with gradient-based methods. In the following, we explore hybrid classical-quantum models aiming at describing a bipartite system with quantum circuits, while the entanglement between the partitions is forged classically.
§.§ Entanglement Forging
The starting point of the EF procedure is to employ a Schmidt decomposition, a direct application of the singular value decomposition (SVD), to write a quantum state |ψ⟩ of a bipartite quantum system H= H_A ⊗ H_B, with N_A and N_B qubits in each subsystem, as
|ψ⟩ = U ⊗ V ∑_σλ_σ|σ⟩_A |σ⟩_B.
In the above, U and V are unitaries, |σ⟩_X ∈{0,1}^N_X, and λ_σ are the corresponding Schmidt coefficients. The latter are positive and normalized, ∑_σ |λ_σ|^2 = 1, and the number of Schmidt coefficients is called the Schmidt rank. We recall that the distribution of the Schmidt coefficients is related to the level of entanglement between the two subsystems, with the von Neumann entropy calculated as
S_vN = -2 ∑_σλ_σ^2 log(|λ_σ|).
Therefore, maximal entanglement is characterized by a uniform distribution, while minimal entanglement corresponds to a Dirac delta.
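As a quick numerical illustration of the entropy formula, S_vN can be evaluated directly from a vector of Schmidt coefficients; the two limiting distributions mentioned above are checked below.

```python
import numpy as np

def von_neumann_entropy(lam):
    """S_vN = -2 * sum_s lam_s^2 * log|lam_s|, for sum_s lam_s^2 = 1."""
    lam = np.asarray(lam, dtype=float)
    lam = lam[lam != 0.0]          # 0 * log(0) -> 0 by convention
    return -2.0 * np.sum(lam**2 * np.log(np.abs(lam)))

print(von_neumann_entropy(np.full(8, np.sqrt(1 / 8))))  # uniform: log(8) ~ 2.079
print(von_neumann_entropy([1.0] + [0.0] * 7))           # delta: 0.0
```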
The variational state is obtained by parameterizing U and V with two quantum circuits and considering the Schmidt coefficients as additional variational parameters. Following ibm_EF, the most direct way to compute expectation values, called Schrödinger forging, is to directly insert the Schmidt decomposition, e.g. Eq. (<ref>), into ⟨ψ|O|ψ⟩. Assuming that the observable O admits a bipartition O = O_A ⊗ O_B, the expectation value can then be expressed as
⟨ψ|O|ψ⟩ = ∑_n=1^2^N/2λ_n^2 ⟨σ_n|U^† O_AU|σ_n⟩⟨σ_n|V^† O_BV|σ_n⟩
+ ∑_n=1^2^N/2∑_m=1^n-1λ_nλ_m
∑_p∈ℤ_4 (-1)^p⟨ϕ_σ_n,σ_m^p|U^† O_AU|ϕ_σ_n,σ_m^p⟩
· ⟨ϕ_σ_n,σ_m^p|V^† O_BV|ϕ_σ_n,σ_m^p⟩,
where |ϕ_σ_n,σ_m^p⟩=(|σ_n⟩+i^p|σ_m⟩)/√(2), ℤ_4 = { 0,1,2,3}, and all the bitstrings σ have been labeled with a number n (or m). Decompositions with equally sized subsystems are considered: N_A=N_B=:N/2. We note that this is not a strict requirement, but we use it to simplify the notation.
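The identity above can be checked classically on a small system. The sketch below (illustrative, not the paper's code) evaluates the forged sum with the normalized |ϕ^p⟩ states and compares it against the direct expectation value ⟨ψ|O_A⊗O_B|ψ⟩:

```python
import numpy as np

def forged_expectation(lams, sigmas, U, V, OA, OB):
    """Schroedinger-forged <psi|OA x OB|psi> from the Schmidt data, with
    |phi^p> = (|s_n> + i^p |s_m>)/sqrt(2)."""
    A = U.conj().T @ OA @ U              # matrix elements <.|U^dag OA U|.>
    B = V.conj().T @ OB @ V
    idx = [int(s, 2) for s in sigmas]    # bitstrings -> basis indices
    val = sum(l**2 * A[i, i] * B[i, i] for l, i in zip(lams, idx))
    for n in range(len(idx)):
        for m in range(n):
            for p in range(4):
                phi = np.zeros(len(A), complex)
                phi[idx[n]] += 1 / np.sqrt(2)
                phi[idx[m]] += 1j**p / np.sqrt(2)
                val += (lams[n] * lams[m] * (-1)**p
                        * (phi.conj() @ A @ phi) * (phi.conj() @ B @ phi))
    return val.real

def haar(d, rng):                        # random unitary via complex QR
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(1)
U, V = haar(4, rng), haar(4, rng)        # two qubits per subsystem
lams, sigmas = [0.8, 0.6], ["00", "11"]
OA = OB = np.diag([1.0, -1.0, -1.0, 1.0])    # Z x Z on each half
psi = sum(l * np.kron(U[:, int(s, 2)], V[:, int(s, 2)])
          for l, s in zip(lams, sigmas))
direct = (psi.conj() @ np.kron(OA, OB) @ psi).real
print(np.isclose(forged_expectation(lams, sigmas, U, V, OA, OB), direct))
```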
We remark that this involves an exponential sum in the system size. As such, two methods have been suggested to solve this scalability issue <cit.>. The first uses an unbiased estimator of ⟨ψ|O_A ⊗ O_B|ψ⟩ that can be evaluated by importance sampling according to ∼λ_nλ_m. The second approach is to leverage permutation symmetry between the two subsystems, producing another EF scheme. Since it is defined at the operator level, we refer to it as Heisenberg forging.
§.§.§ Schrödinger forging with weighted sampling of circuits
The original paper <cit.> defines an unbiased estimator of ⟨ψ|O_A ⊗ O_B|ψ⟩, which is then realized by sampling each circuit in proportion to the associated coefficient ∼λ_nλ_m. The number of state preparations scales as
S ∼( 1/ϵ∑_nm|λ_n λ_m|)^2.
Ref. <cit.> showed that, with at most S state preparations, the estimation of the expectation value can be achieved with a confidence level of 99%. The sampling efficiency directly depends on the distribution of the Schmidt coefficients, and the method clearly works better for weakly correlated systems. Knowledge of the physical system is thus required to find decompositions leading to a low sampling overhead.
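The dependence on the Schmidt spectrum is easy to quantify: since ∑_nm|λ_nλ_m| = (∑_n|λ_n|)^2, the cost S grows rapidly with the entanglement. A small illustrative comparison:

```python
import numpy as np

def n_state_preps(lams, eps=0.01):
    """S ~ ((1/eps) * sum_{n,m} |lam_n lam_m|)^2 = ((sum_n |lam_n|)^2 / eps)^2."""
    return (np.sum(np.abs(lams))**2 / eps)**2

peaked = [0.99, np.sqrt(1 - 0.99**2)]   # weakly entangled: cheap
uniform = np.full(16, 0.25)             # maximally entangled: far more costly
print(f"{n_state_preps(peaked):.1e}  {n_state_preps(uniform):.1e}")
```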
§.§.§ Heisenberg forging with generative neural networks
In the case of Heisenberg forging, we can leverage the permutation symmetry and rewrite ⟨ψ|O| ψ⟩ as Eq. <ref>. The first sum can be approximated by sampling σ∼λ_σ^2, while the second requires conditional probabilities. Hence, we define the function R(n,m) = λ_σ_n/λ_σ_m and interpret |⟨σ_m |U^† C_α,β U |σ_n ⟩|^2 as the conditional probability p_α,β(n,m), i.e., how likely it is to sample σ_m from the circuit U^† C_α,β U |σ_n ⟩. The second term can thus be written as
∑_α,β∈{0,1}a_α,β/2∑_nλ_σ_n∑_m R(n,m)p_α,β(n,m),
where we note that this simplification is a direct consequence of the permutation symmetry. Assuming a small number of Schmidt coefficients, the sum can be efficiently estimated via Monte Carlo sampling. However, for larger system sizes, patrick_EF remarked that the task can be solved using neural networks, and more particularly autoregressive neural networks, since they can model high-dimensional conditional probabilities.
§.§ Schrödinger forging with generative neural networks
In this section, we present a new approach to Schrödinger forging. The starting point is to remark that the Schmidt coefficients decay exponentially if the two subsystems are weakly entangled, as is the case in low-energy eigenstates of chemical and spin-lattice model Hamiltonians. By introducing a cutoff in the sum, it is therefore possible to improve the efficiency of the estimation while keeping a sufficiently low additive error. However, this requires selecting a set of bitstrings among the 2^N/2 total possibilities, which represents an open problem for EF. To this end, we propose to use generative models (more specifically, autoregressive neural networks, ARNNs) to select the best candidates.
The use of an ARNN is motivated by the fact that the Schmidt coefficients are normalized and can thus be interpreted as a probability density. Following <cit.>, we propose an algorithm, which is summarised in Alg. <ref>.
The parameterized unitaries and the Schmidt coefficients are finally optimized with a gradient descent based algorithm. A summary of the entire algorithm is shown in Fig. <ref>.
First, we explain how to use an auto-regressive neural network to efficiently identify the relevant bitstrings. Since the Schmidt coefficients are normalized as ∑_σ |λ_σ|^2=1, they can be interpreted as a probability density. The chain rule from probability theory can be used to write
|λ_σ|^2 ∼ p(σ_A,σ_B) = ∏_i p((σ)_i|{(σ)_j, j<i}),
and the bitstring pairs, associated to λ_σ can be encoded by stacking the bitstrings of subsystem B at the end of the bitstrings of subsystem A,
σ = |σ_A, σ_B⟩
= |(σ)_1,⋯, (σ)_N/2,(σ)_N/2+1⋯, (σ)_N⟩.
Note that here (σ)_i denotes the i-th bit of the bitstring σ.
Neural networks, and more particularly autoregressive methods, are powerful tools to model such conditional densities <cit.> by generating elements sequentially, conditioned on the previous ones. To build the autoregressive model, we consider a dense ARNN, whose architecture is very similar to a dense feedforward neural network. The notable difference is that the weight matrices are masked (lower-triangular), ensuring the autoregressive nature of the model. From the ARNN, the bitstrings can then be sampled directly and efficiently, as detailed in App. <ref>.
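A sketch of this sequential sampling is given below, with a toy conditional distribution standing in for the trained ARNN (the function names and the toy bias are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bitstring(cond_prob, n_bits):
    """Sequentially sample sigma: bit i is drawn from p(s_i = 1 | s_{<i}),
    provided here by `cond_prob(prefix)` (the trained ARNN in practice)."""
    bits = []
    for _ in range(n_bits):
        p1 = cond_prob(bits)
        bits.append(int(rng.random() < p1))
    return bits

# Toy conditional model: bit i copies the previous bit with probability
# 0.9, a crude stand-in for a ferromagnetically biased distribution.
def toy_cond_prob(prefix):
    if not prefix:
        return 0.5
    return 0.9 if prefix[-1] == 1 else 0.1

print(sample_bitstring(toy_cond_prob, 8))
```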
Exploring the full space of bitstrings is exponentially difficult, motivating the use of machine learning techniques to select the basis states that contribute the most to the wave function. Inspired by the work of CI, we introduce an algorithm whose primary objective is to bypass exploring this extremely large space of basis states.
Starting from a random set of bitstrings A_0, the strategy consists of adding bitstrings generated according to the approximation of |λ_σ|^2 modeled by the ARNN. Since the variational energy is quadratic in the Schmidt coefficients λ_σ, at each iteration they can be determined by solving the constrained system
∂⟨ H ⟩/∂λ_σ =0
∑_σ |λ_σ|^2 = 1,
where the sum runs over the set A∪ G, with A being the current set of bitstrings while G is the set of bitstrings sampled by the ARNN. The first equation ensures that the forged wave function has minimal energy, while the second guarantees its normalization. In a second step, the current set A is updated by taking the k bitstrings with the highest Schmidt coefficients. The ARNN is finally trained to model p(σ_A,σ_B) ∼ |λ_σ|^2 in a supervised way. These steps are iterated until convergence, which is reached when the current set A is stable and the loss of the ARNN is close to zero.
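For a fixed set of bitstrings, this constrained system has a convenient closed form: the forged energy is a quadratic form λ^T M λ in the Schmidt coefficients, so its minimizer on the unit sphere is the lowest eigenvector of the symmetrized M. In practice the paper solves the system with projected gradient descent (see the code availability section); the eigenvector view below is an equivalent minimal sketch, with M standing for the measured circuit expectation values multiplying λ_n λ_m:

```python
import numpy as np

def optimal_schmidt_coefficients(M):
    """Minimize E(lam) = lam^T M lam subject to sum(lam^2) = 1.
    M[n, m] collects the measured expectation values multiplying
    lam_n * lam_m in the forged energy; the constrained minimizer is
    the eigenvector of the symmetrized M with the lowest eigenvalue."""
    M = 0.5 * (M + M.T)
    evals, evecs = np.linalg.eigh(M)
    lam = evecs[:, 0]
    return evals[0], lam / np.linalg.norm(lam)

# Toy 3x3 example with made-up matrix elements:
M = np.array([[-1.0, 0.2, 0.0], [0.2, -0.3, 0.1], [0.0, 0.1, 0.5]])
energy, lam = optimal_schmidt_coefficients(M)
print(energy, lam)
```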
The choice of the loss function ℒ and of the training set 𝒯 plays an important role in the training of the ARNN. For the training set, two possibilities are investigated: either the model is trained on the current set A and the generated bitstrings G or, following <cit.>, only on the bitstrings that survive the pruning, i.e., the new set A'. Concerning the loss functions, we consider the explicit logcosh loss <cit.> and the implicit Maximum Mean Discrepancy (MMD) loss <cit.>
ℒ_MMD(θ) = ∑_σ_1, σ_2 ∈𝒯 q(σ_1)q(σ_2)K(σ_1,σ_2)
-2∑_σ_1, σ_2 ∈𝒯 q(σ_1)p_θ(σ_2)K(σ_1,σ_2)
+ ∑_σ_1, σ_2 ∈𝒯 p_θ(σ_1)p_θ(σ_2)K(σ_1,σ_2),
where
K(σ_1,σ_2) = e^-||σ_1-σ_2||_2^2/2Δ is chosen to be a Gaussian kernel, with ||·||_2 the 2-norm and Δ the bandwidth parameter. The latter determines the width of the kernel and controls the sensitivity of the MMD measurement. A larger bandwidth allows more global comparisons, while a smaller bandwidth focuses on local details. The MMD loss function effectively minimizes the difference between the mean embedding of the two distributions. It involves a pairwise comparison of every bitstring in the training set with their contribution being controlled by the kernel.
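Since the three sums share the same kernel, the squared MMD collapses to (q - p_θ)^T K (q - p_θ) over the training set. A minimal NumPy sketch (the variable names are illustrative):

```python
import numpy as np

def mmd_loss(train_bits, q, p_theta, bandwidth=1.0):
    """Squared MMD between target weights q ~ |lam|^2 and the ARNN
    probabilities p_theta, with Gaussian kernel
    K(s1, s2) = exp(-||s1 - s2||^2 / (2 * Delta))."""
    X = np.asarray(train_bits, dtype=float)            # (T, N) bitstrings
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)  # pairwise distances
    K = np.exp(-d2 / (2.0 * bandwidth))
    diff = np.asarray(q) - np.asarray(p_theta)
    return diff @ K @ diff

bits = [[0, 0], [0, 1], [1, 1]]
print(mmd_loss(bits, q=[0.7, 0.2, 0.1], p_theta=[0.5, 0.3, 0.2]))
```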
As a benchmark, we also consider a more standard approach for modeling probability distributions using the reversed Kullback Leibler (KL) divergence for the loss of the ARNN. A detailed description of this method is presented in App. <ref>.
§ NUMERICAL SIMULATIONS
In this section, we present numerical experiments. We begin with the performance of the bitstring selection algorithm on small models. We then describe the bitstring selection and subsequent energy minimization on various models of increasing complexity.
§.§ Identify the relevant bitstrings
We investigate the performance of the generative algorithm on small symmetric models: the transverse field Ising model (TFIM), the Heisenberg and J_1-J_2 model on a 1d chain of 14 spins with periodic boundary condition, the 2d TFIM on a 4× 3 triangular lattice with a diagonal cut and open boundary condition and the t-V model on a 4× 3 grid. These models, further detailed in App. <ref>, allow for an exact Schmidt decomposition, enabling us to assess the algorithm's performance by examining how many bitstrings associated with high Schmidt coefficients can be identified.
The results, for a cutoff dimension of k=8 bitstrings, are presented in Table <ref>. More specifically, the table reports the performance of Alg. <ref> in terms of the number of correctly identified bitstrings, using the logcosh and MMD losses. Two training sets are considered for the former: the union 𝒯=A∪ G and the set kept after pruning, 𝒯=A'. Furthermore, the impact of the parameters' initialization is attenuated by model averaging (MA). This ensemble technique consists of training four ARNNs with different initial weights and taking their average as the starting point of a final ARNN. Results obtained with the more standard reversed KL approach (see App. <ref>) are also presented. Finally, the last column of the table contains the sum of the eight highest squared Schmidt coefficients from the exact decomposition. It indicates the amount of entanglement, the accuracy of the cutoff, and the sharpness of the probability distribution.
The algorithm proposed in this paper is able to find the majority of the most important bitstrings. Moreover, the MMD and logcosh loss functions are superior to the standard approach based on the reversed KL divergence. Indeed, with the latter, relevant bitstrings can only be found if the system is small or when the level of entanglement is high (leading to a wide probability distribution). This is not suitable for most applications, since low entanglement is important to guarantee a low additive error with a cutoff dimension. The best results are highlighted in bold and are generally obtained with the MMD loss. More precisely, with the MMD loss, the algorithm is always able to find the four bitstrings with the highest Schmidt coefficients. This loss enables the ARNN to generalize well and makes the algorithm converge quickly, as can be further appreciated for the 2d TFIM with 12 spins in App. <ref>, Fig. <ref>. In that case, the ARNN has seen only 24 bitstrings in total during the training, and it is able to find the 7 bitstrings with the highest Schmidt coefficients, containing the five most important ones, in only two iterations.
To gain a better understanding of the dynamics of the generative algorithm, the loss of the ARNN and the number of bitstring updates between two iterations are presented. Fig. <ref> shows the results with the MMD loss on the five different Hamiltonians. In all cases, we observe that the ARNN loss converges to zero and that the generated set of bitstrings is stable.
In general, the dynamics can be divided into two phases: an initial phase where the loss is high and the model explores a diverse range of bitstrings, followed by a second phase where the model attempts to exploit its approximation of the probability distribution to converge and generate bitstrings with high Schmidt coefficients. This exploration-exploitation trade-off can be modified by adjusting the number of bitstrings sampled at each iteration and the learning rate of the ARNN.
§.§ Complete entanglement forging scheme
In the previous section, we showed that the ARNN is able to identify the bitstrings with the highest Schmidt coefficients. We now test the complete EF scheme. First, spin systems on a ring are considered, before moving to a two-dimensional lattice and, finally, to the nuclear shell model.
§.§.§ Spins in one dimension
We begin by considering the one-dimensional transverse field Ising model (TFIM). More precisely, we consider a spin chain with periodic boundary conditions, an even number N=20 of spins, and set the coupling and the external field coefficient to one. The Hamiltonian of the model can be written as
H = ∑_i=1^N( Z^iZ^i+1 + X^i ).
The resulting topology of the bipartite system is shown in Fig. <ref>.
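For reference values in this and the following experiments, the exact ground-state energy of such a chain can be obtained by dense diagonalization, feasible only for small N. A minimal NumPy sketch (not the paper's code):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(site_ops, n):
    """Tensor product over an n-spin chain, identity on unlisted sites."""
    return reduce(np.kron, [site_ops.get(i, I2) for i in range(n)])

def tfim_1d(n, J=1.0, pbc=True):
    """Dense H = sum_i (J * Z_i Z_{i+1} + X_i), small n only."""
    H = sum(op_on({i: X}, n) for i in range(n))
    bonds = range(n) if pbc else range(n - 1)
    H += J * sum(op_on({i: Z, (i + 1) % n: Z}, n) for i in bonds)
    return H

E0 = np.linalg.eigvalsh(tfim_1d(8)).min()
print(E0)   # exact reference for the energy error ratio below
```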
Since the system is permutation invariant, we can compare our approach to Heisenberg forging with the ARNN. Because of the symmetry, we can choose σ_A = σ_B, which reduces the number of possible bitstrings. As above, a cutoff dimension of k=8 bitstrings is used, a value chosen by trial and error.
Fig. <ref> shows the energy error ratio
Δ = |(E-E_exact)/E_exact|
for the three forging schemes, i.e., Schrödinger forging with a random uniform set of bitstrings, Schrödinger forging with the generated set, and Heisenberg forging with the generated set. Following Ref. <cit.>, a pretraining of the quantum circuit over 1000 iterations is performed. In both cases, the unitaries take the form of a hardware-efficient ansatz
𝒰(Θ) = ∏_d=0^D-1[ U(θ_d^0) ∏_i=0^N/2CX_i,i+2· U(θ_d^1) ∏_i=1^N/2CX_i,i+2] U(θ_D),
where D=15 is the number of layers, CX_i,j is a CNOT gate with control qubit i and target j, while U(x) is an N-fold tensor product of arbitrary single-qubit rotations parameterized with 3N parameters. We denote by Θ the set containing all indexed θ_i^j. Details on the training procedure, such as the values of the hyperparameters and the optimization algorithm, can be found in App. <ref>.
Simulations, shown in Figure X, have been performed with different sizes of the sets of bitstrings. First, one can see that the accuracy does not change as the number of bitstrings increases when random bitstrings are used: it is exponentially unlikely to draw bitstrings with a high Schmidt coefficient. With such a poor set of bitstrings, the Schrödinger forging VQE tries to compensate for this bad choice by setting all the Schmidt coefficients to zero except one, keeping the contribution of only a single bitstring. In this case, given that the first gates are rotations, the circuit can effectively embed a better bitstring of its own. This strategy can be seen in Figure X, which presents the final Schmidt decomposition.
We observe that the choice of the random set has little impact on the performance of the Schrödinger forging procedure. Moreover, the models enhanced with the ARNN display better results, the two being quite similar.
To ensure that specific physical properties of the ground state, beyond its energy, are correctly reproduced, the spin-spin correlators Z^iZ^j of the forged states have been calculated. They are shown in Fig. <ref>. We observe that the accuracy does not degrade for correlators overlapping the two subsystems, suggesting that the error can be explained mainly by the training of the circuits rather than by the EF procedure. The error on the correlators Z^iZ^j is minimal when j=i+1 and maximal when the two spins are far apart in the chain. This can be explained by the locality of the ansatz, built using gates acting on neighboring qubits.
§.§.§ Spins in two dimensions
We now move towards two-dimensional spin lattices, which are more challenging due to local operators being mapped to non-local ones when projected onto a line. We consider the TFIM on a 2D topology described by a triangular lattice, as shown in Fig. <ref>, see App. <ref>. We break the permutation symmetry of the two subsystems by applying a random external field h_i∼ U[-1,1]. Setting the coupling constant to one, the Hamiltonian is given by
H = ∑_⟨ i,j⟩ Z^iZ^j + ∑_i=0^N-1 h_iX^i,
where ⟨ i,j⟩ are neighbors according to the triangular topology.
The triangular lattice has a high coordination number, leading to a strong magnetic susceptibility <cit.>: the system is more sensitive to external magnetic fields and can therefore exhibit stronger magnetic order and complex physical phenomena, such as disorder, localization, and heterogeneity.
The two-dimensional lattice is divided with a cut along the diagonal axis. We consider open boundary conditions (OBC), cylindrical boundary conditions (CBC), and toroidal boundary conditions (TBC). Since the boundary conditions can lead to different levels of entanglement <cit.>, they play an essential role in the EF procedure, which is why different configurations are considered.
The convergence of the variational energies for the three boundary conditions is shown in Fig. <ref>. As in the one-dimensional case, a cutoff of k=8 is chosen in the Schmidt decomposition. We observe that the bitstrings generated by the ARNN lead to an improvement of approximately 10^-2 in the energy error ratio with respect to taking a random set. The most striking result, though, is that the gap between the random and generated methods increases with respect to the one-dimensional case, suggesting that sampling with the ARNN becomes more effective for systems of increased complexity. On the other hand, no advantage can be noted in the context of TBC. It seems that the parametrization of the unitaries is then the limiting factor in improving the energy error.
§.§.§ Nuclear shell model
Finally, we consider light nuclei in the shell model with Cohen-Kurath <cit.> interactions, where the Hamiltonian can be written in second quantization as
H = ∑_i ϵ_i â_i^†â_i +1/2∑_ijklV_ijlkâ_i^†â_j^†â_k â_l .
Here, â_i^† and â_i are the creation and annihilation operators, respectively, for a nucleon in the state |i⟩. Single-particle energies are denoted as ϵ_i and two-body matrix elements as V_ijkl. The orbitals |i⟩ = |n=0, l=1, j, j_z, t_z⟩ are labeled by the radial quantum number n, the orbital angular momentum l, the total spin j, its projection j_z on the z-axis, and the z-projection t_z of the isospin.
We consider nucleons in the p shell model space, which includes six orbitals for the protons and six orbitals for the neutrons, while each energy is computed with respect to an inert 4He core. The shell-model Hamiltonian (see Eq. <ref>) is converted into a qubit Hamiltonian via the Jordan-Wigner <cit.> transformation. Each single-particle state is represented by a qubit, where |0⟩ and |1⟩ refer to an empty and an occupied state, respectively. Therefore, each nucleus can be distinguished by the number of excited orbitals, representing the protons and neutrons on top of the 4He core.
The partition is made at the isospin level, meaning that subsystem A consists entirely of protons while subsystem B consists of neutrons. Schrödinger forging is therefore the only possible choice, since the system is not symmetric under proton-neutron exchange. To build a given nucleus, we start from an appropriate initial state with the desired number of nucleons and act with an excitation-preserving (EP) ansatz. EP ansätze can be built as a product of two-qubit excitation-preserving blocks U(θ), also known as hop gates <cit.>, of the form
U(θ) = [ 1 0 0 0; 0 cos(θ) sin(θ) 0; 0 sin(θ) -cos(θ) 0; 0 0 0 1 ].
This set can be extended with four-qubit excitation-preserving gates <cit.>, defined as
G_i,j,k,l(ω)|0011⟩ = cos(ω/2)|0011⟩ + sin(ω/2)|1100⟩
G_i,j,k,l(ω)|1100⟩ = cos(ω/2)|1100⟩ - sin(ω/2)|0011⟩.
The parameterized circuit then takes the form of a layered ansatz composed of a product of excitation-preserving gates, where the d-th layer is described by
𝒰(Θ_d) = (⊗_i=0^N-1RZ_i(ϕ_d^i)) ∏_i=0^N/2-2U_2i,2i+1(θ_d^i)
×∏_i=1^N/2-3U_2i,2i+1(θ_d^i) ∏_i=0^N-4G_i:i+3(ω_d^i).
We denote by RZ_i(ϕ) a rotation of the i-th qubit around the z-axis, and the subscripts of the U and G gates indicate the qubits the gates act upon (i:j is a slice from i to j). The parameters Θ_d regroup all parameters in the d-th layer, i.e., Θ_d = {ϕ_d^i, θ_d^i, ω_d^i}. A sketch of the quantum circuit is depicted in App. <ref>.
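The excitation-preserving property of the hop gate is easy to verify numerically: it commutes with the two-qubit number operator and is real orthogonal. A small check:

```python
import numpy as np

def hop_gate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, c, s, 0.0],
                     [0.0, s, -c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

n_op = np.diag([0.0, 1.0, 1.0, 2.0])    # excitations in |00>,|01>,|10>,|11>
U = hop_gate(0.7)
print(np.allclose(U @ n_op, n_op @ U))  # True: excitation preserving
print(np.allclose(U @ U.T, np.eye(4)))  # True: real orthogonal (unitary)
```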
Since the Schmidt rank of the p nuclear shell model is at most 20, the generative algorithm is unnecessary, as all bitstrings in the Schmidt decomposition can be used. The energy minimization for the various nuclei is presented in Fig. <ref>. We observe that every ground state energy in the p shell can be reproduced with an error ratio of at most 10^-3, even for difficult nuclei such as 12C. Moreover, having access to the Schmidt decomposition allows us to evaluate the von Neumann entropy, whose evolution is presented in Fig. <ref> and can be of broader interest. Panel (a) shows the evolution of the von Neumann entropy during the training, while panel (b) displays a visualization of the von Neumann entropy in the parameter space. To this end, a principal component analysis (PCA) is performed on the entire history of the Schmidt coefficients, and a scan of the entropy along the two main components is presented. In addition, the entropy value is shown for each training epoch (in gray) and for the final state (in red).
In the final experiment, nucleons in the sd shell model space, which includes 12 orbitals for the protons and 12 orbitals for the neutrons, are considered. In this case, each energy is computed with respect to an inert 16O core. Using the Jordan-Wigner mapping, this model leads to a 24-qubit Hamiltonian composed of a total of 11'210 overlapping terms. This large number can make EF particularly expensive, as the cost scales linearly with it. However, since most of the coefficients are close to zero, an approximate Hamiltonian, consisting of the 38 overlapping terms with the largest coefficients, is considered instead. Despite this approximation, the Hamiltonian can still reproduce 97% of the ground state energy of the 23Na nucleus, which is the focus of this experiment.
The 23Na nucleus is composed of three protons and four neutrons on top of a 16O inert core. The ARNN sampler has therefore been modified to generate bitstrings with 3 ones in subsystem A (protons) and 4 ones in subsystem B (neutrons). The energy minimization and the final Schmidt decomposition of 23Na are presented in Fig. <ref> and Fig. <ref>, respectively. Once again, a higher accuracy is obtained with the generated set. Multiple states from the generated set contribute to the VQE, meaning that the ARNN is useful in selecting appropriate bitstrings. On the contrary, when the random set is used, the variational circuit prefers to adapt to one state and sets the contributions from the others to zero.
§ DISCUSSION AND CONCLUSION
This paper proposes an alternative way to perform Schrödinger forging using autoregressive neural networks. We build on top of the work from ibm_EF, which introduced the EF-based VQE, and on patrick_EF, which efficiently computes quantum expectation values as statistical expectation values over bitstrings sampled by a generative neural network. While their work leverages the additional permutation symmetry, our work is fully general and computationally efficient due to the introduction of a cutoff dimension. Moreover, the latter gives us additional control over the amount of quantum resources required. This is not the case in the Heisenberg forging scenario, as shown in App. <ref>, where the ARNN begins by sampling many bitstrings and finishes by using only one. Therefore, in this specific case, Heisenberg forging with neural networks is expensive at the beginning of the training and loses its expressive power at the end. On the other hand, Schrödinger forging enables better control over the trade-off between the expressiveness and the computational cost of the variational model, without assuming permutation symmetry of the two subsystems.
Numerical simulations have been performed on ring and triangular lattice spin systems. Schrödinger forging with the ARNN consistently achieves better performance for the computation of the ground state energy and correlators, compared with random sampling and Heisenberg forging with neural networks. In the case of the triangular lattice, different boundary conditions are considered, directly affecting the performance. The parameterization of unitaries is a limiting factor when complex boundary conditions are considered.
The most striking result is that the performance gap between random sampling and the ARNN increases with the system's complexity, suggesting that our approach will be more profitable for larger systems. Finally, the nuclear shell model is also solved using Schrödinger forging, up to a 10^-3 error ratio for the most complex nucleus. In the p shell, the generative algorithm is unnecessary, since the Schmidt rank is at most 20 and all bitstrings can be used. The approach is then tested on a larger nucleus in the sd shell model, 23Na. Once again, the generated set results in better accuracy than a random one.
Autoregressive models are easily interpreted and can naturally generate bitstrings with a certain number of excitations. They are also well suited for addressing the task at hand, owing to their robustness as density estimators. They do exhibit certain limitations, specifically in terms of sampling speed and the requirement for fixed-order input decomposition <cit.>. Nevertheless, the limited number of samples in this algorithm renders the issue of sampling speed inconsequential. Furthermore, experiments were conducted by varying the decomposition orders, and it was determined that such alterations did not yield any substantial changes in the obtained results. Masked multilayer perceptrons are a straightforward choice for building the autoregressive model. However, other architectures could be more suitable in some cases. In particular, transformers <cit.> provide a strong alternative since they are highly parallelizable and efficient in capturing the global context and long-range dependencies due to their attention mechanism.
At the beginning of this work, simulations were carried out on small models. In these cases, all bitstrings could be taken into account during the VQE. It was observed that the ordering of the bitstrings with respect to their coefficients (in absolute value) did not change significantly during the VQE. Therefore, choosing the bitstrings at the beginning of the circuit training already performs well. However, in some cases it could be suitable to train the quantum circuits and the ARNN simultaneously, taking advantage of parameter sharing.
Finally, we note that there is not necessarily a correlation between having sets of bitstrings associated with high Schmidt coefficients and the trainability of the corresponding variational state. Indeed, in some cases, taking a set of bitstrings with lower Schmidt coefficients might be favorable to make the variational circuits easier to train.
Therefore, it may be possible to include this feature in the algorithm by choosing bitstrings that maximize the gradients. Alternatively, an algorithm adapting the form of each variational circuit, an approach close to ADAPT-VQE <cit.>, could be investigated.
§ CODE AVAILABILITY
The numerical simulations of the quantum circuits have been performed with Pennylane <cit.>, powered by a JAX backend <cit.>, while the NETKET library <cit.> has been used for the ARNN. Solving the constraint systems of equations in the generative algorithm involves a projected gradient descent algorithm available in the JAXopt library <cit.>. Moreover, the Heisenberg forging code is available on Github <cit.>. Visualization of the evolution of the von Neumann entropy is performed using orqviz <cit.>.
The Python code of this project is accessible on Github <cit.>.
§ ACKNOWLEDGEMENT
The authors thank A. Mandarino and T. Papenbrock for stimulating discussions, as well as for the computation of the nuclear shell model's matrix elements. O.K., M.G. and S.V. are supported by CERN through the CERN Quantum Technology Initiative.
§ HEISENBERG FORGING
In this section, we briefly cover the basics of Heisenberg forging, which is described in more detail in <cit.>.
In this scenario, we assume a symmetric bipartition, i.e., U = V in the Schmidt decomposition, and find a more efficient way to compute the expectation value. We first need to decompose O as
O_A⊗ O_B + O_B⊗ O_A = a_0/2({O_A,O_B}⊗1 + 1⊗{O_A,O_B}) +∑_α,β∈{0,1}a_α,βC_α,β^*⊗ C_α,β,
where {·,·} denotes the anticommutator, |a_α,β|≤ 1 are real coefficients, and C_α,β are n-qubit Clifford operators defined below. Combining this with the Schmidt decomposition, and symmetrizing the observable, we obtain
⟨ψ|O|ψ⟩ = a_0∑_n λ_σ_n^2(⟨σ_n|U^† O_A O_B U|σ_n ⟩) +∑_α,β∈{0,1}a_α,β/2∑_n,mλ_σ_nλ_σ_m|⟨σ_m|U^† C_α,β U|σ_n⟩|^2.
Since O_A, O_B ∈{1,X,Y,Z}^⊗ N/2, we either have [O_A,O_B]=0 or {O_A,O_B}=0. In the first case, we can find a Clifford circuit V such that O_A = VZ_pV^† and O_B=VZ_qV^†. We can then define
C_α,β = 1/2V(1+(-1)^α Z_p + (-1)^β Z_q - (-1)^α+βZ_pZ_q )V^†
In the remaining case, we can simply use C_0,0 = (O_A+O_B)/√(2), C_0,1 = (O_A-O_B)/√(2) and a_1,0=a_1,1=0.
The estimation of the sums can be performed via Monte Carlo sampling, where the number of samples grows as 1/ϵ^2, with ϵ the additive error. However, the sampling step is not obviously scalable in the Schrödinger case, as discussed below. We also point out that, despite the sampling overhead, the individual quantum circuits are easier to implement than without the Schmidt decomposition, because they are shallower and require fewer qubits.
§ SAMPLING FROM THE ARNN
In this section, we provide more details on how to efficiently and directly sample the bitstrings with the ARNN. We proceed recursively, as shown in Fig. <ref>. We begin by sampling the first bit of the string, which is then given as an input to sample the second bit, and so on up to the last bit.
In the nuclear shell model, it is important to control the number of value-one bits appearing in the string, since each nucleus is defined by a certain number k of excited orbitals. Thus, the same procedure can be slightly modified to generate bitstrings with a fixed number of ones. Indeed, we just need to change the conditional probability in the sampling procedure, which can be done by setting p(σ_i|{(σ)_j, j<i}) = 0 if ∑_j<i (σ)_j = k. This ensures a maximum of k excitations. If, on the other hand, there are only l<k excitations at the end of the string, the last k-l bits are turned into ones to correct for it. While this leads to non-uniform sampling at the beginning of the training, we expect the ARNN to overcome this issue by incorporating it through the learning stage.
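A sketch of this constrained sampler is shown below. As a slight variant, it also clamps the conditional to one when all remaining bits are needed, enforcing exactly k ones during sampling instead of correcting the tail afterwards; the uniform conditional is a toy stand-in for the ARNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fixed_excitations(cond_prob, n_bits, k):
    """Autoregressive sampling constrained to exactly k ones: p(s_i = 1)
    is set to 0 once k ones appear, and to 1 when every remaining bit
    is needed to reach k."""
    bits = []
    for i in range(n_bits):
        ones = sum(bits)
        if ones == k:
            p1 = 0.0                     # already k excitations
        elif k - ones == n_bits - i:
            p1 = 1.0                     # must fill the rest with ones
        else:
            p1 = cond_prob(bits)
        bits.append(int(rng.random() < p1))
    return bits

print(sample_fixed_excitations(lambda prefix: 0.5, n_bits=6, k=3))
```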
A notable difference between the Schrödinger and Heisenberg forging schemes is that, for the latter, it is impossible to control how many states have to be prepared on the quantum hardware. Indeed, in this case, all bitstrings sampled by the ARNN must be taken into account. In practice, as shown in Fig. <ref>, many states must be prepared at the beginning of the training and only one at the end. In the case of Schrödinger forging, since the cutoff can be fixed at the beginning, the number of states to be prepared on the hardware is constant. Following Ref. <cit.>, a 1000-epoch pre-training of the unitaries has been performed. However, other optimization strategies could be considered.
§ OVERVIEW OF THE MANY-BODY HAMILTONIANS OF THE SMALL MODELS
Here, we present the many-body quantum Hamiltonians used for the numerical simulations. First, we consider spin models: the TFIM, the Heisenberg and the J_1-J_2 models on a 1d chain, and the 2d TFIM on a triangular lattice. We also consider fermionic models, namely the t-V model on a 4×3 grid and the nuclear shell model.
The Hamiltonian of the 1d transverse-field Ising model (TFIM) is
H = J ∑_i=0^N Z^iZ^i+1 + ∑_i=0^N X^i.
The Hamiltonian of the 1d Heisenberg model is
H = J ∑_i=0^N( X^iX^i+1 + Y^iY^i+1 + Z^iZ^i+1 ),
while for the 1d J_1-J_2 model we have
H = J_1 ∑_i=0^N( X^iX^i+1 + Y^iY^i+1 + Z^iZ^i+1 ) + J_2 ∑_i=0^N( X^iX^i+2 + Y^iY^i+2 + Z^iZ^i+2 ).
For these models, J=1, J_1=1, J_2=0.2, and Periodic Boundary Condition (PBC), i.e., N≡0, N+1≡1, are used. The topology of the spin chain with the separation between the two subsystems is presented in Fig. <ref> (a) and (b) for 14 and 20 spins, respectively.
The Hamiltonian of the 2D transverse-field Ising model (TFIM) is given by
H = ∑_⟨ i,j⟩ Z^iZ^j + ∑_i=0^N-1 X^i,
where ⟨ i,j⟩ are neighbors according to the triangular topology, see Fig. <ref>, which also shows the different cuts and boundary conditions. This model is more challenging due to local operators being mapped to non-local ones when projected onto a line. Moreover, it has a high coordination number which leads to a strong magnetic susceptibility <cit.>, meaning that the system is more sensitive to external magnetic fields and can exhibit stronger magnetic order.
The Hamiltonian of the t-V model is
H = -t∑_⟨ i,j⟩ (a_i^† a_j + a_j^† a_i) + V∑_⟨ i,j⟩a_i^† a_i a_j^† a_j,
with a_i^† and a_i being respectively the creation and annihilation operators on site i. A 4×3 system of spinless fermions with periodic boundaries and t=V=1 is considered. It is mapped to a qubit Hamiltonian with the Jordan-Wigner transformation. In this model, fermions are allowed to move on the grid, modifying the energy of the system. In this spinless version, there is only one spin-orbital per site, giving a final Hamiltonian of 12 qubits.
Non-Permutation Symmetric Models
Non-permutation-symmetric models have also been taken into account. The spin model used is the TFIM with a random uniform external magnetic field h_i ∼Uniform[-1,1]. The latter can lead to interesting and complex physical phenomena, for example disorder, localization, and heterogeneity. The Hamiltonian is
H = ∑_⟨ i,j⟩ Z^iZ^j + ∑_i=0^N-1 h_iX^i,
where ⟨ i,j⟩ are neighbors according to either a ring or a triangular lattice. For the latter, we investigate three different boundary conditions: open boundary conditions (OBC), cylindrical boundary conditions (CBC) and toroidal boundary conditions (TBC). The boundary conditions can lead to different levels of entanglement <cit.> and therefore play an essential role in the entanglement forging procedure. Illustrations of the different lattices are presented in Fig. <ref>.
Finally, we consider light nuclei in the shell model with CohenKurath <cit.> interactions. In this case, the Hamiltonian is first written in second quantization as
H = ∑_i ϵ_i â_i^†â_i +1/2∑_ijklV_ijlkâ_i^†â_j^†â_k â_l .
Here, â_i^† and â_i are the creation and annihilation operators, respectively, for a nucleon in the state |i⟩. Single-particle energies are denoted as ϵ_i and two-body matrix elements as V_ijkl. The orbitals |i⟩ = |n=0, l=1, j, j_z, t_z⟩ are labeled by the radial quantum number n, the orbital angular momentum l, the total spin j, its projection j_z on the z-axis, and the z-projection t_z of the isospin.
First, nucleons in the p shell model space are considered. This space includes six orbitals for the protons and six orbitals for the neutrons, while each energy is computed with respect to an inert 4He core. The shell-model Hamiltonian (<ref>) is converted into a qubit Hamiltonian by the Jordan-Wigner <cit.> transformation. Each single-particle state is represented by a qubit, where |0⟩ and |1⟩ refer to an empty and an occupied state, respectively. Therefore, each nucleus can be distinguished by the number of excited orbitals, representing the protons and neutrons on top of the 4He core. Each parameterized circuit hence needs to be excitation preserving (EP), which enables the construction of arbitrary nuclei starting from a suitable initial state. Moreover, the sampler of the ARNN needs to be adapted to generate bitstrings with a fixed number of ones.
The partition is made at the isospin level, meaning that subsystem A consists entirely of protons while subsystem B consists of neutrons. Since the system is not symmetric under proton-neutron exchange, Heisenberg forging cannot be used.
In the final experiment, nucleons in the sd shell model space, which includes 12 orbitals for the protons and 12 orbitals for the neutrons, are considered. In this case, each energy is computed with respect to an inert 16O core. Using the Jordan-Wigner mapping, this model leads to a 24-qubit Hamiltonian composed of a total of 11'210 overlapping terms. Since EF scales linearly with the number of overlapping terms, this makes the entanglement forging procedure quite expensive. Given that most of the coefficients of these overlapping terms are close to zero, and to make the simulations simpler to execute, we consider an approximate Hamiltonian consisting of the 38 overlapping terms with the largest coefficients. Given the model's cost, the simulations have only been carried out on one nucleus, 23Na. Reducing the Hamiltonian results in an error of around 3% on the ground state energy of 23Na.
§ MODELLING THE PROBABILITY DISTRIBUTION WITH A MORE STANDARD APPROACH
In this section, we present a more standard approach for modeling probability distributions, using the reversed KL divergence as the loss of the ARNN, and we show why it is unsuitable for the problem considered here.
This approach aims to model the full probability distribution |λ_σ|^2. Since we only have samples from the approximated probability distribution p(σ_A,σ_B), the reversed KL can be used to learn the best representation of the distribution self-consistently. Thus, at each iteration, the training set comprises bitstrings sampled from the approximation distribution given by the ARNN. The latter is then trained in a supervised way to model the target distribution |λ_σ|^2 by minimizing the reversed KL divergence
ℒ_ARNN^rev-KLD = 𝔼_σ∼ p[ logp(σ_A,σ_B)/λ_σ^2].
With this choice, wherever p(σ_A,σ_B) has a high probability, λ_σ^2 will also take a high value. This mode-seeking behavior is desired since the objective is mainly to sample bitstrings associated with a high Schmidt coefficient.
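A Monte Carlo estimate of this loss over model samples is a one-liner; the sketch below assumes the (hypothetical) input arrays hold p_θ(σ) and λ_σ^2 evaluated on bitstrings drawn from the model:

```python
import numpy as np

def reversed_kl(p_model, lam_sq, eps=1e-12):
    """E_{sigma ~ p_theta}[ log(p_theta(sigma) / lam_sq(sigma)) ],
    estimated over samples drawn from the model; eps avoids log(0)."""
    p = np.asarray(p_model) + eps
    t = np.asarray(lam_sq) + eps
    return float(np.mean(np.log(p / t)))
```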
For weakly entangled systems, which are desirable in order to keep a low additive error with the cutoff in the Schmidt decomposition, the target probability distribution is very sharp, as shown in Fig. <ref>. Such probability densities are very difficult to model with this approach. Indeed, with high probability, the training sets are composed of bitstrings associated with very small Schmidt coefficients. In this flat region (left of Fig. <ref>), the probability density appears uniform, and it is challenging to extrapolate the relevant bitstrings. Moreover, due to the normalization constraint, the Schmidt coefficients of a small set of bitstrings are not good estimators of the Schmidt coefficients of the ground-truth distribution.
However, this defeats our purpose of identifying bitstrings with a high Schmidt coefficient rather than modeling the entire probability distribution. Hence, adopting a training strategy that keeps the bitstrings with high Schmidt coefficients through the iterations is convenient. With such a training strategy, employing a loss composed of an average over the model data samples is impossible. The explicit form of the reversed KL divergence
ℒ_ARNN^expl-rev-KLD = ∑_σ p(σ_A,σ_B) [ logp(σ_A,σ_B)/λ_σ^2],
would be an alternative if the target probability distribution were not so sharp. Moreover, the reversed KL divergence is not a symmetric measure. Consequently, the gradients obtained from the reversed KL divergence may not provide stable and robust updates for the model when the predicted distribution diverges significantly from the target distribution. This lack of robustness makes it challenging to learn in highly uncertain situations or when the model needs to adapt to changes in the training set, which is the case here. The logcosh and MMD losses were therefore used, since they are more robust and better suited to modeling sharp distributions. Indeed, they do not suffer from the same limitations, since they focus on individual samples rather than on the overall distribution and do not overemphasize outliers.
§ OPTIMIZATION DETAILS AND HYPERPARAMETERS
In this section, details on the optimization procedure are given. During the VQE stage of the training, the AdaBelief optimizer <cit.> is used to update the quantum circuit parameters, while Nesterov's accelerated gradient descent <cit.> is used for the Schmidt coefficients. One update of the Schmidt coefficients is performed every ten iterations of the circuit parameters. The hyperparameters of the AdaBelief optimizer, following the convention of the original paper, are β_1=0.9, β_2=0.999 and ϵ=10^-16, while for Nesterov a momentum coefficient of 0.6 is used. In both cases, we set the learning rate between 0.1 and 0.01.
The generative algorithm is trained using AdaBelief with the same hyperparameters and a learning rate of 0.001. The ARNN comprises five hidden layers with a hidden-neuron density of α=2. At each iteration of the generative algorithm, the ARNN samples between 10 and 50 bitstrings to build the set G; the exact value has been manually tuned for each simulation. This choice influences the performance: low values cause the ARNN to converge very quickly, leading to spikes caused by the lack of generalization and overfitting, while high values deteriorate the algorithm's computational efficiency, convergence speed, and memory requirements, in the same way as batch sizes in stochastic gradient descent. For the ARNN, the LeCun normal initializer for the weights, zero initial biases, and the SELU activation function (λ = 1.0507, α = 1.6733) <cit.> were used.
§ VARIATIONAL CIRCUITS
In this section, we provide a visual example of the quantum circuits used for VQE ansätze. A layer of the variational circuit used for the spin systems is shown in Fig. <ref> for N=4 qubits, while Fig. <ref> shows one layer of the circuits used for the nuclear shell model.
§ EVOLUTION OF THE GENERATED SET
A histogram was produced to visualize the evolution of the ARNN training set. Shown in Fig. <ref>, it illustrates the number of times each bitstring has been present in the training set 𝒯=A' of the ARNN. The bitstrings present in the final set are shown in purple, and those present only during the course of the algorithm are shown in light blue. The illustrated example is taken at the end of the algorithm on the 2d TFIM with 12 spins, using the MMD loss. Bitstrings are ordered so that their associated Schmidt coefficients decrease (in absolute value). The bitstrings associated with the highest Schmidt coefficients are the ones most frequently seen by the ARNN.
At the end, the seven bitstrings associated with the largest Schmidt coefficients are present in the final set. The eighth bitstring in the set is bitstring number ten. Given that bitstrings eight and nine were seen during the algorithm, and that their associated squared Schmidt coefficients are very low and close to that of bitstring ten, this discrepancy can be explained by numerical errors in the determination of the Schmidt coefficients.
entry_id: http://arxiv.org/abs/2307.02112v1
published: 20230705083815
title: Deformation and quantisation condition of the $\mathscr{Q}$-top recursion
authors: Kento Osuga
primary_category: math-ph
categories: math-ph, hep-th, math.AG, math.MP
Graduate School of Mathematical Sciences, University of Tokyo,
3-8-1 Komaba, Meguro, Tokyo, 153-8914, Japan
osuga@ms.u-tokyo.ac.jp
We consider a deformation of a family of hyperelliptic refined spectral curves and investigate how deformation effects appear in the hyperelliptic refined topological recursion as well as in the 𝒬-top recursion. We then show a coincidence between a deformation condition and a quantisation condition in terms of the 𝒬-top recursion on a degenerate elliptic curve. We also discuss a relation to the corresponding Nekrasov-Shatashvili effective twisted superpotential.
Deformation and quantisation condition
of the 𝒬-top recursion
Kento Osuga
==============================================================
§ INTRODUCTION
The purpose of the present paper is twofold. One is to describe the so-called variational formula in the framework of the hyperelliptic refined topological recursion as well as the 𝒬-top recursion proposed in <cit.>. The other is to reveal an intriguing coincidence between a deformation condition and a quantisation condition in terms of the 𝒬-top recursion as an application of the variational formula.
§.§ Motivations and Backgrounds
Since motivations and backgrounds of a refinement of topological recursion are discussed in <cit.> in detail, we only give a brief review of recent developments in this direction.
As defined in <cit.> (and in <cit.> for a special class of genus-zero curves), a hyperelliptic refined spectral curve 𝒮_κ,μ consists of three data: a compactified and normalised Torelli-marked hyperelliptic curve C=(Σ,x,y) of genus g̃[We abuse the terminology and include curves of g̃=0,1.], complex parameters κ associated with the Torelli markings, and complex parameters μ associated with non-ramified zeroes and poles of a differential ydx[Strictly speaking, ydx has to be anti-symmetrised in terms of the hyperelliptic involution σ.]. We often drop `hyperelliptic' for brevity. Taking a refined spectral curve as initial data, the refined topological recursion constructs an infinite sequence of multidifferentials ω_g,n on Σ^n labeled by n∈ℤ_≥0 and g∈(1/2)ℤ_≥0 — g is different from the genus of Σ. <cit.> proved or conjectured properties of ω_g,n. Several results based on matrix models have also been discussed in e.g. <cit.>.
The multidifferentials ω_g,n polynomially depend on the refinement parameter 𝒬, up to 𝒬^2g. It is easy to see that the 𝒬-independent part precisely corresponds to the Chekhov-Eynard-Orantin topological recursion <cit.>. As shown in <cit.>, it turns out that the 𝒬-top degree part also gives rise to a self-closed recursion, and we call it the 𝒬-top recursion. That is, the Chekhov-Eynard-Orantin topological recursion and the 𝒬-top recursion are subsectors of the full refined topological recursion, and we respectively denote differentials in each subsector by ω_g,n^ CEO and ϖ_g,n to notationally distinguish them from ω_g,n.
For a family of hyperelliptic curves C( t) with some complex parameters t, one can consider the corresponding family of refined spectral curves _κ,μ( t) (with mild restrictions, e.g. ramification points should not collide each other under deformation of parameters). As a consequence, ω_g,n also depend on the parameters t, and one may ask: how do ω_g,n vary under a deformation with respect to t?
In the unrefined setting, this point has already been addressed in <cit.>, and we know how ω_g,n^ CEO( t) varies which is known as the variational formula[The variational formula in the unrefined setting is not limited to hyperelliptic curves.]. It can be thought of as a generalisation of the Seiberg-Witten relation <cit.>. However, it turns out that there is a subtlety and difficulty when one tries to apply the original Eynard-Orantin proof to the refined setting. Thus, we provide an equivalent interpretation of the variational formula (Definition <ref>) which becomes easier to apply to the refined topological recursion. With this perspective, we are able to state a refined analogue of the variational formula.
§.§ Summary of main results
The first achievement of the present paper is to prove the variational formula for the refined topological recursion when Σ=ℙ^1 (Theorem <ref>). However, since we have to fix several notations and technical aspects in order to remove the subtlety mentioned above, it is hard to state the variational formula here and we leave all the details to Section <ref>. Roughly speaking, it states that a certain deformation δ_t*ω_g,n with respect to t∈ t is related to an integral of ω_g,n+1 as follows:
δ_t*ω_g,n=∫_p∈γΛ(p)·ω_g,n+1,
where (γ,Λ) is defined in Definition <ref>. Let us emphasise that, in contrast to the unrefined setting, the variational formula (<ref>) holds only when a refined spectral curve 𝒮_κ,μ( t) satisfies a certain condition which we call the refined deformation condition (Definition <ref>). See Section <ref> for more details. Note that some properties of the refined topological recursion are still conjectural when Σ≠ℙ^1 <cit.>, hence the variational formula also remains conjectural in this case. We also note that <cit.> discuss a similar formula in a different refined setting.
Another achievement of the present paper is to uncover an intriguing coincidence between the refined deformation condition and what we call the 𝒬-top quantisation condition defined as follows. It is shown in <cit.> that the 𝒬-top recursion naturally constructs a second-order ordinary differential operator, called the 𝒬-top quantum curve. For a refined spectral curve 𝒮_κ,μ( t) whose underlying curve is given by y^2=Q_0(x), the associated 𝒬-top quantum curve is written in the following form
( ϵ_1^2d^2/dx^2-Q_0(x)-∑_k∈ℤ_≥1ϵ_1^k· Q_k(x)) ψ^𝒬-top(x)=0,
where ϵ_1 is a formal parameter, Q_k(x) is a rational function of x determined by {ϖ_h}_h for 2h<k, and the logarithmic derivative of ψ^𝒬-top(x) is a formal sum of ϵ_1^2g-1·ϖ_g,1 over g. In the context of topological recursion, one may sometimes require a condition on quantisation that the set of poles of Q_k(x) should be a subset of the poles of Q_0(x). Therefore, we say that a refined spectral curve 𝒮_κ,μ( t) satisfies the 𝒬-top quantisation condition if the 𝒬-top quantum curve respects the pole structure of Q_0(x) (Definition <ref>) — existence of a quantum curve in the full refined setting is proven only for a special class of genus-zero curves <cit.>, and in this case one can analogously consider the refined quantisation condition.
In order to deliver a clear picture about the coincidence between the refined deformation condition and the 𝒬-top quantisation condition, let us focus on the following example. For t∈ℂ^*, we consider a one-parameter family of curves C_t=(ℙ^1,x,y) where meromorphic functions (x,y) satisfy:
y^2-Q_0(x;t)=0, Q_0(x;t)=4(x-q_0)^2·(x+2q_0), q_0=√(-t/6).
This is the curve associated with the zero-parameter solution of the Painlevé I equation, and t plays the role of the Painlevé time <cit.>. Since ydx has a simple zero at the preimages of x=q_0, the corresponding refined spectral curve 𝒮_μ(t) carries one parameter μ∈ℂ, and ω_g,n depend both on t and μ.
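For illustration, the equivalence of this factorised form with the cubic form Q_0(x;t)=4x^3+2tx+8q_0^3 used below is a one-line symbolic check (sympy sketch):

import sympy as sp

x, t = sp.symbols('x t')
q0 = sp.sqrt(-t/6)
# expand 4(x - q0)^2 (x + 2 q0) and compare with 4x^3 + 2tx + 8 q0^3
print(sp.expand(4*(x - q0)**2*(x + 2*q0) - (4*x**3 + 2*t*x + 8*q0**3)))  # 0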
In this example, it turns out that 𝒮_μ(t) satisfies the refined deformation condition if and only if μ is set to a special value μ=μ_0 (Proposition <ref>). On the other hand, one can show that Q_k≥2(x;t,μ) has a pole at x=q_0 for a generic μ, which is a zero of Q_0(x;t). However, it turns out that when μ=μ_0, such poles disappear for all k, and thus the 𝒬-top quantisation condition is satisfied (Proposition <ref>). Therefore, we observe that the refined deformation condition and the 𝒬-top quantisation condition precisely agree, even though they originated from two different requirements. It is interesting to see whether this coincidence holds in other curves, e.g. curves discussed in <cit.> in relation to other Painlevé equations.
When μ=μ_0, the variational formula gives a relation between Q_k(x;t,μ_0) in (<ref>) and a derivative of F^𝒬-top_g:=ϖ_g,0 with respect to t — the former appears in the 𝒬-top quantisation and the latter is a consequence of a deformation of a refined spectral curve:
Consider the above family of refined spectral curves 𝒮_μ_0(t) satisfying the refined deformation condition and also the 𝒬-top quantisation condition. Then, the associated 𝒬-top quantum curve is given in the following form:
( ϵ_1^2d^2/dx^2-4x^3-2tx-2∑_g∈(1/2)ℤ_≥0ϵ_1^2g∂ F^𝒬-top_g/∂ t)ψ^𝒬-top(x)=0.
It is crucial to remark that there is no ϵ_1^2∂/∂ t term in (<ref>), in contrast to the quantum curve derived in <cit.> within the framework of the Chekhov-Eynard-Orantin topological recursion. Instead, a similar differential operator to (<ref>) has appeared in the context of conformal blocks in the semi-classical limit, or the so-called Nekrasov-Shatashvili limit, e.g. <cit.>. Note that they consider a genus-one curve whose singular limit becomes (<ref>), and we expect that the form of (<ref>) remains the same for the corresponding genus-one curve. Importantly, their arguments and Theorem <ref> suggest a conjectural statement that F^𝒬-top_g agrees with the so-called Nekrasov-Shatashvili effective twisted superpotential 𝒲_g^eff <cit.>, when a refined spectral curve is chosen appropriately:
∑_g∈(1/2)ℤ_≥0ϵ_1^2gF^𝒬-top_g ?=∑_g∈(1/2)ℤ_≥0ϵ_1^2g𝒲_g^eff:=ϵ_1ϵ_2 log Z^ Nek|_ϵ_2=0,
where Z^ Nek is the corresponding Nekrasov partition function <cit.> and the equality should be considered as a formal series in ϵ_1. See e.g. <cit.> for more about Nekrasov-Shatashvili effective twisted superpotentials. Note that for the curve associated with the Painlevé I equation, the Nekrasov partition function is not defined from an irregular conformal block perspective, whereas F^𝒬-top_g is perfectly well-defined. We hope that the present paper together with the notion of the 𝒬-top recursion <cit.> sheds light on verifying the above statement and also triggers a new direction between topological recursion, the 𝒬-top recursion, and invariants in the Nekrasov-Shatashvili limit.
§.§.§ Acknowledgement
The author thanks Nitin Chidambaram, Elba Garcia-Failde, Hajime Nagoya, Lotte Hollands, Kohei Iwaki, Omar Kidwai, Oleg Lisovyy, Nicolas Orantin for discussions and correspondences. This work is supported by JSPS KAKENHI Grant Number 22J00102, 22KJ0715, and 23K12968, also in part by KAKENHI Grant Number 20K14323 and 21H04994.
§ DEFINITIONS
We briefly review the refined topological recursion proposed in <cit.>. We refer the reader to <cit.> for more details.
[<cit.>]
A hyperelliptic refined spectral curve 𝒮_μ,κ consists of the collection of the following data:
* (Σ,x,y): a connected compact Riemann surface of genus g̃ with two meromorphic functions (x,y) satisfying
y^2-Q_0(x)=0,
where Q_0(x) is a rational function of x which is not a complete square. We denote by σ:Σ→Σ the hyperelliptic involution of x:Σ→ℙ^1 and by ℛ the set of ramification points of x, i.e. the set of σ-fixed points.
* (𝒜_i,ℬ_i,κ_i): a choice of a canonical basis 𝒜_i,ℬ_i∈ H_1(Σ,ℤ) and associated parameters κ_i∈ℂ for i∈{1,..,g̃},
* (𝒫_+,μ_p): a choice of a decomposition 𝒫_+⊔σ(𝒫_+)=𝒫 and associated parameters μ_p∈ℂ for all p∈𝒫_+ where 𝒫 is the set of unramified zeroes and poles of ydx.
Let us fix some notation before defining the refined topological recursion. First of all, throughout the present paper, g,h are in (1/2)ℤ_≥0, n,m in ℤ_≥0, i,j in {1,..,g̃} and a,b in {0,..,n}. We denote by B the fundamental bidifferential of the second kind, and for a choice of representatives of 𝒜_i for each i, we denote by η^p the fundamental differential of the third kind for p∈Σ normalised along each 𝒜_i-cycle. We write p_a∈Σ for each a, J:=(p_1,..,p_n)∈(Σ)^n, and J_0:={p_0}∪ J∈(Σ)^n+1. Assuming p_a∉ℛ∪σ(𝒫_+) for all a, we denote by C_+ a connected and simply-connected closed contour such that it contains all points in J_0∪𝒫_+ and no points in ℛ∪σ(J_0∪𝒫_+). With the assumption on p_a, one can always find such a contour and we drop the n-dependence on C_+ for brevity. Similarly, we denote by C_- a connected and simply-connected closed contour containing all points in ℛ∪σ(J_0∪𝒫_+) but not points in J_0∪𝒫_+. We call p∈ℛ ineffective if ydx is singular at p, and effective otherwise. We denote by ℛ^* the set of effective ramification points. We denote by 𝒫^0,∞_+∪σ(𝒫_+^0,∞) the set of unramified zeroes and poles of ydx respectively, and denote by C_-^𝔭 a connected and simply-connected closed contour inside C_- but not containing points in σ(𝒫_+^∞
). Finally, we fix 𝒬∈ℂ and we call it the refinement parameter.
[<cit.>]
Given a hyperelliptic refined spectral curve 𝒮_μ,κ, the hyperelliptic refined topological recursion is a recursive definition of multidifferentials ω_g,n+1 on (Σ)^n+1 by the following formulae:
ω_0,1(p_0): =y(p_0)· dx(p_0),
ω_0,2(p_0,p_1): =-B(p_0,σ(p_1)),
ω_1/2,1(p_0): =𝒬/2(-dΔ y(p_0)/Δ y(p_0)+∑_p∈𝒫_+μ_p·η^p(p_0)+∑_i=1^g̃κ_i·∫_ℬ_iB(·,p_0)),
and for 2g-2+n≥0,
ω_g,n+1(J_0):=1/2 π i(∮_p∈ C_+-∮_p∈ C_-)η^p(p_0)/4ω_0,1(p)· Rec_g,n+1^𝒬(p,J),
where
Rec_g,n+1^𝒬(p_0;J):= ∑^*_g_1+g_2=g
J_1⊔ J_2=Jω_g_1,n_1+1(p_0,J_1)·ω_g_2,n_2+1(p_0,J_2)+∑_t⊔ I=Jdx(p_0)· dx(t)/(x(p_0)-x(t))^2·ω_g,n(p_0,I)
+ ω_g-1,n+2(p_0,p_0,J)+𝒬· dx(p_0)· d_p_0(ω_g-1/2,n+1(p_0,J)/dx(p_0)),
and the * in the sum denotes that we remove terms involving ω_0,1.
As expected, it is shown in <cit.> that {ω_g,n+1}_g,n satisfies the Chekhov-Eynard-Orantin topological recursion when 𝒬=0. However, it is important to remark that it is conjectural that the above definition makes sense for 2g-2+n≥1 when Σ≠ℙ^1 or 𝒬≠0 — there is no issue when 2g-2+n=0. In particular, it has not been proven whether the above formula constructs symmetric multidifferentials ω_g,n+1 on (Σ)^n+1 — the definition only ensures the well-definedness within a fundamental domain due to η^p(p_0) in the formula. When Σ=ℙ^1, <cit.> proved several properties of ω_g,n+1 which are summarised below:
When Σ=ℙ^1, ω_g,n+1 are well-defined multidifferentials on (Σ)^n+1 and they satisfy the following properties:
* ω_g,n+1 are symmetric multidifferentials
* For 2g-2+n≥0, ω_g,n+1(p_0,J) has no residues as a differential in p_0, and their poles only lie in ℛ^*∪σ(J∪𝒫_+^0).
* For 2g-2+n≥0, let ϕ be any primitive of ω_0,1, then
(2-2g-n-1)·ω_g,n+1(J_0)=1/2π i∮_p∈ C^𝔭_-ϕ(p)·ω_g,n+2(p,J_0)
Theorem <ref> holds for any Σ.
As discussed in <cit.>, it is easy to see for each g,n that ω_g,n+1 polynomially depends on 𝒬 up to 𝒬^2g, and the recursion for the 𝒬-top degree part is self-closed, i.e. it can be constructed without the information of lower degree parts. We call it the 𝒬-top recursion, and explicitly it is defined as follows:
[<cit.>]
Given a hyperelliptic refined spectral curve 𝒮_μ,κ, the 𝒬-top recursion is a recursive definition of multidifferentials ϖ_g,n+1 on (Σ)^n+1 by the following formulae:
ϖ_0,1(p_0): =y(p_0)· dx(p_0),
ϖ_0,2(p_0,p_1): =-B(p_0,σ(p_1)),
ϖ_1/2,1(p_0): =1/2(-dΔ y(p_0)/Δ y(p_0)+∑_p∈_+μ_p·η^p_(p_0)+∑_i=1^g̃κ_i·∫__iB(·,p_0)),
and for 2g-2+n≥0,
ϖ_g,n+1(J_0):=1/2 π i(∮_p∈ C_+-∮_p∈ C_-)η^p(p_0)/4ω_0,1(p)· Rec_g,n+1^𝒬-top(p,J),
where
Rec_g,n+1^𝒬-top(p_0;J):= ∑^*_g_1+g_2=g
J_1⊔ J_2=Jϖ_g_1,n_1+1(p_0,J_1)·ϖ_g_2,n_2+1(p_0,J_2)
+∑_t⊔ I=Jdx(p_0)· dx(t)/(x(p_0)-x(t))^2·ϖ_g,n(p_0,I)+ dx(p_0)· d_p_0(ϖ_g-1/2,n+1(p_0,J)/dx(p_0)).
Note that there is no ϖ_g-1,n+2 in Rec_g,n+1^𝒬-top, unlike Rec_g,n+1^𝒬. Since the 𝒬-top recursion is a subsector of the refined topological recursion, Theorem <ref> holds for ϖ_g,n+1 too, as long as Σ=ℙ^1. We note that it is meaningful to define the 𝒬-top recursion independently and study it on its own. For example, as discussed in <cit.>, the 𝒬-top recursion would be relevant to the Nekrasov-Shatashvili limit, which is an active research area in mathematics and physics. In particular, <cit.> proved the following property for any Σ, not limited to Σ=ℙ^1:
ϖ_g,1 are well-defined residue-free differentials on Σ whose poles only lie in ℛ^*∪σ(𝒫_+^0), and there exists an ordinary second order differential equation of the following form:
( ϵ_1^2d^2/dx(p)^2-Q_0(x(p))-∑_k∈ℤ_≥1ϵ_1^kQ_k(x(p))) ψ^𝒬-top(p)=0
where Q_k(x) is a rational function of x explicitly constructed from ϖ_h,1 for 2h<k, and ψ^𝒬-top is a formal series in ϵ_1 defined by
ϵ_1 · dlogψ^𝒬-top(p):=∑_g≥0ϵ_1^2g·ϖ_g,1(p).
The associated differential operator (<ref>) is called the 𝒬-top quantum curve. Except for a special class of genus-zero curves investigated in <cit.>, existence of the refined quantum curve in full generality is still an open question.
When the underlying hyperelliptic curve depends on complex parameters t={t_1,..,t_n}, one can consider a t-parameter family 𝒮_κ,μ( t) of refined spectral curves as long as t are in a domain such that no points in ℛ∪𝒫 collide. All the above definitions and theorems hold for 𝒮_κ,μ( t). In the next section, we will consider how ω_g,n+1( t) behave while one varies t.
Before turning to the variational formula, let us define the free energy F_g, except F_0,F_1/2,F_1 which will be defined later:
[<cit.>]
For g>1, the genus-g free energies F_g, F_g^𝒬-top of the refined topological recursion and the 𝒬-top recursion are defined respectively as follows:
F_g: =ω_g,0:=1/2-2g1/2π i∮_p∈ C^𝔭_-ϕ(p)·ω_g,1(p),
F_g^𝒬-top: =ϖ_g,0:=1/2-2g1/2π i∮_p∈ C^𝔭_-ϕ(p)·ϖ_g,1(p).
§ VARIATION
The variational formula is proven in <cit.>, and originally it is explained as follows. Consider a one-parameter family of spectral curves 𝒮(t) in the unrefined setting. Then, x and y as functions on Σ depend on the parameter t and so do all ω_g,n+1(t). Then, <cit.> considers a special type of deformation, namely, variation for fixed x. This may sound contradictory with the fact that x depends on t, but what it really means is the following.
Set 𝒬=0. By choosing one of the branched sheets, one projects ω_g,n+1 down to ℙ^1 away from ramification points and treats them locally as multidifferentials on ℙ^1. The variation for fixed x means that we apply the partial derivative with respect to t to these multidifferentials on ℙ^1 with the understanding that ∂/∂ t dx_a=0, and apply the local inverse x^-1 to pull them back to differentials on Σ. That is, the variation symbol δ^EO_t in <cit.> acting on ω_g,n+1 means (c.f. <cit.>):
δ^EO_t*ω_g,n+1(p_0,..,p_n;t):=(∂/∂ tω_g,n+1(p_t(x_0),..,p_t(x_n);t))|_x_a=x(p_a),
where on the right-hand side we think of x as independent of t, and instead p_t depends on both t and x. We will denote by * the action of the variation in order to distinguish it from the standard product symbol · which we are using throughout the paper. The standard partial derivative notation ∂_t is commonly used in e.g. <cit.>, but we avoid this notation to emphasise that the operation is not just a partial derivative.
We will provide another equivalent description of the variation operation without considering the projection and inverse. The motivation of introducing such a new perspective is for the clarity of the proof of the variational formula when ≠0. The original proof by Eynard and Orantin is based on a graphical interpretation whose analogue does not exist in the refined setting, at least at the moment of writing. As a consequence, we need to directly evaluate the variation of the refined recursion formula (<ref>), and in this case, taking the projection and the inverse becomes subtle because C_± contains J_0 and σ(J_0).
Given _μ,κ(t), the topological recursion variational operator δ_t^(n) is a differential operator acting on meromorphic functions on (Σ)^n defined by
δ_t^(n):=d/d t-∑_a=1^n∂ x(p_a)/∂ t1/dx(p_a)d_p_a,
where (p_1,..,p_n)∈(Σ∖ℛ)^n and d_p_a denotes the exterior derivative with respect to p_a. We extend the action of δ_t^(n) to a meromorphic multidifferential ω on (Σ)^n by
δ_t^(n)*ω(p_1,..,p_n;t):=(δ_t^(n)*ω(p_1,...,p_n;t)/dx(p_1)⋯ dx(p_n))· dx(p_1)⋯ dx(p_n).
Note that this definition is valid not only for hyperelliptic curves but also for any algebraic curves. It can be generalised to a multi-parameter family in an obvious way. δ_t^(n) is defined only when each p_a∉ℛ, which resonates with the fact that one has to choose a branch in the Eynard-Orantin description. Importantly, the above definition implies
δ_t^(1)* x=0, δ_t^(1)* dx=0,
and for a differential w on (ℙ^1)^n, its pullback to (Σ)^n satisfies
δ_t^(n)* w(x(z_1),...,x(z_n);t)=∂/∂ tw(x(z_1),...,x(z_n);t).
Thus, δ_t^(n) in fact serves as the variation for fixed x. Furthermore, we have
δ_t^(1)* ydx=∂ y/∂ tdx-∂ x/∂ tdy,
which corresponds to <cit.>. From now on, we omit writing the t-dependence of functions and multidifferentials.
Perhaps, the conceptual motivation of the action of δ^(n)_t becomes clearer when one thinks of the underlying hyperelliptic curve from the Hitchin perspective <cit.>. A Hitchin spectral curve (of rank 2) is given by a triple (Σ^o,φ,π) where π:Σ^o→ℙ^1, φ is a quadratic differential on ℙ^1, and Σ^o is embedded in T^*ℙ^1 as
Σ^o={λ∈ T^*ℙ^1 | λ^⊗2=π^*φ}⊂ T^*ℙ^1.
Our Σ would be obtained after normalisation and compactification of Σ^o. By interpreting π=x and φ=(ydx)^⊗2, variation for fixed x means that one varies the quadratic differential φ while keeping the projection π=x invariant.
Given an unrefined spectral curve 𝒮(t), let us assume existence of a pair (γ,Λ) such that γ is a path in Σ∖ℛ and Λ is a function holomorphic along γ satisfying
δ_t^(1)* ω_0,1(p_1)=:∫_p∈γΛ(p)·ω_0,2(p,p_1).
Then, <cit.> showed that the following relation holds for g,n∈ℤ_≥0 by using the graphical interpretation of the unrefined topological recursion formula, which is known as the variational formula[The variational formula in the unrefined setting is not limited to hyperelliptic curves.]:
δ_t^(n+1)*ω_g,n+1(J_0)=∫_p∈γΛ(p)·ω_g,n+2(p,J_0).
The difficulty in generalising the variational formula to the refined setting arises due to the more complicated pole structure of {ω_g,n+1}_g,n. Nevertheless, if we restrict the pair (γ,Λ) to certain classes as below, a refined analogue still holds when Σ=ℙ^1, and we expect that it works for any Σ in general.
For a pole s∉ℛ and a pole r∈ℛ of ydx, let x(s)=x_s, x(r)=x_r, and suppose ω_0,1 behaves locally as
ω_0,1=±(∑_k=0^m_st_s,k/(x-x_s)^k+1+𝒪(1))dx, ω_0,1=(∑_k=1^m_rt_r,k/(x-x_r)^k+𝒪(1))dx/2√(x-x_r)
Let Λ_s,k,Λ_r,k be the corresponding meromorphic functions on Σ such that
1/2(Res_p=s-Res_p=σ(s))Λ_s,k(p)^-1·ω_0,1(p)=t_s,k, Res_p=rΛ_r,k(p)^-1·ω_0,1(p)=t_r,k.
<cit.> show a construction of each Λ_s,k,Λ_r,k, at least locally. Note that their poles are at most of order m_s-1 and m_r-1 respectively.
[<cit.>]
Given 𝒮_κ,μ( t), (γ,Λ) is said to be a generalised cycle if it falls into one of the following kinds:
I : γ∈{ℬ_i}_i∈{1,..,g̃} and Λ=1
II : Let p∈Σ be an m_p-th order pole of ω_0,1 where m_p≥2. Then, for k∈{1,..,m_p-1}, Λ_p,k is given as in (<ref>), and γ_p,k is a union of contours encircling p and σ(p) in the opposite orientation if p∉ℛ, and γ_p,k is a contour encircling p if p∈ℛ.
III : Let p∈Σ be a location of a residue of ω_0,1, which necessarily means p∉ℛ. Then, γ_p is an open path from σ(p) to p within a fundamental domain, and Λ_p=1.
The corresponding parameters t_(γ,Λ) defined by the expansion (<ref>) are called 2nd kind times or 3rd kind times, whereas 1st kind times are defined by
t_i:=1/2π i∮_𝒜_iω_0,1,
1st, 2nd, and 3rd kind times are respectively called filling fractions, temperatures, and moduli of the poles in <cit.>. All generalised cycles (γ,Λ) are anti-invariant under σ when applied to integration. 2nd and 3rd kind times are often referred to as KP times, and their relation to KP systems is discussed in <cit.>.
We consider a refined spectral curve 𝒮_κ,μ( t) such that t_1,..,t_| t|∈ t are defined as above, which are independent of each other, and we denote by (γ_1,Λ_1),..,(γ_| t|,Λ_| t|) the associated generalised cycles. In this setting, the variational formula (<ref>) holds in the unrefined setting as shown in <cit.>. However, when 𝒬≠0, it turns out that an analogous statement holds if 𝒮_κ,μ( t) satisfies an additional condition, which we call the refined deformation condition:
Consider 𝒮_κ,μ( t) parameterised by times of the 1st, 2nd, and 3rd kind t=(t_1,..,t_| t|). We say that 𝒮_κ,μ( t) satisfies the refined deformation condition with respect to t_l for l∈{1,..,| t|} if the following holds:
δ_t_l^(1)* ω_1/2,1(p_1)=∫_q∈γ_lΛ_l(q)·ω_1/2,2(q,p_1).
We say that _κ,μ( t) satisfies the refined deformation condition if the above holds for all l.
Note that in the unrefined setting the variational formula (<ref>) for (g,n)=(0,1) automatically holds if ω_0,2=B. Even if ω_0,2 is defined differently, it is then observed in e.g. <cit.> that the variational formula still works for the rest of ω_g,n+1, as long as the variational relation (<ref>) holds for (g,n)=(0,1). In other words, it has to be rather imposed as a supplemental condition in addition to (<ref>). The refined deformation condition (Definition <ref>) is analogous to this observation.
Finally, we will state the variational formula in the refined setting, whose proof is entirely given in Appendix <ref> and <ref> because it is lengthy:
When Σ=ℙ^1, assume that 𝒮_κ,μ( t) satisfies the refined deformation condition with respect to t_l for l∈{1,..,| t|}. Then, ω_g,n+1 and F_g (g>1 for F_g) satisfy:
∂ F_g/∂ t_l= ∫_p∈γ_lΛ_l(p)·ω_g,1, δ_t_l^(n+1)*ω_g,n+1(J_0)=∫_p∈γ_lΛ_l(p)·ω_g,n+2(p,J_0).
Theorem <ref> holds for any Σ.
§ EXAMPLES
We will now apply the variational formula to several examples.
§.§ Hypergeometric type curves
Hypergeometric type curves are the classical limit of a confluent family of Gauss hypergeometric differential equations, and they are discussed in <cit.> in relation to the BPS invariants and Stokes graphs. Hypergeometric type curves are classified into nine types based on their pole structure, and seven of them depend on parameters. <cit.> already write all the seven types of curves in terms of 3rd kind times, which they denote by m_p rather than t_p. Then, the question one should ask is whether the corresponding refined spectral curve _μ( t) satisfies the refined deformation condition. Hypergeometric type curves are main examples considered in <cit.>.
Every refined spectral curve _μ( t) associated with a hypergeometric type curve in the form of <cit.> satisfies the refined deformation condition.
The proof is done by explicit computations. Since they are genus-zero curves, a rational expression of x,y is given in e.g. <cit.> in terms of a coordinate z on ℙ^1, from which one can construct the variational operator δ_t^(1) for all t∈ t with respect to z. Then, all one has to do is to compute ω_1/2,1(z_0) and ω_1/2,2(z_0,z_1) from the refined topological recursion and explicitly check the refined deformation condition. See Appendix <ref> where we present explicit computations for a few examples.
One can use the variational formula as the defining equation for F_1/2 and F_1 as follows — since all ω_0,n are independent of the refinement parameter 𝒬, we can define F_0 as <cit.> does:
For a refined spectral curve _μ( t) associated with a hypergeometric type curve, F_1/2 and F_1 are defined as a solution of the following differential equations for all k,l∈{1,..,| t|}:
∂^2 F_1/2/∂ t_k∂ t_l:=∫_p_1∈γ_k∫_p_2∈γ_lω_1/2,2(p_1,p_2), ∂ F_1/∂ t_k:=∫_p_1∈γ_kω_1,1(p_1),
where F_1/2 is defined up to linear terms in t_l and F_1 is defined up to constant terms.
Since Λ=1 for the 3rd kind, we immediately obtain the following:
For a refined spectral curve _μ( t) associated with a hypergeometric type curve, we have the following for 2g-2+n≥1:
∏_a=1^n∂/∂ t_l_a F_g=∏_a=1^n∫_p_a∈γ_l_aω_g,n(p_1,..,p_n).
Corollary <ref> becomes useful to derive a relation between refined BPS structures <cit.> and the refined topological recursion, as a generalisation of <cit.>. For a general refined spectral curve _κ,μ( t), not limited to hypergeometric type curves, we will define F_1/2,F_1 in a similar way to Definition <ref>. See Remark <ref>.
§.§ A degenerate elliptic curve
Let us consider the case where x and y satisfy the following algebraic equation:
y^2-Q_0(x)=0, Q_0(x):=4(x-q_0)^2(x+2q_0)=4x^3+2tx+8q_0^3, q_0=√(-t/6).
A convenient rational expression of x,y in terms of a coordinate z on Σ=ℙ^1 is
x(z)=z^2-2q_0, y(z)=2z(z^2-3q_0)=2z(z^2-q_z^2),
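One can verify directly that this parameterisation solves the curve equation (sympy sketch, illustration only):

import sympy as sp

t, z = sp.symbols('t z')
q0 = sp.sqrt(-t/6)
x = z**2 - 2*q0
y = 2*z*(z**2 - 3*q0)
print(sp.expand(y**2 - (4*x**3 + 2*t*x + 8*q0**3)))   # 0: the curve equation holds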
where for brevity, we set q_z:=√(3q_0). It appears in a singular limit (as an algebraic curve) of the following elliptic curve,
y^2=4x^3-g_2x-g_3,
where for generic g_2,g_3 we can write x,y in terms of the Weierstrass ℘-function as x=℘ and y=℘'. In <cit.>, the curve (<ref>) or (<ref>) is chosen as a spectral curve of the Chekhov-Eynard-Orantin topological recursion, and a relation between the free energy and a τ-function of the Painlevé I equation is proven.
With the above parameterisation, the hyperelliptic involution σ acts as σ:z↦-z, and ℛ={0,∞} with ℛ^*={0}. Note that ω_0,1(z) has a simple zero at z=± q_z, hence we choose 𝒫_+={q_z} and we assign μ∈ℂ to z=q_z. Since H_1(Σ,ℤ)=0 in this example, the above choice uniquely defines a refined spectral curve 𝒮_μ(t). Theorem <ref> then implies that ω_g,n+1(z_0,J) have poles, as a differential in z_0, at z_0=0,-z_1,..,-z_n,-q_z when 2g-2+n≥0.
As shown in <cit.>, t in (<ref>) plays the role of a 2nd kind time, and the corresponding generalised cycle can be decoded from the following equations
Λ_t(z):=-z+c q_0/z, Res_z=∞Λ_t(z)^-1·ω_0,1(z)=t, δ^(1)_t*ω_0,1(z_0)=Res_z=∞Λ_t(z)·ω_0,2(z,z_0),
where c is one of the roots of 2c^2-6c+3=0. The second term in Λ_t is irrelevant in the last equation in (<ref>), and it is indeed absent in <cit.>, though it is necessary for the second equation. Now one may ask: does every 𝒮_μ(t) satisfy the refined deformation condition, similar to hypergeometric type curves (Proposition <ref>)? Here is the answer to that question:
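For illustration, the constraint 2c^2-6c+3=0 can be rederived symbolically by computing the residue at infinity (sympy sketch; we substitute z=1/u with dz=-du/u^2):

import sympy as sp

t, z, u, c = sp.symbols('t z u c')
q0 = sp.sqrt(-t/6)
w01 = 4*z**2*(z**2 - 3*q0)          # omega_{0,1}/dz for this curve
Lam = -z + c*q0/z
res_inf = sp.residue(sp.together((w01/Lam).subs(z, 1/u)*(-1/u**2)), u, 0)
print(sp.factor(sp.simplify(res_inf - t)))  # vanishes iff 2c^2 - 6c + 3 = 0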
Let _μ(t) be a refined spectral curve defined as above. Then, it satisfies the refined deformation condition if and only if μ=1.
The proof is again by explicit computations, similar to Proposition <ref>. That is, we explicitly write the variational operator δ_t^(1) in terms of t and z, and confirm when (<ref>) is satisfied. Since everything can be expressed as rational functions, it is easy to find that μ=1 is the only solution. See Appendix <ref> for computations.
Note that, unlike ω_g,n+1 for 2g-2+n≥0, poles of ω_1/2,1(z_0) are all simple and they are located not only at z_0=0,-q_z but also at z_0=q_z,∞ whose residues are given as:
Res_z=0 ω_1/2,1(z)=-𝒬/2, Res_z=∞ ω_1/2,1(z)=3𝒬/2, Res_z=± q_z ω_1/2,1(z)=𝒬/2(-1±μ)
Therefore, the refined deformation condition is satisfied exactly when ω_1/2,1 becomes regular at 𝒫_+. Even if we choose 𝒫_+={-q_z} instead, this aspect remains correct. That is, the refined deformation condition for this curve is equivalent to the condition that ω_1/2,1 becomes regular at 𝒫_+, no matter how 𝒫_+ is chosen.
§.§.§ 𝒬-top quantum curve
Theorem <ref> shows that the 𝒬-top recursion can be utilised to quantise a refined spectral curve. For a general refined spectral curve 𝒮_κ,μ( t), not limited to the above example, we introduce the following terminology:
We say that a refined spectral curve 𝒮_κ,μ( t) satisfies the 𝒬-top quantisation condition if for each k the set of poles of Q_k≥1^𝒬-top is a subset of that of Q_0.
We return to our example and consider the 𝒬-top quantisation condition for 𝒮_μ(t).
The above refined spectral curve 𝒮_μ(t) satisfies the 𝒬-top quantisation condition if and only if μ=1.
The proof is again by computations. The formula in <cit.> gives
Q_1^𝒬-top(z_0):=ϖ_0,1(z_0)/dx(z_0)^2·μ·η^q_z(z_0)=2q_z·μ,
Q_k≥2^𝒬-top(z_0):=2ϖ_0,1(z_0)· R^𝒬-top_k/2,1(z_0)/(dx(z_0)· dx(z_0)), R^𝒬-top_k/2,1(z_0)=Res_z=q_zη^z(z_0)/2ω_0,1(z)· Rec_k/2,1^𝒬-top(z).
The if part is easy to see. Setting μ=1, (<ref>) implies that ω_1/2,1 becomes regular at z=q_z, hence Q_k≥2^𝒬-top becomes regular at x=q_0. See Appendix <ref> for the only-if part.
Therefore, the refined deformation condition and the 𝒬-top quantisation condition agree for this example. Note that any refined spectral curve of hypergeometric type satisfies the 𝒬-top, and in fact the refined, quantisation condition. We expect that no additional condition will appear in the full refined quantisation, and it is interesting to see whether this coincidence holds for other curves, e.g. curves related to other Painlevé equations <cit.>.
To close, we prove that the 𝒬-top quantum curve for 𝒮_μ=1(t) is written in terms of the 𝒬-top free energy F_g^𝒬-top; the proof will be given in Appendix <ref>. <cit.> discuss a similar equation in the context of accessory parameters and conformal blocks in the Nekrasov-Shatashvili limit. Thus, we conjecture that the 𝒬-top free energy F_g^𝒬-top coincides with the Nekrasov-Shatashvili effective twisted superpotential <cit.> even when Σ≠ℙ^1, as long as an appropriate refined spectral curve is chosen.
For 𝒮_μ=1(t) described above, the 𝒬-top quantum curve is given as:
( ϵ_1^2d^2/dx(p)^2-4x^3-2tx-2∑_g∈(1/2)ℤ_≥0ϵ_1^2g∂ F_g^𝒬-top/∂ t) ψ^𝒬-top(p)=0,
where F_1/2^𝒬-top and F_1^𝒬-top are defined as solutions of the following differential equations:
∂^2/∂ t^2F_1/2^𝒬-top=Res_z_1=0Res_z_0=0 Λ_t(z_1)·Λ_t(z_0)·ω_1/2,2(z_0,z_1), F_1/2^𝒬-top|_t=0=∂/∂ tF_1/2^𝒬-top|_t=0=0.
∂/∂ tF_1^𝒬-top=Res_z_0=0 Λ_t(z_0)·ω_1,1(z_0), F_1^𝒬-top|_t=0=0.
§ PROOFS
Throughout the Appendix, we set Σ=ℙ^1. We will give detailed computations for most of the propositions and theorems of the present paper.
§.§ Proof of Theorem <ref>: for ω_g,n+1
We assume that a refined spectral curve _μ(t) carries one time t of either 2nd or 3rd kind, and we denote by (γ,Λ) the associated generalised cycle. The arguments below can be easily generalised to curves with several times.
Let us first introduce convenient notations. First, for any multidifferential ω(p,J), we denote its anti-invariant part under σ by
Δ_pω(p,J):=ω(p,J)-ω(σ(p),J),
where the subscript shows the variable we are considering for the above operation. Next, in order to specify variables for the variational operator, we sometime use the following notation
δ_t^(p_1,..,p_n)=δ_t^(n)=d/d t-∑_a=1^n∂ x(p_a)/∂ t1/dx(p_a)d_p_a.
Then, we can extend the action of the variational operator to meromorphic functions on (Σ)^m for m≠ n without any issue.
§.§.§ Useful lemmas
We show how the variational operator δ_t^(n) behaves on a product of functions and differentials:
Let f(p,p_0) be a meromorphic function of p and differential in p_0 and ω(p,p_1) a meromorphic bidifferential on Σ. Then, for any o∈Σ, we have the following:
δ_t^(p,p_0,p_1)*(f(p,p_0)·ω(p,p_1))= δ_t^(p,p_0)*(f(p,p_0))·ω(p,p_1)+ f(p,p_0)·δ_t^(p,p_1)*ω(p,p_1)
δ_t^(p_0,p_1)*Res_p=o f(p,p_0)·ω(p,p_1)=Res_p=o δ_t^(p,p_0,p_1)*(f(p,p_0)·ω(p,p_1))
(<ref>) is just a Leibniz rule for the variational operator, and it is straightforward.
On the other hand, we need a more careful consideration to prove (<ref>). Let us first show that δ_t^(p_0,p_1) commutes with Res_p=o, no matter if o depends on p_0, p_1 or t. Let z be a local coordinate around o, and suppose the integrand of the left-hand side is expanded at z(p)=z(o) as
f(p,p_0)·ω(p,p_1)= ∑_k∈ℤh_k(p_0,p_1,o)·dz(p)/(z(p)-z(o))^k+1,
where h_k(p_0,p_1,o) are bidifferential in p_0,p_1. Then, after taking the residue, the left-hand side of (<ref>) is simply
L.H.S. of (<ref>)= δ_t^(p_0,p_1)*h_0(p_0,p_1,o).
On the other hand, since z(p) can be thought of as a constant in terms of δ_t^(p_0,p_1), we find
δ_t^(p_0,p_1)*(f(p,p_0)·ω(p,p_1))=∑_k∈ℤ( δ_t^(p_0,p_1)*(h_k(p_0,p_1,o))·dz(p)/(z(p)-z(o))^k+1
-h_k(p_0,p_1,o)·δ_t^(p_0,p_1)*(z(o))·(k+1)dz(p)/(z(p)-z(o))^k+2),
where δ_t^(p_0,p_1)*(z(o)) can be nonzero if o depends on p_0, p_1, or t. Nevertheless, the second term in (<ref>) will have no contributions after taking the residue, and we have shown that δ_t^(p_0,p_1) commutes with Res_p=o. One may interpret this result such that a closed contour encircling p=o can be chosen independently from the time t.
Our last task is to transform δ_t^(p_0,p_1) into δ_t^(p,p_0,p_1), that is, the variational operator becomes effective with respect to the variable of integration p as well. In fact, by the chain rules, we find
δ_t^(p_0,p_1)*(f(p,p_0)·ω(p,p_1))=δ_t^(p,p_0,p_1)*(f(p,p_0)·ω(p,p_1))+d_p(f(p,p_1)·ω(p,p_1)/dx(p)∂ x(p)/∂ t).
Then since f and ω are both meromorphic, the last term vanishes after taking residue.
Lemma <ref> can be easily generalised to δ_t^(p_0,p_1,..,p_n) for any n. We next recall useful results given in <cit.> (see also <cit.>):
For 𝒮_μ(t), we have
δ_ϵ^(2) * ω_0,2(p_0,p_1)= δ_ϵ^(2) * B(p_0,p_1)
= -∑_r∈ℛRes_p=rη^p(p_0)/4 ω_0,1(p)·(B(p,p_1)-B(σ(p),p_1))·δ_ϵ^(1)*ω_0,1(p)
= ∫_γΛ(p)·ω_0,3(p,p_0,p_1),
δ_ϵ^(2) * η^p(p_0)= ∑_r∈ℛRes_q=rη^q(p_0)/2ω_0,1(q)·η^p(q)·δ_ϵ^(1)*ω_0,1(q)
= -1/2π i∮_q∈ C_+η^q(p_0)/ω_0,1(q)·η^p(q)·δ_ϵ^(1)*ω_0,1(q),
where p∈Σ is independent of t and p_0,..,p_n and C_+ is defined in Section <ref>.
Note that, strictly speaking, <cit.> only shows the first line of (<ref>), and the second equality is a consequence due to <cit.> and the invariance of the integrand under σ on q. With this property, we will show another lemma which is equivalent to e.g. <cit.>:
Let ω(p;p_1,..,p_n) be a meromorphic quadratic differential in p and a multidifferential in p_1,..,p_n for some n∈ℤ_≥0. Then, we have
1/2π i∫_p∈ C_+ω(p;J)·δ_t^(2)*(η^p(p_0)/2ω_0,1(p))
=1/2π i∫_p∈ C_+η^p(p_0)/2ω_0,1(p)·2δ_ϵ^(1)*ω_0,1(p)·(1/2π i∫_q∈ C_+η^q(p)/2ω_0,1(q)·ω(q;J)).
Let us focus on the contribution from the action of δ_t^(2) on η^p(p_0). Thanks to Lemma <ref>, the corresponding term becomes
(1/2π i)^2∫_p∈ C_+∫_q∈ C_+ω(p;J)/2ω_0,1(p)·η^q(p_0)/ω_0,1(q)·η^p(q)·δ_ϵ^(1)*ω_0,1(q),
where C_+ with respect to q contains q=p inside. We now exchange the order of residues as follows (c.f. <cit.>, <cit.>)
∫_p∈ C_+∫_q∈ C_+=∫_q∈ C_+(∫_p∈ C_+-2π i_p=q),
where C_+ with respect to p on the right-hand side contains p=q inside. Thus, we have
(<ref>)= 1/2π i∫_q∈ C_+η^q(p_0)/2ω_0,1(q)·2δ_ϵ^(1)*ω_0,1(q)·(1/2π i∫_p∈ C_+η^p(q)/2ω_0,1(p)·ω(p;J))
+1/2π i∫_q∈ C_+ω(q;J)·η^q(p_0)/2ω_0,1(q)^2·δ_t^(2)*ω_0,1(q)
After relabeling p↔ q, one notices that the second term in (<ref>) precisely cancels the contribution of the action of δ_t^(1) on ω_0,1(p) on the left-hand side of (<ref>).
Recall that every time of the 2nd or 3rd kind is associated with a pole of ω_0,1, and we denote by p_-∈σ(𝒫^∞_+) the corresponding pole inside C_-, and as a consequence p_+:=σ(p_-) is inside C_+ if it is not a ramification point, whereas p_+=p_- is inside C_- if it is a ramification point — recall that we are not allowing a deformation such that p_± approach each other. Then, we can show the following property:
The following function in p_0
δ_t^(1)*ω_0,1(p_0)/ω_0,1(p_0)
is holomorphic at p_0=p_±, and for 2g-2+n≥-1, the following differential in p_0 is regular at p_0=p_±:
∫_q∈γΛ(q)·ω_g,n+2(q,p_0,J).
(<ref>) means that the pole order of ω_0,1 does not get higher even after taking the variation. This is because we are only considering generalised cycles (γ,Λ), which by definition guarantees that the pole order of Λ(q) at q=p_± is at most m-1 for an m-th order pole of ω_0,1. Then, since ω_0,2(q,p_0) has a double pole at q=σ(p_0), we find that δ_t^(1)*ω_0,1 has at most an m-th order pole, hence (<ref>) is regular as a function in p_0 at p_0=p_±.
As for (<ref>), Theorem <ref> shows that all poles of ω_g,n+2(q,p_0,J) for 2g-2+n≥-1 with respect to q lie in ℛ^*∪σ(J_0∪𝒫_+^0). Thus, a pole of (<ref>) at p_0=p_+ can only come from the pole of the integrand ω_g,n+2(q,p_0,J) at q=σ(p_0), hence we focus on this contribution.
As derived in <cit.>, the pole of ω_1/2,2(q,p_0) at q=σ(p_0) arises from the following term:
ω_1/2,2(q,p_0)=-𝒬/4(d_qΔ_q ω_0,2(q,p_0)/ω_0,1(q)+d_p_0Δ_p_0ω_0,2(q,p_0)/ω_0,1(p_0))+reg. at q=σ(p_0).
Recall from Definition <ref> that Λ(q) has a pole at q=p_+ of order at most m-1 if p_+ is an m-th order pole of ω_0,1. Then, since there is ω_0,1 in the denominator of (<ref>), one notices that (<ref>) becomes regular at p_0=p_± after integration.
We now proceed by induction in χ=2g-2+n≥-1. Since Σ=ℙ^1, the integrand of the refined topological recursion formula is a meromorphic differential in p. Thus, by using the property that the sum of all residues of a meromorphic differential on a compact Riemann surface is zero, one can rewrite the recursion formula as
ω_g,n+2(p_0,q,J) =1/2π i∫_p∈ C_+η^p(p_0)/2ω_0,1(p) Rec_g,n+2(p,q,J)
=-1/2ω_0,1(p_0)· Rec_g,n+2(p_0,q,J)+R_g,n+2(p_0,q,J),
where
R_g,n+2(p_0,q,J)= 1/2π i∫_p∈ C_+\{p_0}η^p(p_0)/2ω_0,1(p) Rec_g,n+1(p,q,J)
= d_q(η^q(p_0)/2ω_0,1(q)·ω_g,n+1(q,J))+1/2π i∫_p∈ C_+\{p_0,q}η^p(p_0)/2ω_0,1(p) Rec_g,n+1(p,q,J),
and C_+\{p_0} denotes the resulting contour after evaluating residue at p_0=p_± which gives the first term in the second line of (<ref>), and similarly for C_+\{p_0,q}. (<ref>) is indeed called the refined loop equation of type (g,n+2) <cit.>.
Then, by the induction ansatz, we have
1/2ω_0,1(p_0)·∫_q∈γΛ(q)· Rec_g,n+2(p_0,q,J)
=1/2ω_0,1(p_0)·∫_q∈γΛ(q)·2ω_0,2(p_0,q)·ω_g,n+1(p_0,J)+reg at p_0=p_±
=1/ω_0,1(p_0)·(δ_t^(1)*ω_0,1(p_0))·ω_g,n+1(p_0,J)+reg at p_0=p_±.
Thus, this contribution is non-singular at p_0=p_± thanks to the first statement of this lemma. On the other hand, the contribution from R_g,n+2 can be written as
∫_q∈γΛ(q)· R_g,n+2(p_0,q,J)=∫_q∈γΛ(q)· d_q(η^q(p_0)/2ω_0,1(q)·ω_g,n+1(q,J))+reg at p_0=p_±.
The first term vanishes no matter if t is of the 2nd kind or 3rd kind due to the pole structure of Λ(q). Therefore, we conclude that (<ref>) is regular at p_0=p_±.
§.§.§ Proof of Theorem <ref> for ω_g,n+1
We now prove Theorem <ref> for ω_g,n+1 by induction in χ=2g-2+n≥-2. For χ=-2, i.e., (g,n)=(0,0), it holds because we only consider parameters associated with generalised cycles (c.f. <cit.>). For χ=-1, the theorem also holds because it is shown for δ_t^(2)*ω_0,2(p_0,p_1) in <cit.>, and also because we assume that a refined spectral curve 𝒮_μ(t) satisfies the refined deformation condition. Our approach is similar to the technique shown in <cit.> to some extent.
Let us assume that the variational formula holds up to χ=k for some k≥-1, and we consider the case for (g,n) with χ=2g-2+n=k+1. Then, by applying the variational operator to the recursion formula in the form of the first line of (<ref>), Lemma <ref> implies
δ_t^(n+1)*ω_g,n+1(p_0,J)
=1/2π i∫_p∈ C_+η^p(p_0)/2ω_0,1(p)·δ_t^(n+1)* Rec_g,n+1(p,J)
+1/2π i∫_p∈ C_+η^p(p_0)/2ω_0,1(p)·2δ_ϵ^(1)*ω_0,1(p)·(1/2π i∫_q∈ C_+η^q(p)/2ω_0,1(q) Rec_g,n+1(q,J))
=1/2π i∫_p∈ C_+∫_q∈γΛ(q)·η_^p(p_0)/2ω_0,1(p) Rec_g,n+2(p,q,J),
where at the second equality we used the induction ansatz on δ_t^(n+1)* Rec_g,n+1(p,J) and also we applied the recursion formula in the third line to obtain ω_g,n+1(p,J).
Let us simplify (<ref>). Consider a decomposition C_+=C_0∪ C_γ such that C_γ contains p_+ inside but no other poles of the integrand. Then, C_0 and γ do not intersect and one can freely exchange the order of integration. In particular, one obtains:
δ_t^(n+1)*ω_g,n+1(p_0,J)-∫_q∈γΛ(q)·ω_g,n+2(p_0,q,J)=ρ_g,n+1(p_0,J),
where
ρ_g,n+1(p_0,J):=(Res_p=p_+∫_q∈γ-∫_q∈γRes_p=q)Λ(q)·η^p(p_0)/2ω_0,1(p) Rec_g,n+2(p,q,J).
Note that the first term in (<ref>) is the remnant contribution from C_γ whereas the second term is the counter effect of applying the refined recursion formula (<ref>) to obtain ·ω_g,n+2(p,q,J) on the left-hand side of (<ref>). As shown in (<ref>) in Lemma <ref>, the integrand of the first term in (<ref>) as a differential in p becomes regular at p=p_±, hence it vanishes. Furthermore, since ω_0,2(p,σ(q)) is the only term that has a pole at p=q in the integrand in (<ref>), the second term can be written as
∫_q∈γRes_p=qΛ(q)·η^p(p_0)/2ω_0,1(p) Rec_g,n+2(p,q,J)=∫_q∈γΛ(q)· d_q(η^q(p_0)/2ω_0,1(q)ω_g,n+1(q,J)).
This always vanishes for any generalised cycle due to the pole order of Λ(q) at q=p_± (see Definition <ref>). This completes the proof for ω_g,n+1.
§.§ Proof of Theorem <ref>: for F_g
Notice that the above proof was based on the pole structure of the refined topological recursion formula, or equivalently, refined loop equations. Since F_g does not appear in the recursion formula, we need a different approach to prove it for F_g.[Strictly speaking, the original proof in <cit.> is based on the rooted-graph interpretation of the Eynard-Orantin recursion formula which works only for ω_g,n+1, but not for F_g. The statement itself still stands for F_g too, as one can easily see in (<ref>) whose computation is valid beyond hyperelliptic curves. Alternatively, one can simply introduce a non-rooted graphical interpretation for the defining equation of F_g and properly make sense of the action of the variation, which is perhaps just omitted in <cit.>.]
For g>1, we directly take the derivative of the definition of F_g which gives
∂ F_g/∂ t= 1/2-2g1/2π i∮_p∈ C^𝔭_-((δ_t^(1)*ϕ(p))·ω_g,1(p)+ϕ(p)·(δ_t^(1)*ω_g,1(p)))
= 1/2-2g1/2π i∮_p∈ C^𝔭_-∫_q∈γΛ(q)·((∫^pω_0,2(q·))·ω_g,1(p)+ϕ(p)·ω_g,2(p,q))
where we used Lemma <ref>, and we used the variational formula for δ_t^(1)*ω_g,1 at the second equality. Then, since C_-^𝔭 does not contain any point in 𝒫^∞, we can exchange the order of integration with respect to p and q in (<ref>). After some manipulation by using the dilaton equation (<ref>), we find
∂ F_g/∂ t-∫_q∈γΛ(q)·ω_g,1(q)=1/2-2g∫_q∈γΛ(q)·Res_p=σ(q)ϕ(p)·ω_g,2(p,q),
where the right-hand side is the counter effect of applying the dilaton equation, similar to (<ref>). Therefore, what we have to show is that the right-hand side of (<ref>) vanishes. This is straightforward when 𝒬=0 because ω_g,n+1(p_0,J)|_𝒬=0 have no poles at p_0=σ(p_i). However, since the pole structure is different in the refined setting, the proof involves more careful considerations.
§.§.§ Proof for the 2nd kind
We first consider the case where t is a 2nd kind time (Definition <ref>). That is, for m≥2, we assume that ω_0,1 has a pole at p_± of order m, Λ(q) is meromorphic at q=p_± of order l where l∈{1,..,m-1}, and γ is a small contour encircling p_± in the prescribed orientation. Therefore, the integral simply reduces to taking residue at q=p_±, and as a consequence, it is sufficient to check the order of the zero of _p=σ(q)ϕ(p)·ω_g,2(p,q). This is a clear contrast from the 3rd kind cases at which one has to consider open-contour integrals.
Our task is to show the following property which immediately implies the variational formula for the 2nd kind:
Let us define a multidifferential I_g,n+1 as follows:
I_g,n+1(q,J):=Res_p=σ(q)ϕ(p)·ω_g,n+2(q,J,p).
Then, we have I_0,1=ω_0,1, I_0,2=0, I_1/2,1=0, and for 2g-2+n≥0, it can be written as follows:
I_g,n+1(q,J)=-1/2ω_g,n+1(q,J)-1/2ω_g,n+1(σ(q),J)-𝒬· d_qI_g-1/2,n+1(q,J)/2ω_0,1(q)+Ĩ_g,n+1(q,J),
where Δ_qĨ_g,n+1(q,J) has at least an m-th zero at q=p_±.
It is trivial to see that I_0,1=ω_0,1 and I_0,2=0. As discussed in (<ref>), the pole structure of ω_1/2,2(p,p_0) at p=σ(p_0) also immediately implies that I_1/2,1=0. For 2g-2+n≥0, we proceed by induction and consider refined loop equations (<ref>) for ω_g,n+2(q,J,p) by treating q as the first variable. Let us only give a few useful techniques in order to avoid tedious computational arguments.
As shown in (<ref>) (see also <cit.>), the singular term of R_g,n+2(q,J,p) at p=σ(q) is written as
R_g,n+2(q,J,p)=d_p(η^p(q)/2ω_01,(p)·ω_g,n+1(p,J))+ reg. at p=σ(q).
Thus, we have
Res_p=σ(q)ϕ(p)· R_g,n+2(q,J,p)=-1/2ω_g,n+1(σ(q),J).
Notice that the above term is the only contribution from R_g,n+2(q,J,p) to I_g,n+1, which is the second term in (<ref>). Therefore, the other terms in (<ref>) are all coming from Rec_g,n+2(q,J,p) in (<ref>).
The first term in (<ref>) is the contribution of ω_0,2(q,p) in Rec_g,n+2(q,J,p), more explicitly,
Res_p=σ(q)ϕ(p)·(-1/2ω_0,1(q)(2ω_0,2(q,p)+dx(q)dx(p)/(x(q)-x(p))^2)·ω_g,n+1(q,J))=-1/2ω_g,n+1(q,J).
Next, terms involving ω_1/2,1 in Rec_g,n+2(q,J,p) give
Res_p=σ(q)ϕ(p)·(-1/2ω_0,1(q)(2ω_1/2,1(q)·ω_g-1/2,n+2(q,J,p)+𝒬· dx(q)· d_qω_g-1/2,n+2(q,J,p)/dx(q)))
=-Δω_1/2,1(q)/2ω_0,1(q)· I_g-1/2,n+1(q,J)-𝒬· d_qI_g-1/2,n+1(q,J)/2ω_0,1(q).
The last term in (<ref>) coincides with the third term in (<ref>). Note that the first term in (<ref>) only has an (m-1)-order zero at q=p_± due to the presence of ω_1/2,1(q), but
Δω_1/2,1(q)/2ω_0,1(q)·Δ_q I_g-1/2,n+1(q,J)
has a higher order zero thanks to the induction ansatz. Then, one can easily see that all other terms have the prescribed zero behaviour thanks to the ω_0,1(q) in the denominator in the refined loop equation (<ref>).
§.§.§ Proof for the 3rd kind
We will show an analogous proposition to Proposition <ref> but in a slightly different form. First recall that ω_g,n+1(p_0,J) for 2g-2+n≥0 has no residue with respect to p_0. Thus, the following residue makes sense:
I_g,n+1^*(q,J):=(Res_p=p_++Res_p=p_-)ω_0,1(p)·∫^p_σ(p)ω_g,n+2(q,J,·),
where the integral is taken with respect to the last variable.
I^*_0,1(q)/ω_0,1(q) and I_g,n+1(q,J) for 2g-2+n≥-1 are regular at q=p_±.
For I^*_0,1(q), we have
I^*_0,1(q)=(Res_p=p_++Res_p=p_-)ω_0,1(p)·η^p(q).
Thus, I^*_0,1 picks up the singular part of ω_0,1 at q=p_+ and q=p_- (c.f. <cit.>). Thus, it becomes regular after dividing by ω_0,1(q).
For I_g,n+1(q,J) for 2g-2+n≥-1, since the proposition only concerns a local behaviour at q=p_±, potentially singular terms may appear only from the pole of ω_g,n+2(q,J,p) at p=σ(q) and we only focus on these poles similar to above discussions. Then, for the rest of the proof we apply the same technique as the proof of Lemma <ref> and Proposition <ref>. That is, we treat the contributions from ω_0,2 and ω_1/2,1 differently, and check the singular behaviour at q=p_± by induction. Since arguments will be almost parallel to the one given in Lemma <ref> and Proposition <ref>, we omit it.
We now prove the variational formula for the 3rd kind. Lemma <ref> implies that I^*_g,n+1(q,J) as a differential in q has no residue everywhere on Σ. This is because ω_g,n+2(q,J,p) has no residue with respect to q (Theorem <ref>), and thus residues can only potentially appear at q=p_± after taking the integral (<ref>) which we have just shown that this is not the case. Thus, we can consider integration once more:
I^**_g,n(J):= (Res_q=p_++Res_q=p_-)ω_0,1(q)·∫^q_σ(q)I^*_g,n+1(·,J)
= (Res_q=p_++Res_q=p_-)(Res_p=p_++Res_p=p_-)ω_0,1(q)·ω_0,1(p)·∫^q_σ(q)∫^p_σ(p)ω_g,n+2(·,·,J).
Since ω_g,n+2 is symmetric multidifferential, one can simply relabel p↔ q in (<ref>). On the other hand, as discussed in <cit.>, exchanging the order of residues would give
(Res_p=p_++Res_p=p_-)(Res_q=p_++Res_q=p_-)=(Res_q=p_++Res_q=p_-)(Res_p=p_++Res_p=p_-+Res_p=q+Res_p=σ(q)).
Therefore, we find
(Res_q=p_++Res_q=p_-)(Res_p=q+Res_p=σ(q))ω_0,1(q)·ω_0,1(p)·∫^q_σ(q)∫^p_σ(p)ω_g,n+2(·,·,J)=0.
Now notice that
(Res_p=q+Res_p=σ(q))ω_0,1(p)·∫^p_σ(p)ω_g,n+2(·,q,J)= 2Res_p=σ(q)ϕ(p)·ω_g,n+2(p,q,J)
= 2I_g,n+1(q,J).
Thus, with the help of Proposition <ref>, the left-hand side of (<ref>) can be written as
L.H.S. of (<ref>)= 2(Res_q=p_++Res_q=p_-)ω_0,1(q)·∫^q_σ(q)I_g,n+1(·,J)
= 4t∫^p_+_p_-I_g,n+1(·,J),
where t is the 3rd kind time at p_±. Note that there is no contribution from higher order poles of ω_0,1(q) thanks to Proposition <ref>. Combining (<ref>) and (<ref>), we conclude that the right-hand side of (<ref>) vanishes for the 3rd kind as well.
This completes the proof of Theorem <ref> for both ω_g,n+1 and F_g.
§.§ Explicit computations
We will provide explicit computational results for Proposition <ref>, <ref>, and <ref>.
§.§.§ Proof of Proposition <ref>
We will give computations for the Weber, Whittaker, and Bessel curves as evidence of Proposition <ref>. The statement for other curves can be checked similarly. Throughout Section <ref>, we let z be a coordinate on Σ=ℙ^1, and we parameterise y(z),x(z) such that ℛ={1,-1} and 𝒫={0,∞}, some of which are different from the rational expressions in <cit.> but are related by an appropriate Möbius transformation. As shown in <cit.>, the corresponding generalised cycle (Λ,γ) is given such that Λ=1 and γ is a contour from 0 to ∞. We denote by t the corresponding 3rd kind time, which is none other than λ in <cit.>. Furthermore, we choose 𝒫_+={∞} to define a refined spectral curve and denote by μ the associated complex parameter.
Weber The underlying curve is given by
y^2-x^2/4+t=0, y(z)=√(t)/2(z-1/z), x(z)=√(t)(z+1/z).
Then, the variational operator and ω_1/2,1 are respectively given as
δ_t^(1)=∂/∂ t - z(z^2+1)/(2(z-1)(z+1)t) ∂/∂ z
ω_1/2,1(z_0)=𝒬/2(-(z_0^2+1)/((z_0-1)z_0(z_0+1))-μ/z_0)dz_0.
ω_1/2,2 itself is lengthy to write down here, but we have
δ_t^(1)*ω_1/2,1(z_0)=∫_0^∞ω_1/2,2(·,z_0)= -𝒬 z_0(-μ+μ z_0^2+2z_0^2+2)/((z_0-1)^3(z_0+1)^3 t) dz_0.
Whittaker The underlying curve is given by
xy^2-x/4+t=0, y(z)=z-1/2 (z+1), x(z)=t (z+1)^2/z.
Then, the variational operator and ω_1/2,1 are respectively given as
δ_t^(1)=∂/∂ t - z(z+1)/(t(z-1)) ∂/∂ z
ω_1/2,1(z_0)=𝒬/2(1/(1-z_0^2)-μ/z_0)dz_0.
ω_1/2,2 itself is lengthy to write down here, but we have
δ_t^(1)*ω_1/2,1(z_0)=∫_0^∞ω_1/2,2(·,z_0)= -𝒬(-μ+μ z_0+z_0+1)/(t(z_0-1)^3) dz_0.
Bessel The underlying curve is given by
x^2y^2-x/4-t^2=0, y(z)=-z^2-1/16 t z, x(z)=-16 t^2 z/(z+1)^2.
Then, the variational operator and ω_1/2,1 are respectively given as
δ_t^(1)=∂/∂ t + 2z(z+1)/(t(z-1)) ∂/∂ z
ω_1/2,1(z_0)=𝒬/2((z_0^2+1)/(z_0(1-z_0^2))-μ/z_0)dz_0.
ω_1/2,2 itself is lengthy to write down here, but we have
δ_t^(1)*ω_1/2,1(z_0)=∫_0^∞ω_1/2,2(·,z_0)= -2𝒬(-μ+μ z_0+z_0+1)/(t(z_0-1)^3) dz_0.
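For illustration, all three parameterisations can be checked against their defining equations symbolically (sympy sketch):

import sympy as sp

z, t = sp.symbols('z t')
st = sp.sqrt(t)

x, y = st*(z + 1/z), st/2*(z - 1/z)                   # Weber
print(sp.simplify(y**2 - x**2/4 + t))                 # 0

x, y = t*(z + 1)**2/z, (z - 1)/(2*(z + 1))            # Whittaker
print(sp.simplify(x*y**2 - x/4 + t))                  # 0

x, y = -16*t**2*z/(z + 1)**2, -(z**2 - 1)/(16*t*z)    # Bessel
print(sp.simplify(x**2*y**2 - x/4 - t**2))            # 0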
§.§.§ Proof of Proposition <ref>
Recall that the parametrisation of the curve is given in (<ref>) as
x(z)=z^2-2q_0, y(z)=2z(z^2-3q_0)=2z(z^2-q_z^2), q_0=√(-t/6), q_z:=√(3q_0).
Then, we can explicitly construct the variational operator and ω_1/2,1 as
δ_t^(1)=∂/∂ t - 1/(2z√(6t)) ∂/∂ z
ω_1/2,1(z_0)=𝒬(-1/(2z_0)+(μ-1)/(2(z_0-q_z))-(μ+1)/(2(q_z+z_0)))dz_0.
ω_1/2,2 itself is complicated to write down here, but we have
δ_t^(1)*ω_1/2,1(z_0)= 𝒬(-2(μ+2)z_0 q_z^2+2(2μ+1)z_0^2 q_z+2q_z^3+(2μ+1)z_0^3/8z_0^3 q_z^3(q_z+z_0)^2
+(μ-1)(q_z^2+z_0^2)/8q_z^3(q_z^2-z_0^2)^2)dz_0,
Res_z=∞Λ_t(z)·ω_1/2,2(z,z_0)= 𝒬(-2(μ+2)z_0 q_z^2+2(2μ+1)z_0^2 q_z+2q_z^3+(2μ+1)z_0^3/8z_0^3 q_z^3(q_z+z_0)^2)dz_0.
Therefore, they become the same if and only if μ=1. It is worth noting that the second term in (<ref>) is singular at z_0=-q_z, whereas ω_1/2,2 will never have a pole at ± q_z due to Theorem <ref> and Lemma <ref>. This is a clear contrast with hypergeometric type curves, where ω_1/2,1 has a residue at σ(𝒫_+) but its variation δ_t*ω_1/2,1 is regular, as shown in Section <ref>. It is interesting to investigate whether this phenomenon arises from the difference between 𝒫^0 and 𝒫^∞, or the difference between the 2nd kind and the 3rd kind.
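The residue data quoted in (<ref>) can be confirmed from the explicit expression of ω_1/2,1 above (sympy sketch, illustration only; the plain symbol Q below stands for the refinement parameter 𝒬):

import sympy as sp

z, mu, qz, Q = sp.symbols('z mu q_z Q')
w = Q*(-1/(2*z) + (mu - 1)/(2*(z - qz)) - (mu + 1)/(2*(qz + z)))  # omega_{1/2,1}/dz
print(sp.residue(w, z, 0))    # -Q/2
print(sp.residue(w, z, qz))   #  Q*(mu - 1)/2
print(sp.residue(w, z, -qz))  # -Q*(mu + 1)/2
# on P^1 the residues sum to zero, so the residue at infinity is 3*Q/2
print(sp.simplify(-sum(sp.residue(w, z, p) for p in (0, qz, -qz))))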
§.§.§ Proof of Proposition <ref>
Without explicit computation, it is not hard to see from the definition of R_1,1^𝒬-top(z) that it has a triple pole at z=± q_z whose coefficient is proportional to (μ-1)(μ-3). This can be checked by looking at the contribution of ω_1/2,1 whose pole structure is given in (<ref>). What is less straightforward without explicit computation is to show that the subleading order coefficient is still proportional to (μ-1) but not to (μ-3) anymore. By explicit computations, we have
R_1,1^𝒬-top(z)= ((1-μ)(6μ z^2q_z^2-18z^2q_z^2+9μ q_z^4-11q_z^4-7μ z^4+5z^4)/128q_z^4(q_z-z)^3(q_z+z)^3
-(15μ^2+7)/128q_z^4(q_z-z)(q_z+z))dz.
This clearly shows that the 𝒬-top quantisation condition is satisfied only if μ=1. Once we set μ=1, Rec_g,1^𝒬-top(z) is regular at z=q_z for all g>1, and so is R_g,1^𝒬-top(z), because ω_1/2,1(z) is regular at z=q_z. This completes the proof of Proposition <ref>.
§.§ Proof of Theorem <ref>
We will consider the contribution of F_g for each g≥0.
§.§.§ F_0
It is already shown in <cit.> that
F_0=-48/5q_0^5,
from which we find
∂ F_0/∂ t=4q_0^3.
This is consistent with the ϵ^0_1 term in Theorem <ref>, and also consistent with the unrefined quantum curve in <cit.>.
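(For illustration, this derivative is a one-line symbolic check:)

import sympy as sp

t = sp.symbols('t')
q0 = sp.sqrt(-t/6)
F0 = -sp.Rational(48, 5)*q0**5
print(sp.simplify(sp.diff(F0, t) - 4*q0**3))   # 0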
§.§.§ F_1/2
We take the definition of F_1/2 as in Theorem <ref>, which gives
∂^2/∂ t^2F_1/2^𝒬-top:=Res_z_0=0Res_z_1=0 Λ_t(z_0)·Λ_t(z_1)·ϖ_1/2,2(z_0,z_1)=1/4(-3/2)^1/4t^-3/4.
Then from the boundary condition set in Theorem <ref>, one finds that
F_1/2^𝒬-top=4/5(-3/2)^1/4t^5/4, Q_1^𝒬-top(z_0)|_μ=1=2∂ F_1/2^𝒬-top/∂ t.
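(For illustration, one can verify symbolically that this F_1/2^𝒬-top has second derivative 1/4(-3/2)^1/4 t^-3/4 and that 2∂F_1/2^𝒬-top/∂t = 2q_z, matching Q_1^𝒬-top|_μ=1:)

import sympy as sp

t = sp.symbols('t', positive=True)
c = (-sp.Rational(3, 2))**sp.Rational(1, 4)      # (-3/2)^(1/4)
F = sp.Rational(4, 5)*c*t**sp.Rational(5, 4)
print(sp.simplify(sp.diff(F, t, 2) - c/4*t**sp.Rational(-3, 4)))  # 0
print(sp.simplify(2*sp.diff(F, t) - 2*c*t**sp.Rational(1, 4)))    # 0, i.e. 2 dF/dt = 2 q_z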
One may wonder why we do not consider a solution F̃_1/2^𝒬-top by respecting the variational formula for ω_1/2,1 as:
∂F̃_1/2/∂ t:=Res_z=∞Λ_t(z)·ϖ_1/2,1(z)=μ· q_z,
with unfixed μ. In fact, the condition F̃_1/2(0)=0 implies that
F̃_1/2^𝒬-top=4/5(-3/2)^1/4t^5/4·μ, Q_1^𝒬-top(z_0)=2∂F̃_1/2/∂ t,
for any value of μ. One of the issues of taking this definition is that it does not work for the 3rd kind times, because ω_1/2,1 has a pole at the end points of the associated path γ. Another problem is that if we take (<ref>) as the defining equation for a general spectral curve 𝒮_κ,μ( t), then one cannot show whether
∂^2 F_1/2^𝒬-top/∂ t_k∂ t_l=∮_p_0∈γ_lδ_t_l^(1)*(Λ_k(p_0)·ϖ_1/2,1(p_0))
is symmetric in k↔ l or not.
The above observation may motivate one to propose a definition of F_1/2 as:
∀ k,l∈{1,..,| t|} ∂^2 F_1/2/∂ t_k∂ t_l:=∫_p∈γ_k∫_q∈γ_lΛ_k(p)·Λ_l(q)·ω_1/2,2(p,q).
for a general refined spectral curve 𝒮_κ,μ( t) satisfying the refined deformation condition. The above definition makes sense; that is, one can show that it is symmetric under k↔ l by utilising (<ref>) and the anti-invariance of generalised cycles under the involution σ. Indeed, this definition works for all hypergeometric curves as well as this example. Therefore, we propose that F_1/2 is defined as (<ref>), which is defined uniquely up to a constant and linear dependence in t. Similarly, up to a constant, we propose that F_1 for a refined spectral curve satisfying the refined deformation condition is defined by
∀ k∈{1,..,| t|} ∂ F_1/∂ t_k:=∫_p∈γ_kΛ_k(p)·ω_1,1(p).
§.§.§ Proof of Theorem <ref>.
We will show that
∀ g≥1 ∂ F_g/∂ t=Res_z=∞Λ_t(z)·ω_g,1(z)=1/2Q_2g≥2^𝒬-top(z_0),
where the first equality is merely the variational formula. The second equality means that Q_2g≥2^𝒬-top is indeed a constant, which we will show below.
The proof is similar to that in <cit.>. For a refined spectral curve 𝒮_μ=1(t) satisfying the 𝒬-top quantisation condition, ω_1/2,1 and Rec_g,1^𝒬-top(z) are regular at z=q_z, hence (<ref>) implies that there should exist a function R_g(t) such that
R_g,1^𝒬-top(z_0)=R_g(t)·dz_0/(z_0^2-q_z^2).
Thus, by using the explicit rational expression of x(z) and y(z) given in (<ref>), we have
Q_2g^𝒬-top(z)=2R_g(t).
Finally, since ω_0,1(z_0) has a 5-th order pole at z_0=∞, the -top loop equation (the -top degree part of the refined loop equation (<ref>)) implies that
Res_z=∞Λ_t(z)·ω_g,1(z)=Res_z=∞Λ_t(z)· R_g,1(z)=R_g(t).
Therefore, (<ref>) holds, and this completes the proof.
|
http://arxiv.org/abs/2307.02874v1
|
20230706091947
|
Several Topics on Transverse Momentum-Dependent Fragmentation Functions
|
[
"Kai-Bao Chen",
"Tianbo Liu",
"Yu-Kun Song",
"Shu-Yi Wei"
] |
hep-ph
|
[
"hep-ph"
] |
§ INTRODUCTION
Quantum chromodynamics (QCD) <cit.> is known as the fundamental theory of strong interaction in the framework of Yang-Mills gauge field theory <cit.>. As a key property of QCD, the color confinement prohibits direct detection of quarks and gluons, the fundamental degrees of freedom, with any modern detectors. The emergence of color neutral hadrons from colored quarks and gluons is still an unresolved problem and has received particular interest in recent years <cit.>. With the progress of QCD into the precision era, unraveling the hadronization mechanism in the high-energy scattering processes has become one of the most active frontiers in nuclear and particle physics.
Due to the nonperturbative nature of QCD, it is still challenging to directly calculate the hadronization process from first principles. Similar to the parton distribution functions (PDFs) <cit.>, which were originally defined as the probability density of finding a parton inside the parent hadron, the concept of fragmentation functions (FFs) was introduced by Berman, Bjorken, and Kogut <cit.> right after the parton model to describe the emergence of a system of the hadron from a high-energy parton isolated in the phase space. An alternative name, the parton decay function, has also frequently been used in early literature.
The modern concept of FFs in QCD was first introduced to describe the inclusive production of a desired hadron in the e^+e^- annihilation <cit.>, which is still the cleanest reaction currently available to investigate the fragmentation process. Within the QCD-improved parton model, the FF has its foundation in the factorization theorem <cit.>, in which the differential cross section is approximated as a convolution of short-distance hard scattering and long-distance matrix elements with corrections formally suppressed by inverse powers of a hard scale, e.g., the center-of-mass (c.m.) energy Q=√(s) in the e^+e^- annihilation. The predictive power of this theoretical framework relies on the control of the hard probe, which can be achieved by our ability to calculate the partonic cross section order by order in the perturbation theory, and the universality of the long-distance functions, such as the FFs, to be tested in multiple high-energy scattering processes.
For a single-scale process, e.g., e^+e^- → h X, where h represents the identified hadron in the final state and X denotes the undetected particles, the process is not sensitive to the confined motion of quarks and gluons in the hadronization process, and one can apply the colinear factorization with the emergence of the detected hadron described by a colinear FF D_f→ h(z), where the subscript f stands for the parton flavor and z is the longitudinal momentum fraction carried by the hadron h with respect to the fragmenting parton.
If two hadrons are identified in a process, e.g., e^+e^- → h_A h_B X, where h_A and h_B are detected hadrons in the final state, the reaction becomes a double-scale problem with one scale Q given by the hard probe and the other scale provided by the transverse momentum imbalance, |p_A⊥ + p_B⊥|. When the second scale is much smaller than Q, i.e., the two hadrons are nearly back to back, one needs to use the transverse-momentum-dependent (TMD) factorization. The emergence of each of the hadrons is described by a TMD FF D_f→ h(z,k_⊥), where k_⊥ is the transverse momentum of the fragmenting parton with respect to the observed hadron <cit.>. When the two scales are compatible, the reaction effectively becomes a single-scale process, and one can again use the colinear factorization. The matching between the two regions has been developed. The TMD FFs defined in the e^+e^- annihilation also play an important role in the study of nucleon three-dimensional structures via the semi-inclusive deep inelastic scattering (SIDIS) process <cit.>. Instead of identifying two hadrons in a reaction, one can also access TMD FFs in the single-hadron production process by reconstructing the thrust axis, which provides the sensitivity to the transverse momentum of the observed hadron, as proposed in recent years <cit.>.
Taking the parton spin degree of freedom into account, one can define polarized or spin-dependent TMD FFs. They essentially reflect the correlation between parton transverse momentum and its spin during the hadronization process and result in rich phenomena in high-energy scattering processes. For example, the Collins fragmentation function H_1^⊥(z,k_⊥) <cit.>, naively interpreted as the probability density of a transversely polarized quark fragmenting into an unpolarized hadron, can lead to a single spin asymmetry (SSA) in the SIDIS process with a transversely polarized target <cit.>. This asymmetry is a key observable for the determination of the quark transversity distribution, the net density of a transversely polarized quark in a transversely polarized nucleon. It also leads to azimuthal asymmetries in e^+e^- annihilation as measured by Belle, BaBar, and BESIII. The progress of experimental techniques to determine the spin state of produced hyperons, such as Λ and Ω, and vector mesons, such as ρ and K^*, offers us the opportunity to extract additional information from FFs. This is far beyond a trivial extension since the spin has been proven to be a powerful quantity to test theories and models, especially in hadron physics. The recent measurement of the spontaneous polarization of Λ from unpolarized e^+e^- annihilation is such an instance <cit.>. This observation can be explained by a naively time-reversal odd (T-odd) TMD FF D_1T^⊥(z,k_⊥) and has received interest from various groups <cit.>.
In addition to the leading-twist FFs, which usually have probability interpretations, the high-twist FFs have been found to be much more important than expected in recent years for understanding precise experimental data <cit.>. Although the colinear factorization at subleading power was demonstrated some time ago, the TMD factorization beyond the leading power is still under exploration, and several approaches have recently been proposed <cit.>. Although high-twist contributions are formally power suppressed, their contributions to the cross section might not be negligible and may have significant effects in certain kinematic regions or observables. The inclusion of high-twist FFs will also modify the evolution equation and consequently affect the leading-twist FFs. Overall, many efforts, both theoretical and experimental, are still required to understand the hadronization process and the upcoming data from future electron-ion colliders.
The remainder of this review is organized as follows. In Section <ref>, we use e^+e^- → h_A h_B X as an example to present the flow of deriving the TMD factorization and the QCD evolution equation of TMD FFs. In Section <ref>, we present the FFs up to the twist-4 level for the production of spin-0, spin-1/2, and spin-1 hadrons. In Section <ref>, we summarize the experimental measurements towards understanding the spin-dependent FFs. In Section <ref>, we briefly lay out some model calculations. A summary is given in Section <ref>.
§ FACTORIZATION AND EVOLUTION
The modern concept of FFs is established on the QCD factorization theorems, which can be derived either by calculating traditional Feynman diagrams in perturbative field theory <cit.> or within effective theories <cit.>. In the former approach, one first identifies the collection of Feynman diagrams that gives the leading contribution through the Libby–Sterman analysis <cit.>. In this method, the leading contribution is represented by the reduced diagrams.
Taking the e^+e^-→ h_Ah_BX process with h_A,h_B traveling along almost back-to-back directions as an example <cit.>, the leading regions are presented in Figure <ref>. The cross section is the product of various ingredients, such as the hard part H, the soft part S, and the colinear parts J_A, J_B. We work in light-cone coordinates, so that a four-momentum p can be written as follows: p^μ=(p^+,p^-,p_⊥) with p^±=1/√(2)(p^0± p^3). In the kinematic region where TMD factorization applies, the transverse momentum is considerably small compared with that along the longitudinal direction. Therefore, the momenta of the almost back-to-back hadrons A and B scale as p_A∼ Q(1,λ,√(λ)) and p_B∼ Q(λ,1,√(λ)), where Q is the large momentum scale and λ≪ 1 is a small parameter. The hard part H computes the cross section of the interaction among hard partons whose momenta scale as Q(1,1,1) in perturbative field theory. The contribution from colinear partons whose momenta are colinear with the final state hadrons A and B is evaluated in the colinear function J_A/B. This process results in the gauge-invariant bare FFs. The soft part calculates the contribution from soft gluons whose momenta typically take the form of Q(λ,λ,λ). They will eventually be absorbed into the definition of the TMD FFs and convert the bare FFs into the renormalized ones.
The interactions between different parts can be eliminated via applying appropriate kinematic approximations and the Ward identity. Finally, the cross section is given by a convolution of those well-separated parts, and we arrive at the factorization theorem of this process.
Depending on the physics of interest, we may derive either colinear factorization or transverse-momentum-dependent (TMD) factorization theorems. For the differential cross section of e^+e^-→ h_Ah_BX as a function of the relative transverse momentum between h_A and h_B, the TMD factorization theorem applies.
In the single-photon-exchange approximation, the differential cross section of this process can be written as the product of a leptonic tensor and a hadronic tensor. It reads as follows <cit.>:
dσ/dy dz_A dz_B d^2 P_A⊥ = 2π N_c α^2/Q^4 L_μν W^μν,
where α is the electromagnetic coupling constant, N_c=3 is the color factor, Q is the center-of-mass energy of the colliding leptons, y=(1+cosθ)/2 with θ the angle between the incoming electron and the outgoing hadron h_A, z_A and z_B are the light-cone momentum fractions of h_A and h_B, and P_A⊥ is the transverse momentum of h_A with respect to the direction of the h_B momentum. For unpolarized lepton beams, the leptonic tensor L_μν is given by the following:
L_μν = l_1μ l_2ν + l_1ν l_2μ - g_μν l_1 · l_2,
with l_1 and l_2 being the momenta of colliding leptons. The hadronic tensor W_μν contains nonperturbative quantities and is laid out as follows:
W^μν
= ∑_f |H_f(Q,μ)^2|^μν∫ d^2 k_A⊥ d^2k_B⊥δ^(2) ( k_A⊥+k_B⊥-q_⊥)
×[D_1q^h_A (z_A,p_A⊥;μ,ζ_A)
D_1q̅^h_B(z_B,p_B⊥;μ,ζ_B)+…],
where q_⊥=-P_A⊥/z_A, and H_f(Q,μ) is the hard scattering factor that can be evaluated in perturbative QCD. Here, D_1q^h_A (z_A,p_A⊥;μ,ζ_A) is the TMD FF with p_A⊥ the transverse momentum of the hadron with respect to the fragmenting quark direction, μ is a renormalization scale, and ζ_A is a variable to regularize the rapidity divergence. Notice that k_i,⊥ is the relative transverse momentum of the fragmenting parton with respect to the hadron momentum. Therefore, p_i,⊥ is given by p_i,⊥ = -z_i k_i,⊥. Please also notice the difference between P_A⊥ and p_A⊥. The three-dot symbol stands for various spin-dependent terms which are not explicitly shown.
It is more convenient to perform the TMD evolution in the coordinate space than in the momentum space. Therefore, we need the Fourier transform,
D_1q^h (z,p_⊥;μ,ζ)=1/z^2∫d^2b_T/(2π)^2 e^i 1/zb_T·p_⊥D̃_1q^h (z,b_T;μ,ζ),
which relates the momentum-space TMD FF to its coordinate-space counterpart. The hadronic tensor then becomes the following:
W^μν
= ∑_f |H_f(Q,μ)^2|^μν∫d^2b_T/(2π)^2 e^-iq_⊥·b_T[D̃_1q^h_A (z_A,b_T;μ,ζ_A)
D̃_1q̅^h_B(z_B,b_T;μ,ζ_B)+…].
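As a concrete illustration of this b_T ↔ p_⊥ relation (a toy numerical check with purely illustrative inputs, not taken from the text), one can verify that a Gaussian ansatz in b_T space transforms back into a Gaussian in p_⊥:

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

# Toy Gaussian b-space FF: Dtilde(z, b_T) = D1 * exp(-w * b_T^2 / 4)
z, w, D1 = 0.4, 0.25, 1.0   # illustrative values; <p_perp^2> = w z^2 below

def D_momentum(p):
    # Azimuthal integral of exp(i b.p/z) done analytically -> Bessel J0 kernel
    integrand = lambda b: b * j0(b * p / z) * D1 * np.exp(-w * b**2 / 4)
    val, _ = quad(integrand, 0.0, 50.0)
    return val / (2 * np.pi * z**2)

# Analytic result of the same transform: (1/(2 pi z^2)) * (2/w) * exp(-p^2/(w z^2))
for p in (0.1, 0.3, 0.6):
    exact = (2 / w) * np.exp(-p**2 / (w * z**2)) / (2 * np.pi * z**2)
    print(f"p = {p}: numeric {D_momentum(p):.4f} vs analytic {exact:.4f}")
```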
The TMD FF in the coordinate space is defined as the product of transition matrix elements between the vacuum and the hadronic final states.
Before presenting the final definition for the TMD FF in the coordinate space, we first show the unsubtracted version, which appears in the LO calculation. For the production of hadron A, it reads as follows:
D̃_1q^h_A, unsub (z,b_T;y_p_A-y_B)
=1/4N_c Tr_C Tr_D1/z∑_X ∫dx^-/2πe^ik^+x^-⟨ 0|γ^+ L(x/2;+∞,n_B)ψ_q (x/2)|h_A,X⟩⟨ h_A,X|ψ̅_q (-x/2) L(-x/2;+∞,n_B)^†|0⟩,
where the position vector x = (0,x^-, b_T) contains only minus and transverse components, y_p_A = 1/2ln2(p_A^+)^2/m_A^2 is the rapidity of hadron A, Tr_C is a trace in the color space, and Tr_D is a trace in the Dirac space. The direction of the Wilson lines in the FF of hadron A is specified by the direction of hadron B, which is denoted as n_B, and vice versa. Notice that the rapidity parameters y_A → +∞ and y_B → -∞ are introduced, so that n_A= (1, -e^-2y_A, 0_T) and n_B= (-e^2y_B, 1, 0_T) are slightly space-like. Please also notice the difference between y_p_A and y_A. The Wilson line starting from the position x is defined as follows:
L(x;+∞,n)_ab= P{e^-ig_0 ∫_0^+∞ dλ n· A_(0)^α (x+λ n)t^α}_ab.
with a and b being the color indices, and g_0 and A_(0)^α the bare coupling and the bare gluon field.
Taking the y_A → +∞ and y_B → -∞ limit and absorbing the soft factors into the unsubtracted TMD FF, we arrive at the final definition of the TMD FF:
D̃_1q^h_A(z,b_T;μ,ζ_A)= D̃_1q^h_A, unsub (z,b_T;y_p_A-(-∞))×√(S̃_(0)(b_T;+∞,y_n)/S̃_(0)(b_T;+∞,-∞)S̃_(0)(b_T;y_n, -∞))× Z_D Z_2,
where y_n is an arbitrary rapidity introduced to separate ζ_A ≡m_A^2/z_A^2 e^2(y_p_A - y_n) from ζ_B ≡m_B^2/z_B^2 e^2(y_n - y_p_B), and Z_D,Z_2 are renormalization factors. The bare soft factor S̃_(0) is defined as the expectation values of Wilson lines on the vacuum, reading as follows:
S̃_(0)(b_T;y_A,y_B)= 1/N_c⟨ 0| L(b_T/2;+∞, n_B)^†_ca L(b_T/2;+∞, n_A)_ad
× L(-b_T/2;+∞, n_B)_bc L(-b_T/2;+∞, n_A)^†_db |0⟩.
§.§ Evolution Equations for TMD FFs
To regularize the ultraviolet (UV) and rapidity divergences, the energy scale μ and √(ζ) are introduced. As a consequence, the TMD FFs differ at different energy scales. The evolution effects are important for phenomenological studies. The QCD evolution for TMD FFs with respect to ζ is controlled by the Collins–Soper (CS) equation <cit.>, which is given as follows:
∂lnD̃_1q^h(z,b_T;μ,ζ)/∂ln√(ζ)=K̃(b_T;μ),
with K̃(b_T;μ) being the CS evolution kernel. The scale dependence of the evolution kernel is governed by
dK̃(b_T;μ)/dlnμ =-γ_K(μ),
where γ_K(μ) is the anomalous dimension. It is given by γ_K (μ)= 2C_F/πα_s (μ) with C_F=4/3 being the color factor and α_s being the running coupling at the LO accuracy <cit.>.
The μ dependence of the TMD FF is then given by
dlnD̃_1q^h(z,b_T;μ,ζ)/∂lnμ =γ_D(μ; ζ/μ^2) ,
where γ_D is another anomalous dimension. At the LO accuracy <cit.>, it is given as follows: γ_D(μ, ζ/μ^2) = α_s (μ)C_F/π (3/2 - lnζ/μ^2).
The TMD FF defined by Equation (<ref>) is actually calculable in the colinear factorization approach in the small-b_T regime. However, in the large-b_T region, the discrepancy between these two approaches grows as powers of Λ_ QCD b_T. This region is usually referred to as the nonperturbative regime since a large coordinate corresponds to a small energy scale. The perturbative treatment of the QCD evolution in this region is no longer reliable. To have a consistent formula, the b_*-prescription is usually adopted in phenomenology. By introducing b_* = |b_T|/√(1+b_T^2/b_ max^2) and μ_b=2e^-γ_E/b_*, we can separate the perturbative part from the nonperturbative part in the QCD evolution. Here, γ_E is the Euler constant, and b_ max is an infrared cutoff which is properly chosen to guarantee that μ_b ≫Λ_ QCD. Employing the b_*-prescription, the QCD evolution is always performed in the realm of perturbative QCD. Therefore, this approach underestimates the contribution from the nonperturbative regime. This part of the contribution can be reintegrated into the final prescription by the introduction of a nonperturbative factor.
Ultimately, we arrive at <cit.>
D_1q^h (z, b_T; μ, ζ) = D_1q^h (z, b_T^*; μ_0=μ_b, ζ_0=μ_b^2)
×exp{ln√(ζ)/μ_bK̃(b_*;μ_b)+∫_μ_b^μdμ^'/μ^'γ_D(μ^'; ζ/μ'^2)}
×exp{-S_ np (z, |b_T|, ζ)},
where the last line is the nonperturbative function that restores the nonperturbative effect deliberately removed from the QCD evolution by the b_*-prescription. No theoretical approach can evaluate this nonperturbative function; it has to be extracted from experimental data <cit.>. Notice that <cit.> presents a different method to address the nonperturbative physics. Here, D_1q^h (z, b_T^*; μ_0=μ_b, ζ_0=μ_b^2) is the FF at the initial scale. In phenomenology, it is usually chosen to coincide with the colinear FF D_1q^h (z, μ_f) with the factorization scale specified by μ_f=μ_b.
Similar to the PDF case, QCD evolution tends to broaden the k_T distribution width at higher energy scales. Both unpolarized and spin-dependent FFs show such a behavior <cit.>.
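To make the evolution formula concrete, here is a minimal numerical sketch of the b_*-prescription and the perturbative Sudakov factor, using only the LO anomalous dimensions quoted above; all inputs (Λ_QCD, n_f, b_max) are illustrative choices, and K̃(b_*;μ_b)=0 is assumed at LO with this scale setting:

```python
import numpy as np
from scipy.integrate import quad

CF, Lambda_QCD, nf = 4.0/3.0, 0.25, 4        # illustrative inputs (GeV units)
b0 = (33 - 2*nf) / (12*np.pi)

def alpha_s(mu):                              # one-loop running coupling
    return 1.0 / (b0 * np.log(mu**2 / Lambda_QCD**2))

def b_star(bT, b_max=1.5):                    # b*-prescription, bT in GeV^-1
    return bT / np.sqrt(1.0 + bT**2 / b_max**2)

def mu_b(bT):
    return 2.0 * np.exp(-np.euler_gamma) / b_star(bT)

def gamma_D(mu, zeta):                        # LO anomalous dimension from the text
    return alpha_s(mu) * CF / np.pi * (1.5 - np.log(zeta / mu**2))

def evolution_factor(bT, Q):
    # K(b*; mu_b) = 0 at LO with mu_b = 2 e^{-gamma_E}/b*, so only the
    # integral of gamma_D from mu_b up to mu = Q (with zeta = Q^2) remains.
    integral, _ = quad(lambda mu: gamma_D(mu, Q**2) / mu, mu_b(bT), Q)
    return np.exp(integral)

for bT in (0.5, 1.0, 2.0):
    print(f"bT = {bT:3.1f} GeV^-1 -> Sudakov factor at Q = 10 GeV:",
          f"{evolution_factor(bT, 10.0):.3f}")
```

The nonperturbative factor exp{-S_np} would multiply this result and must be fitted to data.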
§.§ TMD Factorization at the Higher Twist
In a semi-inclusive process, we can normally identify two energy scales: the typical transverse momentum q_⊥ and the hardest energy scale Q. In the region of Q ≫ q_⊥≫Λ_ QCD, the TMD factorization framework at the leading twist usually works very well. When q_⊥∼ Q ≫Λ_ QCD, we should fall back to the colinear factorization. However, in between, there is still a large phase space where q_⊥ is smaller than Q but not much smaller. This is the kinematic region where both the TMD factorization and the colinear factorization can approximately apply. However, the prediction from the TMD factorization deviates from the experimental measurements when q_⊥/Q is not very small, calling for the inclusion of higher twist corrections. The higher twist corrections are also usually referred to as power corrections since they provide contributions in terms of (q_⊥/Q)^n. In addition, twist-3 contributions usually introduce new asymmetries that do not appear at the leading twist level. A comprehensive study of the higher twist contributions is thus vital in phenomenology. Contributions from higher twist TMD PDFs and FFs were studied some time ago <cit.>, but until recently few advances had been made in the systematic derivation of the TMD factorization formula at the higher twist. Various theoretical methods have now been applied to derive the TMD factorization scheme at the twist-3 level, such as the TMD operator expansion technique <cit.>, the soft-colinear effective theory approach <cit.>, factorization from the functional integral <cit.>, and a very recent work from <cit.>. The TMD factorization at the higher twist level is far from complete and requires further theoretical efforts.
§ SPIN-DEPENDENT TMD FFS
In semi-inclusive reactions, the experimental observables are usually different azimuthal asymmetries. In the kinematic region of TMD factorization, they are directly linked to TMD PDFs or FFs. The transverse momenta of partons and hadrons are often entangled with their polarizations. As a consequence, there are abundant polarization-dependent azimuthal asymmetries that can be measured in the experiment. This is particularly true for the transverse polarization. It is thought to provide only subleading power contributions compared to the longitudinal polarization at high energy; however, it often generates leading power contributions when correlated with the transverse momenta. In this section, we summarize the definition of the spin-dependent TMD FFs for hadrons with different spins. The following discussion only applies at the LO level since the TMD factorization for higher twist contributions is still far from being concluded. Therefore, we remove the scale dependence from TMD FFs.
§.§ The Intuitive Definition of TMD FFs
FFs represent the momentum distribution of a hadron inside a hadronic jet produced by the fragmenting high-energy parton. We use D^q→ h(k;p) to denote the probability density of producing a hadron h with momentum p from a quark with momentum k.
In the high-energy limit, we can safely neglect the quark and hadron mass. Therefore, we have k^2 = p^2 =0. In the naive parton model picture, the hadrons move colinearly with the parent quark. We thus have p=zk where p is the hadron momentum, k is the quark momentum, and z is the momentum fraction. In this case, the FF is only a scalar function of z. We have
D^q→ h(k;p) = D_1^q→ h(z),
where D_1^q→ h(z) is simply the unpolarized FF.
With the spin degree of freedom taken into account, the FFs will also depend on additional parameters which characterize the polarization of the final state hadron or the fragmenting quark. For example, for the production of spin-1/2 hadrons, we need to introduce λ_q and λ to describe the helicities and introduce s⃗_Tq and S⃗_T to describe the transverse polarizations of the quark and the hadron. With more available parameters, we can construct two additional scalar structures, λ_q λ and s⃗_Tq·S⃗_T, according to parity conservation. Therefore, the complete decomposition of the FF is given by
D^q→ h(k,S_q;p,S) = D_1^q→ h(z) + λ_q λ G_1L^q→ h(z) + s⃗_Tq·S⃗_T H_1T^q→ h(z),
where G_1L(z) and H_1T(z) are the longitudinal and transverse spin transfers from the quark to the hadron, respectively. The physical interpretations of these probability densities coincide with those of the leading twist FFs in the colinear factorization approach.
In some cases, the transverse momentum of the final state hadron with respect to the quark momentum becomes relevant to the observable of interest. The interplay between the transverse momentum p_⊥ and the polarization parameters induces intriguing phenomena. Again, we use the spin-1/2 hadron production as an example. From the parton model, we obtain the following eight TMD probability densities:
D(k,S_q;p,S) = D_1(z,p_⊥) + λ_q λ G_1L(z,p_⊥) + s⃗_Tq·S⃗_T H_1T(z,p_⊥)
+ 1/MS⃗_T · (k̂⃗̂×p⃗_⊥) D_1T^⊥(z,p_⊥) + 1/Mλ_q (S⃗_T ·p⃗_⊥) G_1T^⊥(z,p_⊥)
+ 1/Ms⃗_Tq· (k̂⃗̂×p⃗_⊥) H_1^⊥(z,p_⊥) + 1/Mλ (s⃗_Tq·p⃗_⊥) H_1L^⊥(z,p_⊥)
+ 1/M^2 (s⃗_Tq·p⃗_⊥)(S⃗_T ·p⃗_⊥) H_1T^⊥(z,p_⊥).
Here, we have dropped the q→ h superscript for simplicity. These TMD FFs correspond to the eight leading twist TMD FFs defined in the TMD factorization approach. Among them, we notice in particular the famous Collins function H_1^⊥ <cit.> and the Sivers-type FF D_1T^⊥ <cit.>. They are usually referred to as the naive-T-odd FFs. If one neglected the interactions among the final state hadrons and the gauge link (which will be explained below), time-reversal invariance would demand that these two functions vanish. However, the time-reversal operation converts the “out” state to the “in” state. The interaction among hadrons implies that one can no longer find a simple relation between the “in” and “out” states. Therefore, the time-reversal invariance actually poses no constraints on FFs. This feature can be fully appreciated in the context of parton correlators in the next subsection. Furthermore, we use H to denote FFs associated with the transverse polarization of the fragmenting quark s⃗_Tq. They are chiral-odd FFs. The reason for this will also be explained later.
§.§ The Definition of TMD FFs from the Parton Correlators
In the language of quantum field theory, the quark FFs are defined via the decomposition of parton correlators, such as the quark–quark correlator and the quark–gluon correlator. Usually, we need to define the gauge-invariant quark–quark correlators in the very beginning. From <cit.>, we have the following:
Ξ̂_ij^(0)(k;p,S) =1/2π ∑_X ∫ d^4ξ e^-ik ξ⟨ 0| ℒ^† (0;∞) ψ_i(0) |p,S;X⟩⟨ p,S;X|ψ̅_j(ξ) ℒ(ξ;∞) |0⟩,
where ξ is the coordinate of the quark field, k and p denote the 4-momenta of the fragmenting quark and the produced hadron, respectively; S denotes the hadron spin; and ℒ(ξ;∞) is the gauge link that ensures the gauge invariance of the definition of the correlator. The indices i and j label the components of the corresponding spinors. Therefore, Ξ̂_ij^(0)(k;p,S) is one element of a 4 × 4 matrix, which is denoted by Ξ̂^(0)(k;p,S).
As for the TMD FFs, we can integrate the above master correlator over the k^- component and obtain the following TMD quark–quark correlator:
Ξ̂_ij^(0)(z,k_⊥;p,S) =
∑_X ∫p^+dξ^-/2π d^2ξ_⊥ e^-i(p^+ξ^-/z - k⃗_⊥·ξ⃗_⊥)
×⟨ 0| ℒ^† (0;∞) ψ_i(0) |p,S;X⟩⟨ p,S;X|ψ̅_j(ξ) ℒ(ξ;∞) |0⟩,
where z=p^+/k^+ is the longitudinal momentum fraction of the hadron, and k_⊥ is the transverse momentum of the fragmenting quark with respect to the hadron momentum. Unlike the discussion in the previous sections, it is more convenient to express the parton correlators as a function of k_⊥ instead of p_⊥. Nonetheless, since we have the approximation k_⊥ = - p_⊥/z, these two methods are equivalent.
Although the TMD quark–quark correlator is a nonperturbative object, we can still discuss some general features from the definition. For instance, it possesses hermiticity, parity invariance, and charge-conjugation symmetry. As will be shown below, these properties constrain the structures of the correlator. However, unlike the case for PDFs, time-reversal invariance does not constrain FFs.
Furthermore, the quark–quark correlator is a 4× 4 matrix in the Dirac space. Therefore, it can always be decomposed in terms of 16 Γ-matrices, i.e.,
Ξ̂^(0)(z,k_⊥;p,S) = Ξ^(0)(z,k_⊥;p,S) + iγ_5 Ξ̃^(0)(z,k_⊥;p,S) + γ^αΞ_α^(0)(z,k_⊥;p,S)
+ γ_5γ^αΞ̃_α^(0)(z,k_⊥;p,S) + iσ^αβγ_5 Ξ_αβ^(0)(z,k_⊥;p,S).
The coefficient functions Ξ^(0), Ξ̃^(0), Ξ_α^(0), Ξ̃_α^(0) and Ξ_αβ^(0) are given by the trace of the corresponding Γ-matrix with the correlator. These coefficient functions can further be decomposed into the products of scalar functions with basic Lorentz covariants according to their Lorentz transformation properties. The basic Lorentz covariants are constructed in terms of the available kinematic variables used in the reaction process. The scalar functions are the corresponding TMD FFs. We will present the detailed decomposition in the following subsections. Notice that the TMD quark–quark correlator given by Equation (<ref>) satisfies the constraints of hermiticity and parity conservation. This will limit the allowed Lorentz structures of the parton correlator.
Higher twist TMD FFs also receive contributions from quark–gluon correlators <cit.> in addition to the quark–quark correlator mentioned above. For example, the complete decomposition of twist-3 TMD FFs also involves contributions from the following correlator:
Ξ̂_ρ,ij^(1)(k;p,S)= ∑_X ∫d^4ξ/2π e^-ik·ξ⟨ 0| ℒ^† (0;∞) D_ρ(0)ψ_i(0) |p,S;X⟩⟨ p,S;X|ψ̅_j(ξ) ℒ(ξ;∞) |0⟩,
where D_ρ(y)≡ -i∂_ρ+gA_ρ(y) is the covariant derivative, with A_ρ(y) the gluon field. However, the twist-3 TMD FFs defined via these quark–gluon correlators are not independent of those defined via the quark–quark correlator <cit.>. They are related to each other by a set of equations derived using the QCD equation of motion γ· D(y)ψ(y)=0. Therefore, we will only show the explicit decomposition of the TMD FFs from the quark–quark correlator in the following subsections.
§.§ The Spin Dependence
With the spin degree of freedom being taken into account, the basic Lorentz covariants in the decompositions of the coefficient functions in Equation (<ref>) depend on not only momenta but also parameters describing the hadron polarization. The hadron polarization is defined in the rest frame of the hadron and is described by the spin density matrix.
For spin-1/2 hadrons, the spin density matrix is given by
ρ = 1/2( 1 + S⃗·σ⃗),
where σ⃗ is the Pauli matrix, and S⃗ is the polarization vector in the rest frame of the hadron. The covariant form of the polarization vector reads as follows:
S^μ = λp^+/Mn̅^μ + S_T^μ - λM/2p^+ n^μ.
Here, M is the hadron mass, λ is the helicity, and S_T^μ is the transverse polarization vector of the hadron. We have employed n̅^μ to represent the light-cone plus direction and n^μ to denote the minus direction. For spin-1/2 hadrons, an additional pseudo-scalar λ and an axial-vector S_T^μ are at our disposal for constructing the basic Lorentz tensors.
For spin-1 hadrons, such as the vector mesons, the polarization is described by a 3× 3 density matrix, which is usually given as <cit.>
ρ = 1/3 (1 + 3/2S^i Σ^i + 3 T^ijΣ^ij).
Here, Σ^i is the spin operator of spin-1 particle. The rank-2 tensor polarization basis Σ^ij is defined by
Σ^ij≡1/2 (Σ^iΣ^j + Σ^j Σ^i) - 2/3δ^ij 1.
where the second term removes the trace of the symmetrized product in the first term, giving the relation
Σ^x x+Σ^y y+Σ^z z =0.
This can be easily seen from the square of the spin-1 operator, i.e., Σ^2 ≡Σ^x Σ^x + Σ^y Σ^y + Σ^z Σ^z = s(s+1) 1 with s=1 for spin-1. From Equation (<ref>), we find that a polarization tensor T is required to fully describe the polarization of a vector meson besides the polarization vector S. The polarization vector S is similar to that of spin-1/2 hadrons. It takes the same covariant form as laid out in Equation (<ref>). The polarization tensor T^ij= Tr(ρΣ^ij) has five independent components that consist of a Lorentz scalar S_LL, a Lorentz vector S_LT^μ = (0, S_LT^x, S_LT^y,0) and a Lorentz tensor S_TT^μν that has two nonzero independent components (S_TT^xx = -S_TT^yy and S_TT^xy = S_TT^yx). It is parameterized as follows:
T= 1/2(
[ -2/3S_LL + S_TT^xx S_TT^xy S_LT^x; S_TT^xy -2/3 S_LL - S_TT^xx S_LT^y; S_LT^x S_LT^y 4/3 S_LL ]).
The Lorentz covariant form for the polarization tensor is expressed as <cit.>
T^μν = 1/2 [ 4/3S_LL( p^+/M)^2 n̅^μn̅^ν + p^+/M n^{μS_LT^ν}
- 2/3S_LL(n̅^{μn^ν} - g_T^μν)
+ S_TT^μν
- M/2p^+n̅^{μS_LT^ν} + 1/3S_LL( M/p^+)^2 n^μ n^ν],
where we have used the shorthand notation A^{μB^ν}≡ A^μ B^ν + A^ν B^μ.
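The algebraic properties of this rank-2 basis are easy to verify numerically; a short sketch (our own check, using the standard spin-1 matrices) confirming the tracelessness relation above:

```python
import numpy as np

# Spin-1 operators in the S_z basis
sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)/np.sqrt(2)
sz = np.diag([1, 0, -1]).astype(complex)
Sigma = {'x': sx, 'y': sy, 'z': sz}

def Sigma_ij(i, j):
    """Rank-2 tensor polarization basis for spin 1."""
    anti = 0.5 * (Sigma[i] @ Sigma[j] + Sigma[j] @ Sigma[i])
    return anti - (2.0/3.0) * float(i == j) * np.eye(3)

# Sigma^xx + Sigma^yy + Sigma^zz = Sigma^2 - 2*1 = 0 since s(s+1) = 2
total = sum(Sigma_ij(i, i) for i in 'xyz')
print(np.allclose(total, 0))   # True
```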
For spin-3/2 hadrons, such as the decuplet baryons, the polarization is described by a 4× 4 density matrix which is given by <cit.>
ρ=1/4(1+4/5 S^iΣ^i+2/3 T^i jΣ^i j+8/9R^i j kΣ^i j k).
Here, Σ^i is the spin operator of the spin-3/2 particle, and S^i is the corresponding polarization vector.
Similar to the spin-1 case, Σ^ij is the rank-2 polarization tensor basis, which has five independent components. It can be constructed from Σ^i and is given by
Σ^i j =1/2(Σ^iΣ^j+Σ^jΣ^i)-5/4δ^i j 1.
Notice that the square of the spin-3/2 operator is given by ∑_i (Σ^i)^2 = 3/2(3/2+1) 1 = 15/4 1. The rank-2 tensor polarization basis for spin-3/2, Σ^ij, is also chosen to be traceless as laid out by Equation (<ref>). Therefore, the second term in Equation (<ref>) is different from that in Equation (<ref>) for spin-1 hadrons. The corresponding polarization tensor T^ij also has five independent components which are the same as those for spin-1 hadrons. The rank-3 tensor polarization basis Σ^ijk is unique for spin-3/2 hadrons. It has seven independent components which can be constructed as follows:
Σ^i j k =1/6Σ^{iΣ^jΣ^k}
-41/60(δ^ijΣ^k + δ^jkΣ^i + δ^kiΣ^j)
= 1/3(Σ^ijΣ^k + Σ^jkΣ^i + Σ^kiΣ^j)
- 4/15(δ^ijΣ^k + δ^jkΣ^i + δ^kiΣ^j ),
where the symbol {⋯} stands for the sum of all possible permutations.
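The two expressions for Σ^ijk given above can be checked to coincide, and the rank-3 basis to be traceless in any pair of indices, with a short numerical sketch (again our own verification, constructing the spin-3/2 matrices from the ladder operators):

```python
import numpy as np
from itertools import permutations

s = 1.5
m = np.arange(s, -s - 1, -1)                     # 3/2, 1/2, -1/2, -3/2
Sp = np.zeros((4, 4), dtype=complex)
for k in range(1, 4):                            # <m+1|S_+|m>
    Sp[k - 1, k] = np.sqrt(s*(s + 1) - m[k]*(m[k] + 1))
S = {'x': (Sp + Sp.T.conj())/2,
     'y': (Sp - Sp.T.conj())/(2j),
     'z': np.diag(m).astype(complex)}
I4 = np.eye(4)
d = lambda i, j: float(i == j)

def Sigma2(i, j):                                # rank-2 basis with -5/4 delta
    return 0.5*(S[i] @ S[j] + S[j] @ S[i]) - 1.25*d(i, j)*I4

def Sigma3_sym(i, j, k):                         # symmetrized-product form
    sym = sum(S[a] @ S[b] @ S[c] for a, b, c in permutations((i, j, k)))
    return sym/6 - (41/60)*(d(i, j)*S[k] + d(j, k)*S[i] + d(k, i)*S[j])

def Sigma3_alt(i, j, k):                         # form built from Sigma^{ij}
    return (Sigma2(i, j) @ S[k] + Sigma2(j, k) @ S[i] + Sigma2(k, i) @ S[j])/3 \
        - (4/15)*(d(i, j)*S[k] + d(j, k)*S[i] + d(k, i)*S[j])

same = all(np.allclose(Sigma3_sym(i, j, k), Sigma3_alt(i, j, k))
           for i in 'xyz' for j in 'xyz' for k in 'xyz')
traceless = all(np.allclose(sum(Sigma3_sym(i, i, k) for i in 'xyz'), 0)
                for k in 'xyz')
print(same, traceless)   # True True
```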
The corresponding rank-3 spin tensor R^ijk is defined as follows:
R^i j k=1/4[
(
[ -3S_LLT^x+S_TTT^xxx -S_LLT^y+S_TTT^yxx -2S_LLL+S_LTT^xx; -S_LLT^y+S_TTT^yxx -S_LLT^x-S_TTT^xxx S_LTT^xy; -2S_LLL+S_LTT^xx S_LTT^xy 4S_LLT^x ])
(
[ -S_LLT^y+S_TTT^yxx -S_LLT^x-S_TTT^xxx S_LTT^xy; -S_LLT^x-S_TTT^xxx -3S_LLT^y-S_TTT^yxx -2S_LLL-S_LTT^xx; S_LTT^xy -2S_LLL-S_LTT^xx 4S_LLT^y ])
(
[ -2S_LLL+S_LTT^xx S_LTT^xy 4S_LLT^x; S_LTT^xy -2S_LLL-S_LTT^xx 4S_LLT^y; 4S_LLT^x 4S_LLT^y 4S_LLL ])
].
Meanwhile, the Lorentz covariant form is given as follows:
R^μνρ =
1/4{S_LLL[1/2(M/P ·n̅)^3 n̅^μn̅^νn̅^ρ
-1/2(M/P ·n̅)(n̅^{μn̅^ν n^ρ}
-n̅^{μg_T^νρ})
+(P ·n̅/M)(n̅^{μ n^ν n^ρ}-n^{μg_T^νρ})-4(P ·n̅/M)^3 n^μ n^ν n^ρ]
+1/2(M/P ·n̅)^2 n̅^{μn̅^ν S_LLT^ρ}+2(P ·n̅/M)^2 n^{μ n^ν S_LLT^ρ}-2n̅^{μ n^ν S_LLT^ρ}+1/2S_LLT^{μg_T^νρ}
+1/4(M/P ·n̅) n̅^{μ S_LTT^νρ}-1/2(P ·n̅/M) n^{μ S_LTT^νρ}+S_TTT^μνρ}.
§.§ Decomposition Result for Spin-Dependent TMD FFs
The results for TMD FFs of spin-1 hadrons defined via quark–quark correlator exist up to twist-4 level in the literature <cit.>. The leading twist TMD FFs for spin-3/2 hadrons have also been presented in <cit.>. In this section, we summarize the general decomposition of the quark–quark correlator in terms of TMD FFs for the unpolarized part, polarization-vector-dependent part, rank-2-polarization-tensor-dependent part, and rank-3-polarization-tensor-dependent parts. To describe the production of pseudoscalar mesons, we only need the unpolarized part. To describe the production of baryons, we need to combine the unpolarized and the polarization-vector-dependent parts. The description of the spin-3/2 hadron production requires all four parts. However, it should be noted that different conventions are employed in different works.
The notation system for TMD FFs in this review is laid out here. We use D, G, and H to denote FFs of unpolarized, longitudinally polarized, and transversely polarized quarks, respectively. They are obtained from the decomposition of the γ_μ, γ_5γ_μ and γ_5 σ_μν terms of the quark–quark correlator. Those FFs defined from the decomposition of the 1 and γ_5 terms are denoted as E. We use the numbers 1 and 3 in the subscripts to denote the leading twist and twist-4 FFs, respectively. Other FFs without numbers in the subscripts are at the twist-3 level. The polarization of the produced hadron will be specified in the subscripts, where L and T represent longitudinal and transverse polarizations, and LL, LT, and TT stand for the rank-2-tensor polarizations. The symbol ⊥ in the superscript implies that the corresponding basic Lorentz structure depends on the transverse momentum k_⊥.
The decomposition for the unpolarized part is given by the following:
zΞ^U(0)(z,k_⊥;p) = ME(z,k_⊥),
zΞ̃^U(0)(z,k_⊥;p) =0,
zΞ_α^U(0)(z,k_⊥;p) = p^+ n̅_α D_1(z,k_⊥)+ k_⊥α D^⊥(z,k_⊥) + M^2/p^+n_α D_3(z,k_⊥),
zΞ̃_α^U(0)(z,k_⊥;p) = -k̃_⊥α G^⊥(z,k_⊥),
zΞ_ρα^U(0)(z,k_⊥ ;p) = -p^+/Mn̅_[ρk̃_⊥α] H_1^⊥(z,k_⊥) + Mε_⊥ρα H(z,k_⊥)
- M/p^+ n_[ρk̃_⊥α] H_3^⊥(z,k_⊥).
Here, k̃_⊥α≡ε_⊥μαk_⊥^μ denotes the transverse vector orthogonal to k_⊥α, with ε_⊥μν being defined as ε_⊥μν≡ε_μναβn̅^α n^β. There are eight TMD FFs for the unpolarized part. Among them, the number density D_1 and the Collins function H_1^⊥ are at the leading twist. They both have twist-4 companions, i.e., D_3 and H_3^⊥, respectively. The other four are twist-3 FFs. The TMD FFs G^⊥, H_1^⊥, H, and H_3^⊥ (together with D_1T^⊥ in the polarization-vector-dependent part below) are usually referred to as the naive T-odd FFs. The reader may have already discerned that the T-odd FFs are always associated with the Levi-Civita tensor, ε_μναβ. It should be noted that T-odd PDFs can only survive thanks to the gauge link. However, for the FFs, the final state interactions between the produced hadrons in the hadronization process can also contribute to the T-oddness. This difference has a more important impact on the polarization-vector-dependent T-odd PDFs and FFs, which are discussed below.
The decomposition for the vector polarized part is given by the following:
zΞ^V(0)(z,k_⊥;p,S) = (k̃_⊥· S_T) E_T^⊥(z,k_⊥),
zΞ̃^V(0)(z,k_⊥;p,S) = M [ λ E_L(z,k_⊥) + k_⊥· S_T/M E^'⊥_T(z,k_⊥) ],
zΞ_α^V(0)(z,k_⊥;p,S) = p^+ n̅_αk̃_⊥· S_T/M D_1T^⊥(z,k_⊥)
- MS̃_Tα D_T(z,k_⊥)
- k̃_⊥α[ λ D_L^⊥(z,k_⊥) + k_⊥· S_T/MD_T^⊥(z,k_⊥) ] + M/p^+n_α (k̃_⊥· S_T) D_3T^⊥(z,k_⊥),
zΞ̃_α^V(0)(z,k_⊥;p,S) = p^+ n̅_α[ λ G_1L(z,k_⊥) + k_⊥· S_T/M G_1T^⊥(z,k_⊥) ]
- MS_Tα G_T(z,k_⊥) - k_⊥α[ λ G_L^⊥(z,k_⊥) + k_⊥· S_T/M G_T^⊥(z,k_⊥) ]
+ M^2/p^+n_α[ λ G_3L(z,k_⊥) + k_⊥· S_T/M G_3T^⊥(z,k_⊥) ],
zΞ_ρα^V(0)(z,k_⊥;p,S) = p^+ n̅_[ρS_Tα] H_1T(z,k_⊥) + p^+/Mn̅_[ρk_⊥α][ λ H_1L^⊥(z,k_⊥)
+ k_⊥· S_T/M H_1T^⊥(z,k_⊥) ]
+ k_⊥[ρS_Tα] H_T^⊥(z,k_⊥) + M n̅_[ρn_α][ λ H_L(z,k_⊥)
+ k_⊥· S_T/M H_T^'⊥(z,k_⊥) ]
+ M^2/p^+ n_[ρS_Tα] H_3T(z,k_⊥)
+ M/p^+ n_[ρk_⊥α][ λ H_3L^⊥(z,k_⊥) + k_⊥· S_T/M H_3T^⊥(z,k_⊥) ].
There are in total 24 polarization-vector-dependent TMD FFs. Of these, 6 contribute at the leading twist, 12 at twist-3, and the remaining 6 at twist-4.
Among the six leading twist FFs, G_1L is the longitudinal spin transfer, H_1T and H_1T^⊥ are transverse spin transfers, G_1T^⊥ is the longitudinal-to-transverse spin transfer, H_1L^⊥ is the transverse-to-longitudinal spin transfer, and D_1T^⊥ induces the transverse polarization of hadrons in the fragmentation of an unpolarized quark. We note in particular that the D_1T^⊥ FF resembles the Sivers function in PDFs <cit.>. It is responsible for the hadron transverse polarization along the normal direction of the production plane in high-energy collisions. It is also a naive T-odd FF. However, as mentioned above, the T-oddness has little meaning in the context of hadronization. The T-odd PDFs arise solely from the gauge link. Therefore, it has been proven theoretically that there is a sign-flip between the Sivers functions in SIDIS and Drell-Yan <cit.>. However, the T-oddness of FFs can also be generated from the interaction among final state hadrons. Therefore, there is no analogous relation for the D_1T^⊥ FF between different processes. Besides D_1T^⊥, there are seven other T-odd FFs, namely, E_T^⊥, E_L, E_T^'⊥, D_L^⊥, D_T, D_T^⊥, and D_3T^⊥. The rest are T-even. All of the T-odd FFs are accompanied by the Levi-Civita tensor except for E_L and E_T^'⊥.
The decomposition for the rank-2-polarization-tensor-dependent part is given as follows:
zΞ^T(0)(z,k_⊥;p,S) =M [ S_LLE_LL(z,k_⊥)
+ k_⊥· S_LT/M E_LT^⊥(z,k_⊥)
+ S_TT^kk/M^2 E_TT^⊥(z,k_⊥)],
zΞ̃^T(0)(z,k_⊥;p,S) =M [ k̃_⊥· S_LT/M E_LT^'⊥ (z,k_⊥)
+ S_TT^k̃ k/M^2 E_TT^'⊥(z,k_⊥) ],
zΞ_α^T(0)(z,k_⊥;p,S) = p^+ n̅_α[ S_LL D_1LL(z,k_⊥)
+ k_⊥· S_LT/MD_1LT^⊥ (z,k_⊥) + S_TT^kk/M^2 D_1TT^⊥(z,k_⊥) ]
+ M S_LTα D_LT(z,k_⊥) +S_TTα^k D_TT^'⊥(z,k_⊥)
+ k_⊥α[ S_LLD_LL^⊥(z,k_⊥) + k_⊥· S_LT/MD_LT^⊥ (z,k_⊥)
+ S_TT^kk/M^2 D_TT^⊥(z,k_⊥) ]
+ M^2/p^+n_α[ S_LL D_3LL(z,k_⊥) + k_⊥· S_LT/MD_3LT^⊥ (z,k_⊥)
+ S_TT^kk/M^2 D_3TT^⊥(z,k_⊥) ],
zΞ̃_α^T(0)(z,k_⊥;p,S) = p^+ n̅_α[ k̃_⊥· S_LT/MG_1LT^⊥(z,k_⊥)
+ S_TT^k̃k/M^2G_1TT^⊥(z,k_⊥) ]
- MS̃_LTα G_LT(z,k_⊥)
- S̃_TTα^k G_TT^'⊥(z,k_⊥)
- k̃_⊥α[ S_LLG_LL^⊥(z,k_⊥)
+ k_⊥· S_LT/MG_LT^⊥ (z,k_⊥) + S_TT^kk/M^2 G_TT^⊥(z,k_⊥) ]
+ M^2/p^+n_α[ k̃_⊥· S_LT/MG_3LT^⊥(z,k_⊥)
+ S_TT^k̃k/M^2 G_3TT^⊥(z,k_⊥) ] ,
zΞ_ρα^T(0)(z,k_⊥ ;p,S) = -p^+ n̅_[ρS̃_LTα] H_1LT(z,k_⊥)
- p^+/Mn̅_[ρS̃_TTα]^k H_1TT^'⊥(z,k_⊥)
-p^+/Mn̅_[ρk̃_⊥α][ S_LL H_1LL^⊥(z,k_⊥)
+ k_⊥· S_LT/MH_1LT^⊥ (z,k_⊥) + S_TT^kk/M^2 H_1TT^⊥(z,k_⊥) ]
+ Mε_⊥ρα[ S_LL H_LL(z,k_⊥) + k_⊥· S_LT/MH_LT^⊥ (z,k_⊥)
+ S_TT^kk/M^2 H_TT^⊥(z,k_⊥) ]
+ n̅_[ρn_α][ (k̃_⊥· S_LT) H_LT^'⊥(z,k_⊥)
+ S_TT^k̃k/MH_TT^'⊥(z,k_⊥) ]
- M/p^+ n_[ρk̃_⊥α][ S_LL H_3LL^⊥(z,k_⊥)
+ k_⊥· S_LT/MH_3LT^⊥ (z,k_⊥) + S_TT^kk/M^2 H_3TT^⊥(z,k_⊥) ]
- M/p^+ n_[ρ M S̃_LTα][ H_3LT(z,k_⊥)
+ S̃_TTα]^kH_3TT^'⊥(z,k_⊥) ].
We have used shorthand notations such as S_TT^kk≡ S_TT^αβ k_⊥αk_⊥β. There are in total 40 tensor-polarization-dependent TMD FFs; of these, 10 contribute at the leading twist, 20 at twist-3, and the remaining 10 at twist-4.
The 24 TMD FFs defined from the decomposition of Ξ̃_α^T(0) and Ξ_ρα^T(0) are naive T-odd.
Among these TMD FFs, we notice in particular that the S_LL-dependent TMD FF D_1LL, which is responsible for the spin alignment of the produced vector meson, is decoupled from the quark polarization. This suggests that the vector meson spin alignment can also be observed in unpolarized high-energy collisions <cit.>. Besides, D_1LL also survives the k_⊥-integral. Therefore, it also appears in the colinear factorization.
The rank-3-polarization-tensor-dependent TMD FFs are unique for spin-3/2 (or higher) hadrons.
A complete set of leading twist quark TMD FFs for spin-3/2 hadrons has been given in <cit.>.
There are in total 14 rank-3-polarization-tensor-dependent TMD FFs that can be defined at the leading twist level.
We refer interested readers to <cit.> for a detailed discussion.
§.§ TMD FFs of Antiquarks and Gluons
One can define antiquark TMD FFs by replacing the fermion fields in the correlator of quark TMD FFs with the charge-conjugated fields.
Therefore, it is easy to find that the traces of the correlator with the Dirac matrices I, iγ_5 and γ^μγ_5 will have an opposite sign between the quark and antiquark cases, while the traces with γ^μ and iσ^μνγ_5 are the same <cit.>.
The definition and parameterization of the antiquark TMD FFs are then fully analogous to those of the quark TMD FFs.
The gluon FFs are defined through the gluon correlator given by <cit.>
Γ̂^μν;ρσ(k;p,S)
= ∑_X ∫d^4ξ/(2π)^4 e^ik·ξ⟨ 0| F^ρσ(ξ) | p,S; X⟩⟨ p,S;X| U(ξ,0) F^μν(0)| 0⟩,
where F^ρσ(ξ)≡ F^ρσ,aT^a is the gluon field strength tensor, and U(ξ,0) is the Wilson line in the adjoint representation that renders the correlator gauge invariant. Under the assumption that the fragmenting parton moves in the plus direction, an integration over the k^- component is carried out to give the TMD gluon correlator.
At the leading twist, we need to consider
M Γ̂^ij(z,k_⊥;p,S) =
∫ dk^- Γ^+j;+i(k;p,S),
where i and j are Lorentz indices in the transverse directions.
For the spin-1/2 hadron production, there are eight leading twist gluon TMD FFs which are given by the decomposition of the TMD gluon correlator <cit.>. We have the following:
Γ̂_U^i j (z, k_⊥; p, S) =
p^+/M[-g_T^i j D_1g (z, k_⊥)+(k_⊥^i k_⊥^j/M^2+g_T^i jk_⊥^2/2 M^2) H_1g^⊥(z, k_⊥ )],
Γ̂_L^i j(z, k_⊥; p, S) =
- λp^+/M[ i ε_⊥^i j G_1Lg (z, k_⊥)-ε_⊥^k_⊥{i k_⊥^j}/2 M^2 H_1Lg^⊥(z, k_⊥)],
Γ̂_T^i j(z, k_⊥; p, S) =
- p^+/M[ g_T^i jε_⊥^k_⊥ S_T/M D_1Tg^⊥ (z, k_⊥) + i ε_⊥^i jk_⊥·S_T/M G_1Tg^⊥ (z, k_⊥)
- ε_⊥^k_⊥{i S_T^j}+ε_⊥^S_T{i k_⊥^j}/4 M H_1Tg (z, k_⊥)
- ε_⊥^k_⊥{i k_⊥^j}/2 M^2k_⊥·S_T/M H_1Tg^⊥(z, k_⊥) ].
Γ̂_U, Γ̂_L, and Γ̂_T stand for the unpolarized, longitudinally polarized, and transversely polarized parts of the hadron production, respectively. Analogously to the quark FFs, we have used D to represent FFs of unpolarized gluons, G to represent FFs of circularly polarized gluons, and H to represent FFs of linearly polarized gluons. Higher twist gluon TMD FFs are also discussed in <cit.>, which further detail the parameterizations.
§ EXPERIMENT AND PHENOMENOLOGY
In high-energy experiments, the polarization of final state hadrons is usually measured from the angular distribution of their decay products, and it is very challenging to acquire accurate experimental data. In light of the large number of free parameters, the spin-dependent FFs are not well constrained experimentally. Compared with the case of unpolarized PDFs or FFs, the quantitative study of spin-dependent FFs is still immature. That said, there are already quite a few phenomenological studies making full use of the available experimental data. In this section, we summarize the available experimental data and the corresponding phenomenological studies.
§.§ Λ Hyperons
The polarization of Λ^0 hyperons is usually measured from the angular distribution of the daughter proton in the parity-violating Λ^0 → p + π^- decay channel. In the rest frame of Λ^0, the normalized angular distribution of the daughter proton reads as follows:
1/NdN/dcosθ^* = 1/2 (1 + α Pcosθ^*) ,
where α = 0.732 ± 0.014 is the decay parameter of Λ <cit.>, P is the polarization of Λ along a specified direction, and θ^* is the angle between the proton momentum and the specified direction to measure the Λ polarization.
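Since ⟨cosθ^*⟩ = α P/3 for this distribution, the polarization can be estimated directly from the mean of cosθ^*. A toy Monte Carlo sketch (illustrative input polarization, not real data) of this extraction:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, P_true, N = 0.732, 0.3, 200_000

# Sample cos(theta*) from (1 + alpha*P*cos)/2 by accept-reject
c = rng.uniform(-1.0, 1.0, size=4*N)
u = rng.uniform(0.0, 1.0, size=c.size)
cos_t = c[u < (1 + alpha*P_true*c) / (1 + alpha*abs(P_true))][:N]

# <cos theta*> = alpha*P/3, hence the simple moment estimator below
P_est = 3.0 * cos_t.mean() / alpha
print(f"extracted P = {P_est:.3f} (input {P_true})")
```

In practice, experiments fit the full angular distribution and correct for acceptance, but the moment estimator captures the essential idea.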
LEP was an e^+e^- collider operating at the Z^0-pole. Due to the parity violation in the weak interaction, the produced quark and antiquark are strongly polarized along the longitudinal direction. The longitudinal polarizations of the final state quarks and antiquarks in e^+e^- annihilation at different collisional energies can be easily computed at the LO level and are explicitly shown in <cit.>. At the Z^0-pole, the longitudinal polarization of the final state down-type quarks can reach 0.9. That of the up-type quarks is a bit smaller but is still about 0.6 ∼ 0.7. Based on the SU(6) spin-flavor symmetry, the polarization of Λ^0 is determined by the polarization of the s quark. It was thus proposed in <cit.> that the final state Λ^0 hyperons are also strongly polarized at LEP, and that the measurement of this polarization can probe interesting information on the hadronization mechanism. In the language of QCD factorization, LEP is the ideal place to study the longitudinal spin transfer G_1L (z), which represents the number density of producing longitudinally polarized Λ^0 hyperons from longitudinally polarized quarks. It is the p_T-integrated version of the TMD FF G_1L(z,p_⊥).
At the leading order and leading twist, the longitudinal polarization of Λ^0 reads as follows: <cit.>
P_L (y,z) = ∑_q λ_q (y) ω_q (y) G_1L,q (z) + { q ↔q̅; y ↔ (1-y) }/∑_q ω_q (y) D_1,q (z) + { q ↔q̅; y ↔ (1-y) },
where λ_q (y) = Δω_q (y) / ω_q (y) is the helicity of the fragmenting quark with Δω_q (y) and ω_q (y) being defined as follows:
Δω_q (y) = χ T_1^q (y) + χ_ int^q I_1^q (y),
ω_q (y) = χ T_0^q (y) + χ_ int^q I_0^q (y) + e_q^2 A(y),
T_1^q(y) = - 2c_V^q c_A^q [(c_V^e)^2 + (c_A^e)^2] A(y) + 2[(c_V^q)^2 + (c_A^q)^2] c_V^e c_A^e B (y),
T_0^q (y) = [(c_V^q)^2 + (c_A^q)^2] [(c_V^e)^2 + (c_A^e)^2] A(y) - 4 c_V^q c_A^q c_V^e c_A^e B(y),
I_1^q (y) = - c_A^q c_V^e A(y) + c_V^q c_A^e B(y),
I_0^q (y) = c_V^q c_V^e A(y) - c_A^q c_A^e B(y).
Here, y=(1+cosθ)/2 with θ being the angle between the outgoing Λ and the incoming electron. The coefficient functions are given as A(y)= (1-y)^2 + y^2, B(y) = 1-2y, χ = Q^4/[(Q^2-M_Z^2)^2 + Γ_Z^2 M_Z^2] sin^4 2θ_W, and χ_ int^q = -2 e_q Q^2(Q^2-M_Z^2)/[(Q^2-M_Z^2)^2 + Γ_Z^2 M_Z^2] sin^2 2θ_W. c_V^q/e and c_A^q/e are the coupling constants of the vector-current and axial-vector-current parts of the quark/electron with the Z^0; M_Z is the mass of the Z^0 and Γ_Z is its width. Notice that λ_q̅ (y) = - λ_q (1-y). The quark helicity and antiquark helicity have the opposite sign. This is in line with the sign flip in Section <ref>. Therefore, the polarization of Λ̅^0 is expected to have the opposite sign to that of Λ^0.
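Plugging standard electroweak inputs into these expressions reproduces the quark polarizations quoted above. A short numerical sketch (PDG-like values used for illustration; c_A^f = T_3^f and c_V^f = T_3^f - 2 e_f sin^2θ_W are assumed conventions):

```python
import numpy as np

sin2w, MZ, GZ = 0.2312, 91.19, 2.50          # illustrative electroweak inputs

def cV_cA(T3, eq):                            # c_A = T3, c_V = T3 - 2 e_q sin^2(theta_W)
    return T3 - 2.0*eq*sin2w, T3

cVe, cAe = cV_cA(-0.5, -1.0)                  # electron couplings

def domega_omega(T3, eq, y, Q=MZ):
    """(Delta omega_q, omega_q) from the LO expressions above."""
    cV, cA = cV_cA(T3, eq)
    A, B = (1.0 - y)**2 + y**2, 1.0 - 2.0*y
    s4 = (4.0*sin2w*(1.0 - sin2w))**2         # sin^4(2 theta_W)
    den = (Q**2 - MZ**2)**2 + GZ**2*MZ**2
    chi = Q**4 / (den*s4)
    chi_i = -2.0*eq*Q**2*(Q**2 - MZ**2) / (den*np.sqrt(s4))
    T1 = -2*cV*cA*(cVe**2 + cAe**2)*A + 2*(cV**2 + cA**2)*cVe*cAe*B
    T0 = (cV**2 + cA**2)*(cVe**2 + cAe**2)*A - 4*cV*cA*cVe*cAe*B
    I1 = -cA*cVe*A + cV*cAe*B
    I0 = cV*cVe*A - cA*cAe*B
    return chi*T1 + chi_i*I1, chi*T0 + chi_i*I0 + eq**2*A

y = np.linspace(0.0, 1.0, 401)
for name, T3, eq in [("down-type", -0.5, -1.0/3.0), ("up-type", 0.5, 2.0/3.0)]:
    dw, w = domega_omega(T3, eq, y)
    print(f"{name}: y-averaged quark helicity ~ {dw.mean()/w.mean():+.2f}")
```

With these inputs, the magnitudes come out around 0.94 for down-type and 0.67 for up-type quarks (the negative sign indicating net left-handedness in this convention), consistent with the numbers quoted above.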
Since the quark helicity λ_q (y) and the production weight ω_q (y) are calculable in quantum field theory, the measurement of the longitudinal polarization of the final state Λ^0 as a function of z can directly provide information on the longitudinal spin transfer. Such experiments were eventually carried out by the ALEPH and OPAL collaborations at LEP in the 1990s <cit.>. As shown in Figure <ref>, the longitudinal polarization increases monotonically with increasing z, which provides a hint on how to parameterize the longitudinal spin transfer.
Following the release of these experimental data, many phenomenological studies <cit.> were carried out to understand the longitudinal spin transfer G_1L(z). Among them, the de Florian–Stratmann–Vogelsang (DSV) parameterization offers three scenarios. The first scenario is based on the naive parton model, which assumes that only the s quark contributes to the longitudinal spin transfer at the initial scale. The second scenario assumes that the u and d quarks contribute to negative G_1L(z) at the initial scale. The third scenario assumes that u, d, and s contribute equally. All three can describe the experimental data reasonably well. A more recent Chen–Yang–Zhou–Liang (CYZL) analysis also obtained a good description of the experimental data utilizing the LO formula. The ambiguity again highlights the difficulties in the quantitative study of FFs. It can only be removed through a global analysis of the experimental data in various high-energy reactions. Therefore, many works have also made predictions for the longitudinal polarization of Λ produced in polarized SIDIS <cit.> and pp collisions <cit.>.
The inclusive DIS process with a polarized lepton beam has been used to probe the spin structure of the nucleon <cit.>. In this process, only the momentum of the final state lepton is measured. Therefore, we can gain information on the nucleon structure but lose that on the hadronization. To restore access to (spin-dependent) FFs, we have to rely on the semi-inclusive process and measure (the polarization of) one final state hadron (there are two fragmentation regimes in SIDIS, namely the current fragmentation and the target fragmentation; although the target fragmentation function is also currently a hot topic, it is beyond the scope of this review, and we focus only on the current fragmentation function). However, it is not a simple task to do so in the real world. Despite the difficulties, early attempts by the E665 <cit.> and HERMES <cit.> collaborations were still successfully performed. Recent measurements from the HERMES <cit.> and COMPASS <cit.> collaborations have also elevated the quality of experimental data to a level that sheds light on phenomenological studies. These experiments measure the spin transfer coefficient D_LL(z) (not to be confused with the spin-alignment-dependent FF D_1LL(z) of vector mesons) which, at the leading order and leading twist approximation, is given by <cit.>
D_LL (x_B,z) = ∑_q e_q^2 x_B f_1,q (x_B) G_1,q (z)/∑_q e_q^2 x_B f_1,q (x_B) D_1,q (z),
with f_1,q (x_B) being the unpolarized PDF. Due to the presence of the unpolarized PDF of proton/nucleus, the polarized SIDIS experiment favors more contributions from the u and d quarks at large x_B than from the e^+e^- collider. We show the HERMES data set as a function of z (integrating over x_B) and the COMPASS data set as a function of x_B (integrating over z) in Figure <ref>.
The experimental data from E665 <cit.> suggested a difference between D_LL for the Λ^0 production and that for the Λ̅^0 production. This was later confirmed by the COMPASS <cit.> experiment. In <cit.>, it was shown that such a difference serves as a flavor tag in the study of the G_1L FF. More studies on the flavor dependence of PDFs/FFs have been performed <cit.>. The NOMAD collaboration also carried out similar measurements in the neutrino SIDIS experiment <cit.>. Because of the flavor-changing feature of the charged weak interaction, this experiment opens more opportunities for quantitative research on the flavor dependence of spin-dependent FFs. A sophisticated investigation was presented in <cit.>.
RHIC is the first and, so far, the only polarized proton–proton collider. The helicity of the incident protons can be transferred to that of the partons through the longitudinal spin transfer g_1L(x) of PDFs. Therefore, it also has the capability of probing G_1L(z) of the fragmentation. The first measurement was performed in 2009 <cit.>, while an improved analysis was presented in 2018 <cit.>. These experiments measure the spin transfer coefficient D_LL which is defined as follows:
D_LL≡σ^p^+ p →Λ^+ + X - σ^p^+ p →Λ^- + X/σ^p^+ p →Λ^+ + X + σ^p^+ p →Λ^- + X.
The + symbol in the superscript denotes the helicity of the corresponding proton or Λ hyperon. The updated experimental data from the STAR collaboration <cit.> at RHIC are shown in Figure <ref>. These experimental data tend to favor the first and second scenarios in the DSV parameterization <cit.>. However, they cannot concretely rule out any scenario yet due to the large uncertainties. Moreover, the Xu–Liang–Sichtermann approach <cit.> based on the SU(6) spin-flavor symmetry can also describe these data well. In addition, RHIC also measured the transverse spin transfer coefficient D_TT, which is sensitive to the convolution of the transversity PDF and the transversity FF <cit.>.
The polarizations of partons participating in the same hard scattering are strongly correlated. The helicity amplitudes of different partonic processes have been evaluated and summarized in <cit.>. Thus, <cit.> proposes the dihadron polarization correlation as a probe of the longitudinal spin transfer G_1L in e^+e^- annihilations at low energy where the fragmenting quarks are not polarized. Recently, this idea was further investigated and applied to unpolarized pp collisions in <cit.>. By measuring the longitudinal polarization correlation of two almost back-to-back hadrons, we also gain access to the longitudinal spin transfer in unpolarized pp collisions. Since this observable avoids the contamination from the longitudinal spin transfer g_1L in PDFs, which is also poorly known, <cit.> provides a means of investigating the longitudinal spin transfer G_1L in FFs at RHIC, the Tevatron, and the LHC. Furthermore, this work can also be used to constrain the FF of circularly polarized gluons.
Recently, the Belle collaboration measured the transverse polarization of Λ hyperons in e^+e^- annihilations <cit.>, sparking considerable theoretical interest <cit.>. In this experiment, one first defines the hadron production plane and then measures the transverse polarization along its normal direction. Since there are two transverse directions, we refer to the polarization along one as P_N and the other one as P_T. The hadron production plane can be defined in two ways. The first one is defined by the thrust axis and the Λ momentum. In the second, the thrust axis is replaced by the momentum of a reference hadron (in the back-to-back side). Therefore, this experiment is dedicated to probing the D_1T^⊥ (z,p_⊥) FF. While the p_T-differential experimental data of P_N contain sizable uncertainties, the p_T-integrated version is quite precise, as shown in Figure <ref>. Employing the Trento convention <cit.> for the definition of D_1T^⊥ (z,p_⊥), the p_⊥ integrated transverse polarization is given by <cit.>
P_N (z_Λ) = ∑_q e_q^2 ∫ d^2 p_⊥ d^2 p_h⊥- P̂_⊥Λ· p_⊥/z_Λ M_Λ D_1,q^h (z_h,p_h⊥) D_1T,q^⊥Λ (z_Λ,p_⊥)/∑_q e_q^2 ∫ d^2 p_⊥ d^2 p_h⊥ D_1,q^h (z_h,p_h⊥) D_1,q^Λ (z_Λ,p_⊥)|_P_⊥Λ = z_Λ/z_hp_h⊥ + p_⊥,
where P̂_⊥Λ is the unit vector along the direction of P_⊥Λ. The integral in the denominator simply reduces to the product of two colinear FFs. However, to evaluate the numerator, we need to first parameterize the p_⊥ and p_h⊥ dependence at the initial scale, which then evolves to the TMD factorization scale through use of the Collins–Soper–Sterman evolution equation. Nonetheless, since the collisional energy at Belle is not very high, a Gaussian ansatz is already a good approximation. More sophisticated approaches incorporating the p_⊥ dependence can be found in <cit.>.
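As a rough illustration of how such a convolution behaves, the following toy Monte Carlo evaluates the p_⊥-integrated polarization under a Gaussian ansatz; all widths, momentum fractions, and the collinear ratio D_1T^⊥/D_1 are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
M_L, z_L, z_h = 1.116, 0.4, 0.4     # Lambda mass (GeV) and toy momentum fractions
w1, wT = 0.25, 0.20                 # toy Gaussian widths <p_perp^2> (GeV^2)
ratio = 0.1                         # toy collinear ratio D_1T^perp(z)/D_1(z)

n = 1_000_000
p = rng.normal(0.0, np.sqrt(wT/2), size=(n, 2))    # p_perp of the Lambda
ph = rng.normal(0.0, np.sqrt(w1/2), size=(n, 2))   # p_hperp of the back-to-back hadron

P = (z_L/z_h)*ph + p                               # P_perp_Lambda constraint
Phat = P / np.linalg.norm(P, axis=1, keepdims=True)

# Trento-convention weight -Phat . p_perp / (z_Lambda M_Lambda) from the numerator
weight = -(Phat*p).sum(axis=1) / (z_L*M_L)
print("toy P_N ~", ratio*weight.mean())
```

The sign and magnitude of the output simply track the assumed ratio and widths; a realistic analysis replaces the toy Gaussians with fitted parameterizations and includes the TMD evolution discussed above.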
As shown in Figure <ref>, the distinct difference between P_N measured in the Λ+π^+( or K^+) and Λ+π^-( or K^-) processes offers an opportunity to explore the flavor dependence. Early attempts, such as the D'Alesio–Murgia–Zaccheddu (DMZ) <cit.> and Callos–Kang–Terry (CKT) <cit.> parameterizations, adopted the strategy that valence parton FFs differ from each other while sea parton FFs are the same, i.e., D_1T,u^⊥Λ≠ D_1T,d^⊥Λ≠ D_1T,s^⊥Λ≠ D_1T, sea^⊥Λ. However, this approach violates the isospin symmetry, which is one of the most important features of strong interaction. Furthermore, a model calculation <cit.> based on the strict SU(6) spin-flavor symmetry failed to describe the experimental data. However, it was first shown in <cit.> that the isospin-symmetric Chen–Liang–Pan–Song–Wei (CLPSW) parameterization can still describe the experimental data well as long as the artificial constraint on sea parton FFs is released. This perspective was further investigated in Ref. <cit.> recently, which concluded that one can obtain a good fit to the Belle data with and without implementing the isospin symmetry constraint after taking into account the charm contribution. This confirms that the current Belle dataset does not imply an isospin symmetry violation in the hadronization. Furthermore, <cit.> proposed to test the isospin symmetry at the future EIC experiment. By comparing the transverse polarizations in ep and eA scatterings at large x, we can ultimately check the difference between D_1T,u^⊥ and D_1T,d^⊥.
The future EIC is a polarized electron–proton/ion collider with unprecedentedly high luminosity. It will open a new window for the quantitative study of spin-dependent FFs. Several works <cit.> have proposed and made predictions for different observables at the future EIC with polarized proton beams. These observables are sensitive to various combinations of spin-dependent PDFs and FFs. Therefore, the future measurement will reveal information on both hadron structure and hadronization. A recent work <cit.> also proposed a method to study spin-dependent PDFs/FFs in unpolarized experiments. The key idea is that the polarizations of the final state quark and initial state parton are correlated. Thanks to the Boer–Mulders function in the PDFs, the initial state quarks are transversely polarized, although the polarization depends on the azimuthal angle. This transverse polarization can further propagate into final state observables through chiral-odd FFs. By measuring the azimuthal-angle-dependent longitudinal and transverse polarizations of final state Λ, we can probe H_1L^⊥ and H_1T^⊥ even in the unpolarized SIDIS process. Moreover, we can also measure the azimuthal-angle-dependent polarizations in e^+e^- annihilations to probe combinations of the Collins function and spin-dependent chiral-odd FFs <cit.>. This idea is akin to those explored in <cit.>.
§.§ Vector Mesons
Most vector mesons decay through parity-conserving strong interactions, so their polarization vector does not enter the angular distribution of the daughter hadrons and therefore cannot be measured. In contrast, the tensor polarization does play a role in the angular distribution and thus can be measured.
Among the tensor polarization components, spin alignment, which quantifies the deviation of ρ_00 in the spin-density matrix from 1/3, has received the most attention.
Several collaborations <cit.> at LEP have measured the spin alignment of different vector mesons produced in the e^+e^- annihilation at the Z^0-pole. We show the spin alignment of K^*0 and ρ^0 measured by the OPAL <cit.> and DELPHI <cit.> collaborations in Figure <ref>. The off-diagonal matrix elements were also measured in some of the experiments. Thereafter, the NOMAD collaboration measured the vector meson spin alignment for the first time in the neutrino DIS experiment <cit.>. These measurements offer more information on the hadronization mechanism and have led to several phenomenological studies <cit.>.
Figure <ref> shows that ρ_00 is consistent with 1/3 (i.e., no spin alignment) at the small-z region. However, at large z, a clear spin alignment is observed. This pattern is similar to that for the longitudinal polarization of Λ also measured at LEP <cit.> (shown in Figure <ref>).
As mentioned above, the quarks produced at LEP, and also those at NOMAD, are strongly polarized. It is therefore tempting to attribute the tensor polarization of the final state vector mesons to the longitudinal polarization of the fragmenting quarks. However, a simple tensor structure analysis <cit.> shows that this is not the case: the spin alignment of the final state mesons does not couple to the quark polarization. Instead, it couples to the quark-polarization-summed cross section. The vector meson spin alignment in e^+e^- collisions is given by
ρ_00 = 1/3 - 1/3∑_q ω_q (y) D_1LL,q (z)/∑_q ω_q (y) D_1,q (z),
where ω_q is defined in the same way as for Λ production in the previous section, and D_1LL(z) is the FF responsible for the vector meson spin alignment. As the above equation shows, the longitudinal polarization of the fragmenting quark plays no role here. It was thus first proposed in <cit.> that the vector meson spin alignment can also be observed in other high-energy collisions in which unpolarized quarks fragment. By fitting the experimental data from LEP, subsequent work <cit.> extracted D_1LL(z) and made predictions for the spin alignment of high-p_T vector mesons in unpolarized pp collisions at RHIC and the LHC <cit.>. Furthermore, the same mechanism implies a significant spin alignment for vector mesons produced in unpolarized SIDIS. Measuring the vector meson spin alignment at the future EIC will cast new light on the quantitative study of the D_1LL (z) FF.
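As a toy illustration of this formula, the sketch below (assumptions throughout, not an extracted parameterization) evaluates ρ_00 with hand-picked collinear shapes for D_1 and D_1LL, and with the electroweak weights ω_q replaced by the pure photon-exchange weights e_q^2, which is only appropriate far from the Z pole. The shapes are chosen so that ρ_00 ≈ 1/3 at small z and rises above 1/3 at large z, mimicking the qualitative pattern of the LEP data.

# Toy evaluation of rho_00; all functional forms and weights are illustrative.
e_q2 = {"u": 4/9, "d": 1/9, "s": 1/9}   # photon-exchange weights (assumption)

def D1(z, q):
    """Toy unpolarized FF; the same shape is used for every flavor here."""
    return z**(-0.5) * (1.0 - z)**2

def D1LL(z, q):
    """Toy tensor-polarized FF; chosen negative so rho_00 > 1/3 at large z."""
    return -0.2 * z**1.5 * (1.0 - z)

def rho00(z):
    num = sum(w * D1LL(z, q) for q, w in e_q2.items())
    den = sum(w * D1(z, q) for q, w in e_q2.items())
    return 1.0 / 3.0 - num / (3.0 * den)

for z in (0.2, 0.5, 0.8):
    print(f"z = {z:.1f}  ->  rho_00 = {rho00(z):.3f}")

Since the toy shapes share a single flavor dependence, the weights cancel in the ratio here; the flavor arguments are kept only to indicate where a realistic flavor-separated parameterization would enter.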
Notice that the spin alignment of low-p_T vector mesons in AA collisions has also been measured recently at RHIC <cit.> and the LHC <cit.>. These low-p_T hadrons in relativistic heavy-ion collisions are produced through a hadronization mechanism different from fragmentation, and their tensor polarization originates from a different source.
§ MODEL CALCULATION
The PDFs and FFs are defined in terms of quark–gluon correlators as laid out in Section <ref>. Owing to the nonperturbative nature of the hadron state, we cannot evaluate them directly in theory. Thus far, several proposals for computing quantities that can be related to the PDFs in the lattice QCD approach have been put forward <cit.>. However, it is not yet possible to study FFs in lattice QCD. At the current stage, quantitative information is mainly extracted from experimental data.
However, due to the limited amount of experimental data, the TMD PDFs and FFs are not yet well constrained. As a complementary tool, model calculations have been employed to compute different PDFs over the past decades <cit.>. These investigations offer quantitative insight into hadron structure and are therefore indispensable for phenomenological studies. The same goes for the FFs, and most of the models can be used to evaluate both PDFs and FFs. Below we give a brief, nonexhaustive summary of FF calculations.
Quite a few models can be categorized as spectator models <cit.>. Among them, the quark–diquark model is a simple one which provides the quark–baryon–diantiquark vertex, so that the baryon FFs can be easily evaluated. In <cit.>, the collinear baryon FFs were calculated at the leading twist using the quark–diquark model, while in <cit.>, the TMD FFs were further computed at the leading and subleading twists. To compute the meson FFs or the gluon FFs, we need an improved version which offers the vertex among the fragmenting parton, the hadron, and the spectator. In <cit.>, the Collins function was calculated for pions and kaons in this method. Recently, in <cit.>, an approach to calculate the leading twist gluon TMD FFs was presented. The chiral invariant model <cit.> investigates chiral symmetry and its spontaneous breaking with an effective Lagrangian of quarks, gluons, and Goldstone bosons; it can also be classified into the spectator model category. Utilizing this model, several authors <cit.> calculated the pion and kaon FFs, and in <cit.>, an extended version was developed to compute the vector meson FFs. Furthermore, several works <cit.> have evaluated the FFs of different hadrons using a parameterized quark–hadron coupling.
The Nambu–Jona–Lasinio (NJL) model originates from <cit.>, which developed an effective theory describing the quark–hadron interaction. It has been employed to evaluate PDFs of different hadrons <cit.>. Combined with the Feynman–Field model (also known as the quark–jet model) established in <cit.>, the NJL-jet model has been employed to calculate both collinear and TMD FFs of different hadrons <cit.>. Recent works have also computed the FFs of the gluon <cit.> and the charm quark <cit.> with this approach. The Feynman–Field model relates the total FF to the first rank FF, but it does not specify how to compute the first rank FF. Therefore, in principle, it can be hybridized with any other model that provides the first rank FF, extending the applicability of the corresponding model.
We conclude this section with a final remark on model calculations. All the above-mentioned models compute FFs employing effective Lagrangians of the partons and hadrons of interest. While these calculations offer quantitative insight into the hadronization mechanism, we should draw a clear line between conclusions that are model-dependent and those that are model-independent.
§ SUMMARY
There are a multitude of topics within the subject of FFs. In this review, we have confined ourselves to a limited scope with which we are familiar. First, we briefly summarized the derivation of the TMD factorization and the establishment of the QCD evolution equation at the leading twist level; the TMD factorization and the corresponding evolution at higher twists are still ongoing research topics. Second, we are particularly interested in the spin-related effects. With the spin degree of freedom taken into account, the interplay between the transverse momentum and the hadron/quark polarization presents highly intriguing phenomena that can be investigated in experiments. As a result, more TMD FFs need to be defined to fully describe the fragmentation process. In quantum field theory, TMD FFs are introduced in the decomposition of parton correlators, and we summarized the final results up to the twist-4 level for spin-0, spin-1/2, and spin-1 hadron production. Finally, although all the TMD FFs have clear definitions in terms of parton fields and hadron states, they are nonperturbative quantities that cannot be directly evaluated from quantum field theory. In contrast to TMD PDFs, FFs cannot be computed even in the lattice QCD approach; the quantitative investigation thus mainly concentrates on extractions from experimental measurements and on model calculations. We summarized several spin-related experiments conducted over the past decades and the corresponding phenomenological studies. In the last section, we also briefly presented several model calculations.
The study of TMD FFs is still a very active field, and many mysteries remain to be explored. The Electron-Ion Collider (EIC) and the Electron-Ion Collider in China (EicC) have been proposed to be built as the new high-energy colliders in the next generation. They will provide new experimental data for the quantitative study of TMD FFs and can significantly boost our understanding of the hadronization mechanism.
All authors contributed to the writing and proofreading of this review.
K.-B.C. is supported by the National Natural Science Foundation of China under grants no. 12005122 and no. 11947055, as well as by the Shandong Province Natural Science Foundation under grant no. ZR2020QA082. T.L. is supported in part by the National Natural Science Foundation of China under grants no. 12175117 and no. 20221017-1. Y.-K.S. is supported in part by the National Natural Science Foundation of China under grant no. 11505080, and by the Shandong Province Natural Science Foundation under grant no. ZR2018JL006. S.-Y.W. is supported by the Taishan Fellowship of Shandong Province for junior scientists.
Data available on request from the authors.
The authors declare no conflict of interest.
References
[Fritzsch et al.(1973)Fritzsch, Gell-Mann, and
Leutwyler]Fritzsch:1973pi
Fritzsch, H.; Gell-Mann, M.; Leutwyler, H.
Advantages of the Color Octet Gluon Picture.
Phys. Lett. B 1973, 47, 365–368.
<https://doi.org/10.1016/0370-2693(73)90625-4>.
[Yang and Mills(1954)]Yang:1954ek
Yang, C.N.; Mills, R.L.
Conservation of Isotopic Spin and Isotopic Gauge Invariance.
Phys. Rev. 1954, 96, 191–195.
<https://doi.org/10.1103/PhysRev.96.191>.
[Metz and Vossen(2016)]Metz:2016swz
Metz, A.; Vossen, A.
Parton Fragmentation Functions.
Prog. Part. Nucl. Phys. 2016, 91, 136–202.
<https://doi.org/10.1016/j.ppnp.2016.08.003>.
[Bjorken and Paschos(1969)]Bjorken:1969ja
Bjorken, J.D.; Paschos, E.A.
Inelastic Electron Proton and gamma Proton Scattering, and the
Structure of the Nucleon.
Phys. Rev. 1969, 185, 1975–1982.
<https://doi.org/10.1103/PhysRev.185.1975>.
[Feynman(1973)]Feynman:1973xc
Feynman, R.P.
Photon-Hadron Interactions. Basic Books, 1973.
[Berman et al.(1971)Berman, Bjorken, and Kogut]Berman:1971xz
Berman, S.M.; Bjorken, J.D.; Kogut, J.B.
Inclusive Processes at High Transverse Momentum.
Phys. Rev. D 1971, 4, 3388.
<https://doi.org/10.1103/PhysRevD.4.3388>.
[Mueller(1978)]Mueller:1978xu
Mueller, A.H.
Cut Vertices and their Renormalization: A Generalization of the
Wilson Expansion.
Phys. Rev. D 1978, 18, 3705.
<https://doi.org/10.1103/PhysRevD.18.3705>.
[Collins and Soper(1982)]Collins:1981uw
Collins, J.C.; Soper, D.E.
Parton Distribution and Decay Functions.
Nucl. Phys. B 1982, 194, 445–492.
<https://doi.org/10.1016/0550-3213(82)90021-9>.
[Collins et al.(1989)Collins, Soper, and Sterman]Collins:1989gx
Collins, J.C.; Soper, D.E.; Sterman, G.F.
Factorization of Hard Processes in QCD.
Adv. Ser. Direct. High Energy Phys. 1989, 5, 1–91.
<https://doi.org/10.1142/9789814503266_0001>.
[Collins(2013)]Collins:2011zzd
Collins, J.
Foundations of Perturbative QCD; Cambridge
University Press: Cambridge, UK, 2013; Volume 32.
[Collins and Soper(1981)]Collins:1981uk
Collins, J.C.; Soper, D.E.
Back-To-Back Jets in QCD.
Nucl. Phys. B 1981, 193, 381;
Erratum in Nucl. Phys. B 1983, 213, 545.
<https://doi.org/10.1016/0550-3213(81)90339-4>.
[Aybat and Rogers(2011)]Aybat:2011zv
Aybat, S.M.; Rogers, T.C.
TMD Parton Distribution and Fragmentation Functions with QCD
Evolution.
Phys. Rev. D 2011, 83, 114042.
<https://doi.org/10.1103/PhysRevD.83.114042>.
[Kang et al.(2020)Kang, Shao, and Zhao]Kang:2020yqw
Kang, Z.B.; Shao, D.Y.; Zhao, F.
QCD resummation on single hadron transverse momentum distribution
with the thrust axis.
JHEP 2020, 12, 127.
<https://doi.org/10.1007/JHEP12(2020)127>.
[Boglione and Simonelli(2021a)]Boglione:2020auc
Boglione, M.; Simonelli, A.
Factorization of e^+e^- → H X cross section, differential in
z_h, P_T and thrust, in the 2-jet limit.
JHEP 2021, 02, 076.
<https://doi.org/10.1007/JHEP02(2021)076>.
[Boglione and Simonelli(2021b)]Boglione:2020cwn
Boglione, M.; Simonelli, A.
Universality-breaking effects in e^+e^- hadronic production
processes.
Eur. Phys. J. C 2021, 81, 96.
<https://doi.org/10.1140/epjc/s10052-020-08821-y>.
[Makris et al.(2021)Makris, Ringer, and Waalewijn]Makris:2020ltr
Makris, Y.; Ringer, F.; Waalewijn, W.J.
Joint thrust and TMD resummation in electron-positron and
electron-proton collisions.
JHEP 2021, 02, 070.
<https://doi.org/10.1007/JHEP02(2021)070>.
[Collins(1993)]Collins:1992kk
Collins, J.C.
Fragmentation of transversely polarized quarks probed in transverse
momentum distributions.
Nucl. Phys. B 1993, 396, 161–182.
<https://doi.org/10.1016/0550-3213(93)90262-N>.
[Boer et al.(2003)Boer, Mulders, and Pijlman]Boer:2003cm
Boer, D.; Mulders, P.J.; Pijlman, F.
Universality of T odd effects in single spin and azimuthal
asymmetries.
Nucl. Phys. B 2003, 667, 201–241.
<https://doi.org/10.1016/S0550-3213(03)00527-3>.
[Guan et al.(2019)Guan et al.]Belle:2018ttu
Guan, Y.; Vossen, A.; Adachi, I.; Adamczyk, K.; Ahn, J.K.; Aihara, H.; Sanuki, T.; Chilikin, K.; Cho, K.; Choi, S.-K.; et al.
Observation of Transverse Λ/Λ̅ Hyperon
Polarization in e^+e^- Annihilation at Belle.
Phys. Rev. Lett. 2019, 122, 042001.
<https://doi.org/10.1103/PhysRevLett.122.042001>.
[Matevosyan et al.(2018)Matevosyan, Kotzinian, and
Thomas]Matevosyan:2018jht
Matevosyan, H.H.; Kotzinian, A.; Thomas, A.W.
Semi-inclusive back-to-back production of a hadron pair and a single
hadron in e^+e^- annihilation.
JHEP 2018, 10, 008.
<https://doi.org/10.1007/JHEP10(2018)008>.
[Gamberg et al.(2019)Gamberg, Kang, Pitonyak, Schlegel, and
Yoshida]Gamberg:2018fwy
Gamberg, L.; Kang, Z.B.; Pitonyak, D.; Schlegel, M.; Yoshida, S.
Polarized hyperon production in single-inclusive electron-positron
annihilation at next-to-leading order.
JHEP 2019, 01, 111.
<https://doi.org/10.1007/JHEP01(2019)111>.
[Anselmino et al.(2019)Anselmino, Kishore, and
Mukherjee]Anselmino:2019cqd
Anselmino, M.; Kishore, R.; Mukherjee, A.
Polarizing fragmentation function and the Λ polarization in e^+e^- processes.
Phys. Rev. D 2019, 100, 014029.
<https://doi.org/10.1103/PhysRevD.100.014029>.
[Anselmino et al.(2020)Anselmino, Mukherjee, and
Vossen]Anselmino:2020vlp
Anselmino, M.; Mukherjee, A.; Vossen, A.
Transverse spin effects in hard semi-inclusive collisions.
Prog. Part. Nucl. Phys. 2020, 114, 103806.
<https://doi.org/10.1016/j.ppnp.2020.103806>.
[D'Alesio et al.(2020)D'Alesio, Murgia, and
Zaccheddu]DAlesio:2020wjq
D'Alesio, U.; Murgia, F.; Zaccheddu, M.
First extraction of the Λ polarizing fragmentation function
from Belle e^+e^- data.
Phys. Rev. D 2020, 102, 054001.
<https://doi.org/10.1103/PhysRevD.102.054001>.
[Callos et al.(2020)Callos, Kang, and Terry]Callos:2020qtu
Callos, D.; Kang, Z.B.; Terry, J.
Extracting the transverse momentum dependent polarizing
fragmentation functions.
Phys. Rev. D 2020, 102, 096007.
<https://doi.org/10.1103/PhysRevD.102.096007>.
[Kang et al.(2020)Kang, Lee, and Zhao]Kang:2020xyq
Kang, Z.B.; Lee, K.; Zhao, F.
Polarized jet fragmentation functions.
Phys. Lett. B 2020, 809, 135756.
<https://doi.org/10.1016/j.physletb.2020.135756>.
[Li et al.(2021)Li, Wang, Yang, and Lu]Li:2020oto
Li, H.; Wang, X.; Yang, Y.; Lu, Z.
The transverse polarization of Λ hyperons in
e^+e^-→Λ ^↑ h X processes within TMD
factorization.
Eur. Phys. J. C 2021, 81, 289.
<https://doi.org/10.1140/epjc/s10052-021-09064-1>.
[Chen et al.(2021)Chen, Liang, Pan, Song, and Wei]Chen:2021hdn
Chen, K.B.; Liang, Z.T.; Pan, Y.L.; Song, Y.K.; Wei, S.Y.
Isospin Symmetry of Fragmentation Functions.
Phys. Lett. B 2021, 816, 136217.
<https://doi.org/10.1016/j.physletb.2021.136217>.
[Gamberg et al.(2021)Gamberg, Kang, Shao, Terry, and
Zhao]Gamberg:2021iat
Gamberg, L.; Kang, Z.B.; Shao, D.Y.; Terry, J.; Zhao, F.
Transverse Λ polarization in e^+e^- collisions.
Phys. Lett. B 2021, 818, 136371.
<https://doi.org/10.1016/j.physletb.2021.136371>.
[D'Alesio et al.(2021)D'Alesio, Murgia, and
Zaccheddu]DAlesio:2021dcx
D'Alesio, U.; Murgia, F.; Zaccheddu, M.
General helicity formalism for two-hadron production in e^+e^-
annihilation within a TMD approach.
JHEP 2021, 10, 078.
<https://doi.org/10.1007/JHEP10(2021)078>.
[D'Alesio et al.(2022)D'Alesio, Gamberg, Murgia, and
Zaccheddu]DAlesio:2022brl
D'Alesio, U.; Gamberg, L.; Murgia, F.; Zaccheddu, M.
Transverse Λ polarization in e^+e^- processes within a TMD
factorization approach and the polarizing fragmentation function.
JHEP 2022, 12, 074.
<https://doi.org/10.1007/JHEP12(2022)074>.
[Boglione et al.(2022)Boglione, Gonzalez-Hernandez, and
Simonelli]Boglione:2022nzq
Boglione, M.; Gonzalez-Hernandez, J.O.; Simonelli, A.
Transverse momentum dependent fragmentation functions from recent
BELLE data.
Phys. Rev. D 2022, 106, 074024.
<https://doi.org/10.1103/PhysRevD.106.074024>.
[Ellis et al.(1982)Ellis, Furmanski, and Petronzio]Ellis:1982wd
Ellis, R.K.; Furmanski, W.; Petronzio, R.
Power Corrections to the Parton Model in QCD.
Nucl. Phys. B 1982, 207, 1–14.
<https://doi.org/10.1016/0550-3213(82)90132-8>.
[Ellis et al.(1983)Ellis, Furmanski, and Petronzio]Ellis:1982cd
Ellis, R.K.; Furmanski, W.; Petronzio, R.
Unraveling Higher Twists.
Nucl. Phys. B 1983, 212, 29.
<https://doi.org/10.1016/0550-3213(83)90597-7>.
[Qiu(1990)]Qiu:1988dn
Qiu, J.W.
Twist Four Contributions to the Parton Structure Functions.
Phys. Rev. D 1990, 42, 30–44.
<https://doi.org/10.1103/PhysRevD.42.30>.
[Qiu and Sterman(1991a)]Qiu:1990xxa
Qiu, J.W.; Sterman, G.F.
Power corrections in hadronic scattering. 1. Leading 1/Q**2
corrections to the Drell-Yan cross-section.
Nucl. Phys. B 1991, 353, 105–136.
<https://doi.org/10.1016/0550-3213(91)90503-P>.
[Qiu and Sterman(1991b)]Qiu:1990xy
Qiu, J.W.; Sterman, G.F.
Power corrections to hadronic scattering. 2. Factorization.
Nucl. Phys. B 1991, 353, 137–164.
<https://doi.org/10.1016/0550-3213(91)90504-Q>.
[Balitsky and Braun(1991)]Balitsky:1990ck
Balitsky, I.I.; Braun, V.M.
The Nonlocal operator expansion for inclusive particle production in
e+ e- annihilation.
Nucl. Phys. B 1991, 361, 93–140.
<https://doi.org/10.1016/0550-3213(91)90618-8>.
[Levelt and Mulders(1994a)]Levelt:1994np
Levelt, J.; Mulders, P.J.
Time reversal odd fragmentation functions in semiinclusive
scattering of polarized leptons from unpolarized hadrons.
Phys. Lett. B 1994, 338, 357–362.
<https://doi.org/10.1016/0370-2693(94)91391-9>.
[Levelt and Mulders(1994b)]Levelt:1993ac
Levelt, J.; Mulders, P.J.
Quark correlation functions in deep inelastic semiinclusive
processes.
Phys. Rev. D 1994, 49, 96–113.
<https://doi.org/10.1103/PhysRevD.49.96>.
[Kotzinian(1995)]Kotzinian:1994dv
Kotzinian, A.
New quark distributions and semiinclusive electroproduction on the
polarized nucleons.
Nucl. Phys. B 1995, 441, 234–248.
<https://doi.org/10.1016/0550-3213(95)00098-D>.
[Mulders and Tangerman(1996)]Mulders:1995dh
Mulders, P.J.; Tangerman, R.D.
The Complete tree level result up to order 1/Q for polarized deep
inelastic leptoproduction.
Nucl. Phys. B 1996, 461, 197–237;
Erratum in Nucl. Phys. B 1997, 484, 538–540.
<https://doi.org/10.1016/0550-3213(95)00632-X>.
[Boer et al.(1997)Boer, Jakob, and Mulders]Boer:1997mf
Boer, D.; Jakob, R.; Mulders, P.J.
Asymmetries in polarized hadron production in e+ e- annihilation up
to order 1/Q.
Nucl. Phys. B 1997, 504, 345–380.
<https://doi.org/10.1016/S0550-3213(97)00456-2>.
[Kotzinian and Mulders(1997)]Kotzinian:1997wt
Kotzinian, A.M.; Mulders, P.J.
Probing transverse quark polarization via azimuthal asymmetries in
leptoproduction.
Phys. Lett. B 1997, 406, 373–380.
<https://doi.org/10.1016/S0370-2693(97)00708-9>.
[Boer and Mulders(1998)]Boer:1997nt
Boer, D.; Mulders, P.J.
Time reversal odd distribution functions in leptoproduction.
Phys. Rev. D 1998, 57, 5780–5786.
<https://doi.org/10.1103/PhysRevD.57.5780>.
[Boer et al.(1998)Boer, Jakob, and Mulders]Boer:1997qn
Boer, D.; Jakob, R.; Mulders, P.J.
Leading asymmetries in two hadron production in e+ e- annihilation
at the Z pole.
Phys. Lett. B 1998, 424, 143–151.
<https://doi.org/10.1016/S0370-2693(98)00136-1>.
[Bacchetta and Mulders(2000)]Bacchetta:2000jk
Bacchetta, A.; Mulders, P.J.
Deep inelastic leptoproduction of spin-one hadrons.
Phys. Rev. D 2000, 62, 114004.
<https://doi.org/10.1103/PhysRevD.62.114004>.
[Boer et al.(2000)Boer, Jakob, and Mulders]Boer:1999uu
Boer, D.; Jakob, R.; Mulders, P.J.
Angular dependences in electroweak semiinclusive leptoproduction.
Nucl. Phys. B 2000, 564, 471–485.
<https://doi.org/10.1016/S0550-3213(99)00586-6>.
[Bacchetta et al.(2004)Bacchetta, Mulders, and
Pijlman]Bacchetta:2004zf
Bacchetta, A.; Mulders, P.J.; Pijlman, F.
New observables in longitudinal single-spin asymmetries in
semi-inclusive DIS.
Phys. Lett. B 2004, 595, 309–317.
<https://doi.org/10.1016/j.physletb.2004.06.052>.
[Bacchetta et al.(2007)Bacchetta, Diehl, Goeke, Metz, Mulders, and
Schlegel]Bacchetta:2006tn
Bacchetta, A.; Diehl, M.; Goeke, K.; Metz, A.; Mulders, P.J.; Schlegel, M.
Semi-inclusive deep inelastic scattering at small transverse
momentum.
JHEP 2007, 02, 093.
<https://doi.org/10.1088/1126-6708/2007/02/093>.
[Boer(2009)]Boer:2008fr
Boer, D.
Angular dependences in inclusive two-hadron production at BELLE.
Nucl. Phys. B 2009, 806, 23–67.
<https://doi.org/10.1016/j.nuclphysb.2008.06.011>.
[Eguchi et al.(2006)Eguchi, Koike, and Tanaka]Eguchi:2006qz
Eguchi, H.; Koike, Y.; Tanaka, K.
Single Transverse Spin Asymmetry for Large-p(T) Pion Production in
Semi-Inclusive Deep Inelastic Scattering.
Nucl. Phys. B 2006, 752, 1–17.
<https://doi.org/10.1016/j.nuclphysb.2006.05.036>.
[Eguchi et al.(2007)Eguchi, Koike, and Tanaka]Eguchi:2006mc
Eguchi, H.; Koike, Y.; Tanaka, K.
Twist-3 Formalism for Single Transverse Spin Asymmetry Reexamined:
Semi-Inclusive Deep Inelastic Scattering.
Nucl. Phys. B 2007, 763, 198–227.
<https://doi.org/10.1016/j.nuclphysb.2006.11.016>.
[Koike and Tanaka(2007)]Koike:2006qv
Koike, Y.; Tanaka, K.
Master Formula for Twist-3 Soft-Gluon-Pole Mechanism to Single
Transverse-Spin Asymmetry.
Phys. Lett. B 2007, 646, 232–241;
Erratum in Phys. Lett. B 2008, 668, 458–459;
<https://doi.org/10.1016/j.physletb.2007.01.044>.
[Kanazawa and Koike(2013)]Kanazawa:2013uia
Kanazawa, K.; Koike, Y.
Contribution of twist-3 fragmentation function to single
transverse-spin asymmetry in semi-inclusive deep inelastic scattering.
Phys. Rev. D 2013, 88, 074022.
<https://doi.org/10.1103/PhysRevD.88.074022>.
[Pitonyak et al.(2014)Pitonyak, Schlegel, and
Metz]Pitonyak:2013dsu
Pitonyak, D.; Schlegel, M.; Metz, A.
Polarized hadron pair production from electron-positron
annihilation.
Phys. Rev. D 2014, 89, 054032.
<https://doi.org/10.1103/PhysRevD.89.054032>.
[Yang and Lu(2017)]Yang:2016qsf
Yang, Y.; Lu, Z.
Polarized hyperon production in semi-inclusive
deep inelastic scattering off an unpolarized nucleon target.
Phys. Rev. D 2017, 95, 074026.
<https://doi.org/10.1103/PhysRevD.95.074026>.
[Liang and Wang(2007)]Liang:2006wp
Liang, Z.T.; Wang, X.N.
Azimuthal and single spin asymmetry in deep-inelastic lepton-nucleon
scattering.
Phys. Rev. D 2007, 75, 094002.
<https://doi.org/10.1103/PhysRevD.75.094002>.
[Liang et al.(2008)Liang, Wang, and Zhou]Liang:2008vz
Liang, Z.T.; Wang, X.N.; Zhou, J.
The Transverse-momentum-dependent Parton Distribution Function and
Jet Transport in Medium.
Phys. Rev. D 2008, 77, 125010.
<https://doi.org/10.1103/PhysRevD.77.125010>.
[Gao et al.(2010)Gao, Liang, and Wang]Gao:2010mj
Gao, J.H.; Liang, Z.T.; Wang, X.N.
Nuclear dependence of azimuthal asymmetry in semi-inclusive deep
inelastic scattering.
Phys. Rev. C 2010, 81, 065211.
<https://doi.org/10.1103/PhysRevC.81.065211>.
[Song et al.(2011)Song, Gao, Liang, and Wang]Song:2010pf
Song, Y.k.; Gao, J.h.; Liang, Z.t.; Wang, X.N.
Twist-4 contributions to the azimuthal asymmetry in SIDIS.
Phys. Rev. D 2011, 83, 054010.
<https://doi.org/10.1103/PhysRevD.83.054010>.
[Song et al.(2014)Song, Gao, Liang, and Wang]Song:2013sja
Song, Y.k.; Gao, J.h.; Liang, Z.t.; Wang, X.N.
Azimuthal asymmetries in semi-inclusive DIS with polarized beam
and/or target and their nuclear dependences.
Phys. Rev. D 2014, 89, 014005.
<https://doi.org/10.1103/PhysRevD.89.014005>.
[Wei et al.(2014)Wei, Song, and Liang]Wei:2013csa
Wei, S.y.; Song, Y.k.; Liang, Z.t.
Higher twist contribution to fragmentation function in inclusive
hadron production in e^+e^- annihilation.
Phys. Rev. D 2014, 89, 014024.
<https://doi.org/10.1103/PhysRevD.89.014024>.
[Wei et al.(2015)Wei, Chen, Song, and Liang]Wei:2014pma
Wei, S.Y.; Chen, K.b.; Song, Y.k.; Liang, Z.t.
Leading and higher twist contributions in semi-inclusive
e^+e^- annihilation at high energies.
Phys. Rev. D 2015, 91, 034015.
<https://doi.org/10.1103/PhysRevD.91.034015>.
[Chen et al.(2016)Chen, Yang, Wei, and Liang]Chen:2016moq
Chen, K.b.; Yang, W.h.; Wei, S.y.; Liang, Z.t.
Tensor polarization dependent fragmentation functions and
e^+e^- → V π X at high energies.
Phys. Rev. D 2016, 94, 034003.
<https://doi.org/10.1103/PhysRevD.94.034003>.
[Wei et al.(2017)Wei, Song, Chen, and Liang]Wei:2016far
Wei, S.y.; Song, Y.k.; Chen, K.b.; Liang, Z.t.
Twist-4 contributions to semi-inclusive deeply inelastic scatterings
with polarized beam and target.
Phys. Rev. D 2017, 95, 074017.
<https://doi.org/10.1103/PhysRevD.95.074017>.
[Vladimirov et al.(2022)Vladimirov, Moos, and
Scimemi]Vladimirov:2021hdn
Vladimirov, A.; Moos, V.; Scimemi, I.
Transverse momentum dependent operator expansion at next-to-leading
power.
JHEP 2022, 01, 110.
<https://doi.org/10.1007/JHEP01(2022)110>.
[Rodini and Vladimirov(2022a)]Rodini:2022wki
Rodini, S.; Vladimirov, A.
Definition and evolution of transverse momentum dependent
distribution of twist-three.
JHEP 2022, 08, 031;
Erratum in JHEP 2022, 12, 048.
<https://doi.org/10.1007/JHEP08(2022)031>.
[Rodini and Vladimirov(2022b)]Rodini:2022wic
Rodini, S.; Vladimirov, A.
Factorization for Quasi-TMD Distributions of Sub-Leading Power. 2022.
[Ebert et al.(2022)Ebert, Gao, and Stewart]Ebert:2021jhy
Ebert, M.A.; Gao, A.; Stewart, I.W.
Factorization for azimuthal asymmetries in SIDIS at next-to-leading
power.
JHEP 2022, 06, 007.
<https://doi.org/10.1007/JHEP06(2022)007>.
[Balitsky and Tarasov(2017)]Balitsky:2017flc
Balitsky, I.; Tarasov, A.
Higher-twist corrections to gluon TMD factorization.
JHEP 2017, 07, 095.
<https://doi.org/10.1007/JHEP07(2017)095>.
[Balitsky and Tarasov(2018)]Balitsky:2017gis
Balitsky, I.; Tarasov, A.
Power corrections to TMD factorization for Z-boson production.
JHEP 2018, 05, 150.
<https://doi.org/10.1007/JHEP05(2018)150>.
[Balitsky(2021a)]Balitsky:2020jzt
Balitsky, I.
Gauge-invariant TMD factorization for Drell-Yan hadronic tensor at
small x.
JHEP 2021, 05, 046.
<https://doi.org/10.1007/JHEP05(2021)046>.
[Balitsky(2021b)]Balitsky:2021fer
Balitsky, I.
Drell-Yan angular lepton distributions at small x from TMD
factorization.
JHEP 2021, 09, 022.
<https://doi.org/10.1007/JHEP09(2021)022>.
[Gamberg et al.(2022)Gamberg, Kang, Shao, Terry, and
Zhao]Gamberg:2022lju
Gamberg, L.; Kang, Z.B.; Shao, D.Y.; Terry, J.; Zhao, F.
Transverse-momentum-dependent factorization at next-to-leading
power. 2022.
[Collins and Soper(1982)]Collins:1981va
Collins, J.C.; Soper, D.E.
Back-To-Back Jets: Fourier Transform from B to K-Transverse.
Nucl. Phys. B 1982, 197, 446–476.
<https://doi.org/10.1016/0550-3213(82)90453-9>.
[Collins et al.(1985)Collins, Soper, and Sterman]Collins:1984kg
Collins, J.C.; Soper, D.E.; Sterman, G.F.
Transverse Momentum Distribution in Drell-Yan Pair and W and Z Boson
Production.
Nucl. Phys. B 1985, 250, 199–224.
<https://doi.org/10.1016/0550-3213(85)90479-1>.
[Ji et al.(2005)Ji, Ma, and Yuan]Ji:2004wu
Ji, X.D.; Ma, J.P.; Yuan, F.
QCD factorization for semi-inclusive deep-inelastic scattering at
low transverse momentum.
Phys. Rev. D 2005, 71, 034005.
<https://doi.org/10.1103/PhysRevD.71.034005>.
[Ji et al.(2004)Ji, Ma, and Yuan]Ji:2004xq
Ji, X.D.; Ma, J.P.; Yuan, F.
QCD factorization for spin-dependent cross sections in DIS and
Drell-Yan processes at low transverse momentum.
Phys. Lett. B 2004, 597, 299–308.
<https://doi.org/10.1016/j.physletb.2004.07.026>.
[Sun et al.(2014)Sun, Yuan, and Yuan]Sun:2014gfa
Sun, P.; Yuan, C.P.; Yuan, F.
Soft Gluon Resummations in Dijet Azimuthal Angular Correlations in
Hadronic Collisions.
Phys. Rev. Lett. 2014, 113, 232001.
<https://doi.org/10.1103/PhysRevLett.113.232001>.
[Sun et al.(2015)Sun, Yuan, and Yuan]Sun:2015doa
Sun, P.; Yuan, C.P.; Yuan, F.
Transverse Momentum Resummation for Dijet Correlation in Hadronic
Collisions.
Phys. Rev. D 2015, 92, 094007.
<https://doi.org/10.1103/PhysRevD.92.094007>.
[Becher and Neubert(2011)]Becher:2010tm
Becher, T.; Neubert, M.
Drell-Yan Production at Small q_T, Transverse Parton Distributions
and the Collinear Anomaly.
Eur. Phys. J. C 2011, 71, 1665.
<https://doi.org/10.1140/epjc/s10052-011-1665-7>.
[Echevarria et al.(2012)Echevarria, Idilbi, and
Scimemi]Echevarria:2011epo
Echevarria, M.G.; Idilbi, A.; Scimemi, I.
Factorization Theorem For Drell-Yan At Low q_T And Transverse
Momentum Distributions On-The-Light-Cone.
JHEP 2012, 7, 002.
<https://doi.org/10.1007/JHEP07(2012)002>.
[Chiu et al.(2012)Chiu, Jain, Neill, and Rothstein]Chiu:2012ir
Chiu, J.Y.; Jain, A.; Neill, D.; Rothstein, I.Z.
A Formalism for the Systematic Treatment of Rapidity Logarithms in
Quantum Field Theory.
JHEP 2012, 05, 084.
<https://doi.org/10.1007/JHEP05(2012)084>.
[Li et al.(2020)Li, Neill, and Zhu]Li:2016axz
Li, Y.; Neill, D.; Zhu, H.X.
An exponential regulator for rapidity divergences.
Nucl. Phys. B 2020, 960, 115193.
<https://doi.org/10.1016/j.nuclphysb.2020.115193>.
[Kang et al.(2017)Kang, Liu, Ringer, and Xing]Kang:2017glf
Kang, Z.B.; Liu, X.; Ringer, F.; Xing, H.
The transverse momentum distribution of hadrons within jets.
JHEP 2017, 11, 068.
<https://doi.org/10.1007/JHEP11(2017)068>.
[Sterman(1978)]Sterman:1978bi
Sterman, G.F.
Mass Divergences in Annihilation Processes. 1. Origin and Nature of
Divergences in Cut Vacuum Polarization Diagrams.
Phys. Rev. D 1978, 17, 2773.
<https://doi.org/10.1103/PhysRevD.17.2773>.
[Libby and Sterman(1978)]Libby:1978bx
Libby, S.B.; Sterman, G.F.
Mass Divergences in Two Particle Inelastic Scattering.
Phys. Rev. D 1978, 18, 4737.
<https://doi.org/10.1103/PhysRevD.18.4737>.
[Davies et al.(1984)Davies, Webber, and Stirling]Davies:1984sp
Davies, C.T.H.; Webber, B.R.; Stirling, W.J.
Drell-Yan Cross-Sections at Small Transverse Momentum. Nucl. Phys. B 1985, 256, 413.
<https://doi.org/10.1016/0550-3213(85)90402-X>.
[Landry et al.(2003)Landry, Brock, Nadolsky, and
Yuan]Landry:2002ix
Landry, F.; Brock, R.; Nadolsky, P.M.; Yuan, C.P.
Tevatron Run-1 Z boson data and Collins-Soper-Sterman resummation
formalism.
Phys. Rev. D 2003, 67, 073016.
<https://doi.org/10.1103/PhysRevD.67.073016>.
[Guzzi et al.(2014)Guzzi, Nadolsky, and Wang]Guzzi:2013aja
Guzzi, M.; Nadolsky, P.M.; Wang, B.
Nonperturbative contributions to a resummed leptonic angular
distribution in inclusive neutral vector boson production.
Phys. Rev. D 2014, 90, 014030.
<https://doi.org/10.1103/PhysRevD.90.014030>.
[Sun et al.(2018)Sun, Isaacson, Yuan, and Yuan]Sun:2014wpa
Sun, P.; Isaacson, J.; Yuan, C.P.; Yuan, F.
Nonperturbative functions for SIDIS and Drell–Yan
processes.
Int. J. Mod. Phys. A 2018, 33, 1841006.
<https://doi.org/10.1142/S0217751X18410063>.
[Ladinsky and Yuan(1994)]Ladinsky:1993zn
Ladinsky, G.A.; Yuan, C.P.
The Nonperturbative regime in QCD resummation for gauge boson
production at hadron colliders.
Phys. Rev. D 1994, 50, R4239.
<https://doi.org/10.1103/PhysRevD.50.R4239>.
[Sun and Yuan(2013)]Sun:2013hua
Sun, P.; Yuan, F.
Transverse momentum dependent evolution: Matching semi-inclusive
deep inelastic scattering processes to Drell-Yan and W/Z boson production.
Phys. Rev. D 2013, 88, 114012.
<https://doi.org/10.1103/PhysRevD.88.114012>.
[Aidala et al.(2014)Aidala, Field, Gamberg, and
Rogers]Aidala:2014hva
Aidala, C.A.; Field, B.; Gamberg, L.P.; Rogers, T.C.
Limits on transverse momentum dependent evolution from
semi-inclusive deep inelastic scattering at moderate Q.
Phys. Rev. D 2014, 89, 094002.
<https://doi.org/10.1103/PhysRevD.89.094002>.
[Echevarria et al.(2014)Echevarria, Idilbi, Kang, and
Vitev]Echevarria:2014xaa
Echevarria, M.G.; Idilbi, A.; Kang, Z.B.; Vitev, I.
QCD Evolution of the Sivers Asymmetry.
Phys. Rev. D 2014, 89, 074013.
<https://doi.org/10.1103/PhysRevD.89.074013>.
[Kang et al.(2011)Kang, Xiao, and Yuan]Kang:2011mr
Kang, Z.B.; Xiao, B.W.; Yuan, F.
QCD Resummation for Single Spin Asymmetries.
Phys. Rev. Lett. 2011, 107, 152002.
<https://doi.org/10.1103/PhysRevLett.107.152002>.
[Collins and Rogers(2015)]Collins:2014jpa
Collins, J.; Rogers, T.
Understanding the large-distance behavior of
transverse-momentum-dependent parton densities and the Collins-Soper
evolution kernel.
Phys. Rev. D 2015, 91, 074020.
<https://doi.org/10.1103/PhysRevD.91.074020>.
[Qiu and Zhang(2001)]Qiu:2000hf
Qiu, J.W.; Zhang, X.F.
Role of the nonperturbative input in QCD resummed Drell-Yan Q_T
distributions.
Phys. Rev. D 2001, 63, 114011.
<https://doi.org/10.1103/PhysRevD.63.114011>.
[Kang et al.(2016)Kang, Prokudin, Sun, and Yuan]Kang:2015msa
Kang, Z.B.; Prokudin, A.; Sun, P.; Yuan, F.
Extraction of Quark Transversity Distribution and Collins
Fragmentation Functions with QCD Evolution.
Phys. Rev. D 2016, 93, 014009.
<https://doi.org/10.1103/PhysRevD.93.014009>.
[Sivers(1990)]Sivers:1989cc
Sivers, D.W.
Single Spin Production Asymmetries from the Hard Scattering of
Point-Like Constituents.
Phys. Rev. D 1990, 41, 83.
<https://doi.org/10.1103/PhysRevD.41.83>.
[Sivers(1991)]Sivers:1990fh
Sivers, D.W.
Hard scattering scaling laws for single spin production
asymmetries.
Phys. Rev. D 1991, 43, 261–263.
<https://doi.org/10.1103/PhysRevD.43.261>.
[Song(1967)]Song:1967
Song, H.
Spin-3/2 Polarization in Production of N^* by Neutrinos.
Phys. Rev. 1967, 162, 1615.
<https://doi.org/10.1103/PhysRev.162.1615>.
[Zhao et al.(2022)Zhao, Zhang, Liang, Liu, and Zhou]Zhao:2022lbw
Zhao, J.; Zhang, Z.; Liang, Z.T.; Liu, T.; Zhou, Y.J.
Inclusive and semi-inclusive production of spin-3/2 hadrons in e+e-
annihilation.
Phys. Rev. D 2022, 106, 094006.
<https://doi.org/10.1103/PhysRevD.106.094006>.
[Collins(2002)]Collins:2002kn
Collins, J.C.
Leading twist single transverse-spin asymmetries: Drell-Yan and deep
inelastic scattering.
Phys. Lett. B 2002, 536, 43–48.
<https://doi.org/10.1016/S0370-2693(02)01819-1>.
[Ji and Yuan(2002)]Ji:2002aa
Ji, X.d.; Yuan, F.
Parton distributions in light cone gauge: Where are the final state
interactions?
Phys. Lett. B 2002, 543, 66–72.
<https://doi.org/10.1016/S0370-2693(02)02384-5>.
[Belitsky et al.(2003)Belitsky, Ji, and Yuan]Belitsky:2002sm
Belitsky, A.V.; Ji, X.; Yuan, F.
Final state interactions and gauge invariant parton distributions.
Nucl. Phys. B 2003, 656, 165–198.
<https://doi.org/10.1016/S0550-3213(03)00121-4>.
[Chen et al.(2017)Chen, Yang, Zhou, and Liang]Chen:2016iey
Chen, K.b.; Yang, W.h.; Zhou, Y.j.; Liang, Z.t.
Energy dependence of hadron polarization in e^+e^-→ hX at high
energies.
Phys. Rev. D 2017, 95, 034009.
<https://doi.org/10.1103/PhysRevD.95.034009>.
[Chen et al.(2020)Chen, Liang, Song, and Wei]Chen:2020pty
Chen, K.b.; Liang, Z.t.; Song, Y.k.; Wei, S.y.
Spin alignment of vector mesons in high energy pp collisions.
Phys. Rev. D 2020, 102, 034001.
<https://doi.org/10.1103/PhysRevD.102.034001>.
[Abdallah et al.(2023)Abdallah et al.]STAR:2022fan
Abdallah, M.S.; Aboona, B.E.; Adam, J.; Adamczyk, L.; Adams, J.R.; Adkins, J.K.; Agakishiev, G.; Aggarwal, I.; Aggarwal, M.M.; Ahammed, Z.; et al.
Pattern of global spin alignment of ϕ and K^*0 mesons in heavy-ion collisions.
Nature 2023, 614, 244–248.
<https://doi.org/10.1038/s41586-022-05557-5>.
[Mulders and Rodrigues(2001)]Mulders:2000sh
Mulders, P.J.; Rodrigues, J.
Transverse momentum dependence in gluon distribution and
fragmentation functions.
Phys. Rev. D 2001, 63, 094021.
<https://doi.org/10.1103/PhysRevD.63.094021>.
[Zyla et al.(2020)Zyla et al.]ParticleDataGroup:2020ssz
Zyla, P.A.; et al.
Review of Particle Physics.
PTEP 2020, 2020, 083C01.
<https://doi.org/10.1093/ptep/ptaa104>.
[Augustin and Renard(1980)]Augustin:1978wf
Augustin, J.E.; Renard, F.M.
How to Measure Quark Helicities in e^+ e^- → Hadrons.
Nucl. Phys. B 1980, 162, 341.
<https://doi.org/10.1016/0550-3213(80)90269-2>.
[Gustafson and Hakkinen(1993)]Gustafson:1992iq
Gustafson, G.; Hakkinen, J.
Lambda polarization in e+ e- annihilation at the Z0 pole.
Phys. Lett. B 1993, 303, 350–354.
<https://doi.org/10.1016/0370-2693(93)91444-R>.
[Buskulic et al.(1996)Buskulic et al.]ALEPH:1996oew
Buskulic, D.; et al.
Measurement of Lambda polarization from Z decays.
Phys. Lett. B 1996, 374, 319–330.
<https://doi.org/10.1016/0370-2693(96)00300-0>.
[Ackerstaff et al.(1998)Ackerstaff et al.]OPAL:1997oem
Ackerstaff, K.; et al.
Polarization and forward—Backward asymmetry of Lambda baryons in
hadronic Z0 decays.
Eur. Phys. J. C 1998, 2, 49–59.
<https://doi.org/10.1007/s100520050123>.
[Kotzinian et al.(1998)Kotzinian, Bravar, and von
Harrach]Kotzinian:1997vd
Kotzinian, A.; Bravar, A.; von Harrach, D.
Lambda and anti-Lambda polarization in lepton induced processes.
Eur. Phys. J. C 1998, 2, 329–337.
<https://doi.org/10.1007/s100520050142>.
[de Florian et al.(1998)de Florian, Stratmann, and
Vogelsang]deFlorian:1997zj
de Florian, D.; Stratmann, M.; Vogelsang, W.
QCD analysis of unpolarized and polarized Lambda baryon production
in leading and next-to-leading order.
Phys. Rev. D 1998, 57, 5811–5824.
<https://doi.org/10.1103/PhysRevD.57.5811>.
[Liang and Boros(1997)]Liang:1997rt
Liang, Z.T.; Boros, C.
Hyperon polarization and single spin left-right asymmetry in
inclusive production processes at high-energies.
Phys. Rev. Lett. 1997, 79, 3608–3611.
<https://doi.org/10.1103/PhysRevLett.79.3608>.
[Boros and Liang(1998)]Boros:1998kc
Boros, C.; Liang, Z.t.
Spin content of Lambda and its longitudinal polarization in e^+ e^-
annihilation at high-energies.
Phys. Rev. D 1998, 57, 4491–4494.
<https://doi.org/10.1103/PhysRevD.57.4491>.
[Ma and Soffer(1999)]Ma:1998pd
Ma, B.Q.; Soffer, J.
Quark flavor separation in Lambda Baryon fragmentation.
Phys. Rev. Lett. 1999, 82, 2250–2253.
<https://doi.org/10.1103/PhysRevLett.82.2250>.
[Ma et al.(2000a)Ma, Schmidt, and Yang]Ma:1999gj
Ma, B.Q.; Schmidt, I.; Yang, J.J.
Flavor and spin structure of lambda baryon at large x.
Phys. Lett. B 2000, 477, 107–113.
<https://doi.org/10.1016/S0370-2693(00)00167-2>.
[Ma et al.(2000b)Ma, Schmidt, and Yang]Ma:1999wp
Ma, B.Q.; Schmidt, I.; Yang, J.J.
Quark structure of Lambda from Lambda polarization in Z decays.
Phys. Rev. D 2000, 61, 034017.
<https://doi.org/10.1103/PhysRevD.61.034017>.
[Liu and Liang(2000)]Liu:2000fi
Liu, C.x.; Liang, Z.t.
Spin structure and longitudinal polarization of hyperon in e^+ e^-
annihilation at high-energies.
Phys. Rev. D 2000, 62, 094001.
<https://doi.org/10.1103/PhysRevD.62.094001>.
[Jaffe(1996)]Jaffe:1996wp
Jaffe, R.L.
Polarized Λ's in the current fragmentation region.
Phys. Rev. D 1996, 54, R6581–R6585.
<https://doi.org/10.1103/PhysRevD.54.R6581>.
[de Florian et al.(1997)de Florian, Stratmann, and
Vogelsang]deFlorian:1997kt
de Florian, D.; Stratmann, M.; Vogelsang, W.
Polarized Lambda production at HERA.
In Proceedings of the Workshop on Physics with Polarized Protons at
Hera: 1st Meeting, 1997.
[Ashery and Lipkin(1999)]Ashery:1999am
Ashery, D.; Lipkin, H.J.
Expected polarization of Lambda particles produced in deep inelastic
polarized lepton scattering.
Phys. Lett. B 1999, 469, 263–269.
<https://doi.org/10.1016/S0370-2693(99)01229-0>.
[de Florian et al.(1998)de Florian, Stratmann, and
Vogelsang]deFlorian:1998ba
de Florian, D.; Stratmann, M.; Vogelsang, W.
Polarized Lambda baryon production in p p collisions.
Phys. Rev. Lett. 1998, 81, 530–533.
<https://doi.org/10.1103/PhysRevLett.81.530>.
[Jager et al.(2003)Jager, Schafer, Stratmann, and
Vogelsang]Jager:2002xm
Jager, B.; Schafer, A.; Stratmann, M.; Vogelsang, W.
Next-to-leading order QCD corrections to high p(T) pion production
in longitudinally polarized pp collisions.
Phys. Rev. D 2003, 67, 054005.
<https://doi.org/10.1103/PhysRevD.67.054005>.
[Xu et al.(2002)Xu, Liu, and Liang]Xu:2002hz
Xu, Q.h.; Liu, C.x.; Liang, Z.t.
Longitudinal polarization of hyperons in high p(T) jets in singly
polarized pp collisions at high-energies.
Phys. Rev. D 2002, 65, 114008.
<https://doi.org/10.1103/PhysRevD.65.114008>.
[Xu et al.(2006)Xu, Liang, and Sichtermann]Xu:2005ru
Xu, Q.h.; Liang, Z.t.; Sichtermann, E.
Anti-lambda polarization in high energy pp collisions with polarized
beam.
Phys. Rev. D 2006, 73, 077503.
<https://doi.org/10.1103/PhysRevD.73.077503>.
[Adeva et al.(1998)Adeva et al.]SpinMuon:1998eqa
Adeva, B.; et al.
Spin asymmetries A(1) and structure functions g1 of the proton and
the deuteron from polarized high-energy muon scattering.
Phys. Rev. D 1998, 58, 112001.
<https://doi.org/10.1103/PhysRevD.58.112001>.
[Abe et al.(1998)Abe et al.]E143:1998hbs
Abe, K.; et al.
Measurements of the proton and deuteron spin structure functions
g(1) and g(2).
Phys. Rev. D 1998, 58, 112003.
<https://doi.org/10.1103/PhysRevD.58.112003>.
[Airapetian et al.(1998)Airapetian et al.]HERMES:1998cbu
Airapetian, A.; et al.
Measurement of the proton spin structure function g1(p) with a pure
hydrogen target.
Phys. Lett. B 1998, 442, 484–492.
<https://doi.org/10.1016/S0370-2693(98)01341-0>.
[Adams et al.(2000)Adams et al.]E665:1999fso
Adams, M.R.; et al.
Lambda and anti-lambda polarization from deep inelastic muon
scattering.
Eur. Phys. J. C 2000, 17, 263–267.
<https://doi.org/10.1007/s100520000493>.
[Airapetian et al.(2001)Airapetian et al.]HERMES:1999buc
Airapetian, A.; et al.
Measurement of longitudinal spin transfer to Lambda hyperons in deep
inelastic lepton scattering.
Phys. Rev. D 2001, 64, 112005.
<https://doi.org/10.1103/PhysRevD.64.112005>.
[Airapetian et al.(2006)Airapetian et al.]HERMES:2006lro
Airapetian, A.; et al.
Longitudinal Spin Transfer to the Lambda Hyperon in Semi-Inclusive
Deep-Inelastic Scattering.
Phys. Rev. D 2006, 74, 072004.
<https://doi.org/10.1103/PhysRevD.74.072004>.
[Alekseev et al.(2009)Alekseev et al.]COMPASS:2009nhs
Alekseev, M.; et al.
Measurement of the Longitudinal Spin Transfer to Lambda and
Anti-Lambda Hyperons in Polarised Muon DIS.
Eur. Phys. J. C 2009, 64, 171–179.
<https://doi.org/10.1140/epjc/s10052-009-1143-7>.
[Ma et al.(2000)Ma, Schmidt, Soffer, and Yang]Ma:2000uu
Ma, B.Q.; Schmidt, I.; Soffer, J.; Yang, J.J.
Lambda, anti-Lambda polarization and spin transfer in lepton deep
inelastic scattering.
Eur. Phys. J. C 2000, 16, 657–664.
<https://doi.org/10.1007/s100520000447>.
[Zhou et al.(2009)Zhou, Chen, Liang, and Xu]Zhou:2009mx
Zhou, S.s.; Chen, Y.; Liang, Z.t.; Xu, Q.h.
Longitudinal polarization of hyperon and anti-hyperon in
semi-inclusive deep-inelastic scattering.
Phys. Rev. D 2009, 79, 094018.
<https://doi.org/10.1103/PhysRevD.79.094018>.
[Chi et al.(2014)Chi, Du, and Ma]Chi:2014xba
Chi, Y.; Du, X.; Ma, B.Q.
Nucleon strange ss̅ asymmetry to the Λ/Λ̅
fragmentation.
Phys. Rev. D 2014, 90, 074003.
<https://doi.org/10.1103/PhysRevD.90.074003>.
[Gluck et al.(2001)Gluck, Reya, Stratmann, and
Vogelsang]Gluck:2000dy
Gluck, M.; Reya, E.; Stratmann, M.; Vogelsang, W.
Models for the polarized parton distributions of the nucleon.
Phys. Rev. D 2001, 63, 094005.
<https://doi.org/10.1103/PhysRevD.63.094005>.
[Ma et al.(2001)Ma, Schmidt, and Yang]Ma:2000ip
Ma, B.Q.; Schmidt, I.; Yang, J.J.
Nucleon transversity distribution from azimuthal spin asymmetry in
pion electroproduction.
Phys. Rev. D 2001, 63, 037501.
<https://doi.org/10.1103/PhysRevD.63.037501>.
[Ma et al.(2000)Ma, Schmidt, Soffer, and Yang]Ma:2000cg
Ma, B.Q.; Schmidt, I.; Soffer, J.; Yang, J.J.
The Flavor and spin structure of hyperons from quark fragmentation.
Phys. Rev. D 2000, 62, 114009.
<https://doi.org/10.1103/PhysRevD.62.114009>.
[Yang(2001)]Yang:2001yda
Yang, J.J.
Quark fragmentation functions in a diquark model for Lambda
production.
Phys. Rev. D 2001, 64, 074010.
<https://doi.org/10.1103/PhysRevD.64.074010>.
[Leader et al.(2002)Leader, Sidorov, and Stamenov]Leader:2001kh
Leader, E.; Sidorov, A.V.; Stamenov, D.B.
A New evaluation of polarized parton densities in the nucleon.
Eur. Phys. J. C 2002, 23, 479–485.
<https://doi.org/10.1007/s100520200901>.
[Blumlein and Bottcher(2002)]Blumlein:2002qeu
Blumlein, J.; Bottcher, H.
QCD analysis of polarized deep inelastic data and parton
distributions.
Nucl. Phys. B 2002, 636, 225–263.
<https://doi.org/10.1016/S0550-3213(02)00342-5>.
[Leader and Stamenov(2003)]Leader:2002az
Leader, E.; Stamenov, D.B.
Can the polarization of the strange quarks in the proton be
positive?
Phys. Rev. D 2003, 67, 037503.
<https://doi.org/10.1103/PhysRevD.67.037503>.
[Leader et al.(2006)Leader, Sidorov, and Stamenov]Leader:2005ci
Leader, E.; Sidorov, A.V.; Stamenov, D.B.
Longitudinal polarized parton densities updated.
Phys. Rev. D 2006, 73, 034023.
<https://doi.org/10.1103/PhysRevD.73.034023>.
[Astier et al.(2000)Astier et al.]NOMAD:2000wdf
Astier, P.; et al.
Measurement of the Lambda polarization in nu/mu charged current
interactions in the NOMAD experiment.
Nucl. Phys. B 2000, 588, 3–36.
<https://doi.org/10.1016/S0550-3213(00)00503-4>.
[Astier et al.(2001)Astier et al.]NOMAD:2001iup
Astier, P.; et al.
Measurement of the anti-Lambda polarization in muon-neutrino charged
current interactions in the NOMAD experiment.
Nucl. Phys. B 2001, 605, 3–14.
<https://doi.org/10.1016/S0550-3213(01)00181-X>.
[Ellis et al.(2002)Ellis, Kotzinian, and Naumov]Ellis:2002zv
Ellis, J.R.; Kotzinian, A.; Naumov, D.V.
Intrinsic polarized strangeness and Lambda0 polarization in deep
inelastic production.
Eur. Phys. J. C 2002, 25, 603–613.
<https://doi.org/10.1140/epjc/s2002-01025-2>.
[Abelev et al.(2009)Abelev et al.]STAR:2009hex
Abelev, B.I.; et al.
Longitudinal Spin Transfer to Lambda and anti-Lambda Hyperons in
Polarized Proton-Proton Collisions at s**(1/2) = 200-GeV.
Phys. Rev. D 2009, 80, 111102.
<https://doi.org/10.1103/PhysRevD.80.111102>.
[Adam et al.(2018)Adam et al.]STAR:2018pps
Adam, J.; et al.
Improved measurement of the longitudinal spin transfer to Λ
and Λ̅ hyperons in polarized proton-proton collisions at √(s) = 200 GeV.
Phys. Rev. D 2018, 98, 112009.
<https://doi.org/10.1103/PhysRevD.98.112009>.
[de Florian et al.(1998)de Florian, Soffer, Stratmann, and
Vogelsang]deFlorian:1998am
De Florian, D.; Soffer, J.; Stratmann, M.; Vogelsang, W.
Bounds on transverse spin asymmetries for Lambda baryon production
in pp collisions at BNL RHIC.
Phys. Lett. B 1998, 439, 176–182.
<https://doi.org/10.1016/S0370-2693(98)01016-8>.
[Gastmans and Wu(1990)]Gastmans:1990xh
Gastmans, R.; Wu, T.T.
The Ubiquitous Photon: Helicity Method for QED and QCD; Clarendon Press: Oxford, UK, 1990; Volume 80.
[Chen et al.(1995)Chen, Goldstein, Jaffe, and Ji]Chen:1994ar
Chen, K.; Goldstein, G.R.; Jaffe, R.L.; Ji, X.D.
Probing quark fragmentation functions for spin 1/2 baryon production
in unpolarized e+ e- annihilation.
Nucl. Phys. B 1995, 445, 380–398.
<https://doi.org/10.1016/0550-3213(95)00193-V>.
[Zhang and Wei(2023)]Zhang:2023ugf
Zhang, H.C.; Wei, S.Y.
Probing the longitudinal spin transfer via dihadron polarization
correlations in unpolarized e^+e^- and pp collisions 2023.
[Bacchetta et al.(2004)Bacchetta, D'Alesio, Diehl, and
Miller]Bacchetta:2004jz
Bacchetta, A.; D'Alesio, U.; Diehl, M.; Miller, C.A.
Single-spin asymmetries: The Trento conventions.
Phys. Rev. D 2004, 70, 117504.
<https://doi.org/10.1103/PhysRevD.70.117504>.
[Chen et al.(2022)Chen, Liang, Song, and Wei]Chen:2021zrr
Chen, K.b.; Liang, Z.t.; Song, Y.k.; Wei, S.y.
Longitudinal and transverse polarizations of Λ hyperon in unpolarized SIDIS and e^+e^- annihilation.
Phys. Rev. D 2022, 105, 034027.
<https://doi.org/10.1103/PhysRevD.105.034027>.
[Yang et al.(2021)Yang, Wang, and Lu]Yang:2021zgy
Yang, Y.; Wang, X.; Lu, Z.
Predicting the sinϕ_S Single-Spin-Asymmetry of Λ
Production off transversely polarized nucleon in SIDIS.
Phys. Rev. D 2021, 103, 114011.
<https://doi.org/10.1103/PhysRevD.103.114011>.
[Li et al.(2021)Li, Wang, and Lu]Li:2021txj
Li, H.; Wang, X.; Lu, Z.
sin(ϕ_Λ-ϕ_S) azimuthal asymmetry in transversely
polarized Λ production in SIDIS within TMD factorization at the EIC.
Phys. Rev. D 2021, 104, 034020.
<https://doi.org/10.1103/PhysRevD.104.034020>.
[Kang et al.(2021)Kang, Lee, Shao, and Zhao]Kang:2021ffh
Kang, Z.B.; Lee, K.; Shao, D.Y.; Zhao, F.
Spin asymmetries in electron-jet production at the future electron
ion collider.
JHEP 2021, 11, 005.
<https://doi.org/10.1007/JHEP11(2021)005>.
[Kang et al.(2022)Kang, Terry, Vossen, Xu, and
Zhang]Kang:2021kpt
Kang, Z.B.; Terry, J.; Vossen, A.; Xu, Q.; Zhang, J.
Transverse Lambda production at the future Electron-Ion Collider.
Phys. Rev. D 2022, 105, 094033.
<https://doi.org/10.1103/PhysRevD.105.094033>.
[Ellis and Hwang(2012)]Ellis:2011kq
Ellis, J.; Hwang, D.S.
Spin Correlations of Lambda anti-Lambda Pairs as a Probe of
Quark-Antiquark Pair Production.
Eur. Phys. J. C 2012, 72, 1877.
<https://doi.org/10.1140/epjc/s10052-012-1877-5>.
[Ackerstaff et al.(1997)Ackerstaff et al.]OPAL:1997vmw
Ackerstaff, K.; et al.
Spin alignment of leading K*(892)0 mesons in hadronic Z0 decays.
Phys. Lett. B 1997, 412, 210–224.
<https://doi.org/10.1016/S0370-2693(97)01077-0>.
[Abreu et al.(1997)Abreu et al.]DELPHI:1997ruo
Abreu, P.; et al.
Measurement of the spin density matrix for the rho0, K*0 (892) and
phi produced in Z0 decays.
Phys. Lett. B 1997, 406, 271–286.
<https://doi.org/10.1016/S0370-2693(97)00758-2>.
[Ackerstaff et al.(1997)Ackerstaff et al.]OPAL:1997nwj
Ackerstaff, K.; et al.
Study of phi (1020), D*+- and B* spin alignment in hadronic Z0
decays.
Z. Phys. C 1997, 74, 437–449.
<https://doi.org/10.1007/s002880050406>.
[Abbiendi et al.(2000)Abbiendi et al.]OPAL:1999hxs
Abbiendi, G.; et al.
A Study of spin alignment of rho(770)+- and omega(782) mesons in
hadronic Z0 decays.
Eur. Phys. J. C 2000, 16, 61–70.
<https://doi.org/10.1007/s100520050003>.
[Chukanov et al.(2006)Chukanov et al.]NOMAD:2006kuc
Chukanov, A.; et al.
Production properties of K*(892)+- vector mesons and their spin
alignment as measured in the NOMAD experiment.
Eur. Phys. J. C 2006, 46, 69–79.
<https://doi.org/10.1140/EPJC/S2006-02500-4>.
[Anselmino et al.(1998)Anselmino, Bertini, Murgia, and
Pire]Anselmino:1998jv
Anselmino, M.; Bertini, M.; Murgia, F.; Pire, B.
Off diagonal helicity density matrix elements for heavy vector
mesons inclusively produced in N N, gamma N and lepton N interactions.
Phys. Lett. B 1998, 438, 347–352.
<https://doi.org/10.1016/S0370-2693(98)00978-2>.
[Anselmino et al.(1999)Anselmino, Bertini, Caruso, Murgia, and
Quintairos]Anselmino:1999cg
Anselmino, M.; Bertini, M.; Caruso, F.; Murgia, F.; Quintairos, P.
Off diagonal helicity density matrix elements for vector mesons
produced in polarized e+ e- processes.
Eur. Phys. J. C 1999, 11, 529–537.
<https://doi.org/10.1007/s100520050652>.
[Schafer et al.(1999)Schafer, Szymanowski, and
Teryaev]Schafer:1999am
Schafer, A.; Szymanowski, L.; Teryaev, O.V.
Tensor polarization of vector mesons from quark and gluon
fragmentation.
Phys. Lett. B 1999, 464, 94–100.
<https://doi.org/10.1016/S0370-2693(99)00984-3>.
[Xu et al.(2001)Xu, Liu, and Liang]Xu:2001hz
Xu, Q.h.; Liu, C.x.; Liang, Z.t.
Spin alignment of vector meson in e+ e- annihilation at Z0 pole.
Phys. Rev. D 2001, 63, 111301.
<https://doi.org/10.1103/PhysRevD.63.111301>.
[Bacchetta and Mulders(2001)]Bacchetta:2001rb
Bacchetta, A.; Mulders, P.J.
Positivity bounds on spin one distribution and fragmentation
functions.
Phys. Lett. B 2001, 518, 85–93.
<https://doi.org/10.1016/S0370-2693(01)01051-6>.
[Shlyapnikov(2001)]Shlyapnikov:2001jf
Shlyapnikov, P.V.
Comparing fragmentation of strange quark in Z0 decays and K+ p
reactions.
Phys. Lett. B 2001, 512, 18–24.
<https://doi.org/10.1016/S0370-2693(01)00691-8>.
[Xu and Liang(2002)]Xu:2002vz
Xu, Q.h.; Liang, Z.t.
Spin alignments of vector mesons in deeply inelastic lepton nucleon
scattering.
Phys. Rev. D 2002, 66, 017301.
<https://doi.org/10.1103/PhysRevD.66.017301>.
[Xu and Liang(2003a)]Xu:2003fq
Xu, Q.h.; Liang, Z.t.
Spin alignment of the high p(T) vector mesons in polarized pp
collisions at high-energies.
Phys. Rev. D 2003, 67, 114013.
<https://doi.org/10.1103/PhysRevD.67.114013>.
[Xu and Liang(2003b)]Xu:2003rs
Xu, Q.h.; Liang, Z.t.
Spin alignment of vector mesons in unpolarized hadron hadron
collisions at high-energies.
Phys. Rev. D 2003, 68, 034023.
<https://doi.org/10.1103/PhysRevD.68.034023>.
[Abelev et al.(2008)Abelev et al.]STAR:2008lcm
Abelev, B.I.; et al.
Spin alignment measurements of the K*0(892) and phi (1020) vector
mesons in heavy ion collisions at s(NN)**(1/2) = 200 GeV.
Phys. Rev. C 2008, 77, 061902.
<https://doi.org/10.1103/PhysRevC.77.061902>.
[Kundu(2021)]Kundu:2021lra
Kundu, S.
Spin alignment measurements of vector mesons with ALICE at the LHC.
Nucl. Phys. A 2021, 1005, 121912.
<https://doi.org/10.1016/j.nuclphysa.2020.121912>.
[Ji(2013)]Ji:2013dva
Ji, X.
Parton Physics on a Euclidean Lattice.
Phys. Rev. Lett. 2013, 110, 262002.
|
http://arxiv.org/abs/2307.01234v1
|
20230703093020
|
Internet of Things Fault Detection and Classification via Multitask Learning
|
[
"Mohammad Arif Ul Alam"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
Internet of Things Fault Detection and Classification via Multitask Learning
Mohammad Arif Ul Alam
This paper presents a comprehensive investigation into developing a fault detection and classification system for real-world IIoT applications. The study addresses challenges in data collection, annotation, algorithm development, and deployment. Using a real-world IIoT system, we conduct three phases of data collection that simulate 11 predefined fault categories. We propose SMTCNN for fault detection and category classification in IIoT and evaluate its performance on real-world data. SMTCNN achieves superior specificity (a 3.5% improvement) and shows significant improvements in precision, recall, and F1 measure compared to existing techniques.
§ INTRODUCTION
The Industrial Internet of Things (IIoT) refers to the network of interconnected devices, sensors, and systems in industrial settings that enable the collection, exchange, and analysis of data to enhance operational efficiency and productivity <cit.>. It involves integrating physical machinery and equipment with digital technologies, enabling real-time monitoring, control, and automation of industrial processes <cit.>. However, within the complex IIoT ecosystem, various faults and anomalies can occur, including equipment malfunctions, communication failures, cybersecurity breaches, and data inaccuracies. Detecting and addressing these faults is crucial for maintaining uninterrupted operations, ensuring worker safety, minimizing downtime, and optimizing resource utilization <cit.>.
Detecting and predicting anomalies in industrial environments is crucial for economic and security purposes. However, the rarity of these events poses challenges when applying existing algorithms, leading to false alarms or misdetections <cit.>. Previous studies conducted by Nardelli et al. <cit.>, Leitão et al. <cit.>, and Oks et al. <cit.> have explored the modeling of IIoT networks and cyber-physical systems in industrial settings. Notably, promising solutions have been presented in reference <cit.>. Fault detection in industrial environments has always been challenging, with difficulties arising from device interoperability and limitations in data collection. The most promising state-of-the-art method utilizes Generative Adversarial Networks (GANs) for the detection and classification of system failures using a dataset containing missing values <cit.>. However, the focus of our paper is to develop a simple yet powerful unified method for the real-time detection and classification of IIoT faults using data collected from a real-world system.
To date, the existing literature on IIoT fault detection and classification has predominantly focused on either anomaly detection or fault classification for industrial processes. In this paper, we propose a novel framework called Sequential Multitask Cascaded Neural Network (SMTCNN) that integrates various approaches for anomaly detection and classification. The concept of Multitask Cascaded Neural Network (MTCNN) has gained significant popularity in face recognition research <cit.>, where it involves three cascaded tasks: detecting the face area, detecting facial features such as eyes, nose, and mouth, and finally speeding up the face detection process. We adapt the MTCNN framework for sequential data, resulting in SMTCNN, which consists of three tasks: change-point detection, anomaly detection, and fault classification. The framework is designed to effectively handle small fault event datasets and large normal datasets using a cascaded learning scheme. Our work makes the following key contributions:
* We developed a novel Sequential Multitask Cascaded Neural Network (SMTCNN) architecture, which incorporates three sequential tasks: change-point detection, refinement of change-point detection for anomaly/fault event identification, and classification of the detected faulty events using simple neural networks.
* We devised a comprehensive data collection methodology encompassing normal, abnormal, and real-world scenarios to gather IIoT data from a deployed system. Subsequently, we collected the data and conducted an evaluation of our proposed SMTCNN model. Our experimental results demonstrate the superior performance of our approach compared to existing state-of-the-art methods, particularly in terms of specificity (3.5%).
§ TESTBED SETTING
The four IIoT devices each consisted of an industrial application module and an IIoT device monitoring agent module. Two devices monitored the oven temperature, while the other two monitored smoke and humidity levels, all at a frequency of once per minute. Data from the devices were transmitted to the network gateway via WirelessHART for alarm functionality. The IIoT device monitoring agent captured hardware and firmware metrics, which were relayed to the gateway. Each device had sensors, a radio, a microcontroller unit (MCU), and a power supply, with the radio using WirelessHART to communicate with the MCU. The gateway managed network operations and security tasks and connected the IWSN to an IP network.
§ DATASETS GENERATION
The dataset utilized in this research was created by executing the firmware of the IIoT devices for an extended duration and capturing the corresponding metrics for each function in a dedicated database. To evaluate the system's behavior in the presence of faults, deliberate firmware and hardware anomalies were introduced into the WirelessHART testbed at predetermined intervals. These anomalies were carefully designed to emulate real-world situations and assess the resilience and efficacy of the fault detection and classification algorithms. The injected faults encompassed 11 distinct fault classes that can be classified into four distinct types (Table <ref>).
By employing the aforementioned fault generation technique, three distinct datasets were generated, each comprising time logs, energy consumption, CPU usage, and time fields (representing the duration of each instance in seconds) for every data point.
* Anomaly Only Dataset: To collect this small dataset, a series of continuous anomalous events was deliberately induced in the IIoT system. The dataset specifically targets 11 distinct categories of anomalies, aiming to provide comprehensive coverage of potential abnormal behaviors within the system.
* Normal Only Dataset: This dataset was generated in a normal operational environment of IIoT, where no known faults were present.
* Real-time Mixed Dataset: This dataset comprises a fully operational real-time deployed IIoT system, where randomly chosen faults from the aforementioned 11 categories were sporadically introduced (1% of the time during the day) to simulate a realistic real-world scenario.
§ IIOT FAULTS DETECTION AND CLASSIFICATION
§.§ Unsupervised Change-Point Detection using Simple LSTM Autoencoder
We have developed an unsupervised change-point detection algorithm utilizing an autoencoder that is based on the Long Short-Term Memory (LSTM) neural network. Our approach was inspired by the work of Elhalwagy et al. <cit.>, who proposed a hybridization of unsupervised LSTM and Capsule network for online change-point detection in time-series data. The architecture of our autoencoder model consists of two LSTM-based encoders, one for each channel of the input data. The encoded representations from both channels are concatenated and passed to the decoder. The decoder's role is to reconstruct the input data, and during training, the model is optimized using the mean squared error (MSE) loss function and the Adam optimizer. To mitigate overfitting, we employed early stopping during training and restored the weights associated with the best performance. Subsequently, the trained autoencoder is utilized to reconstruct the time series data. Change-points are identified by establishing a threshold, denoted as τ, which is determined based on the mean and standard deviation of the reconstruction errors. Any reconstruction error exceeding the threshold is considered a change-point. The algorithm effectively detects change-points, signifying significant variations or anomalies in the sequential data that may indicate the occurrence of faulty events in IIoT.
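As a concrete reference, the following is a minimal sketch of such a two-channel LSTM autoencoder in TensorFlow/Keras; the layer sizes, window length, and the mean-plus-k-standard-deviations threshold are illustrative assumptions rather than values reported here.
```python
import numpy as np
from tensorflow.keras import Model, callbacks, layers

T, F = 32, 2  # window length and number of input channels (assumed)

# One LSTM encoder per channel; codes are concatenated before decoding.
inputs = [layers.Input(shape=(T, 1)) for _ in range(F)]
codes = [layers.LSTM(16)(x) for x in inputs]
z = layers.Concatenate()(codes)
h = layers.RepeatVector(T)(z)                     # seed the decoder
h = layers.LSTM(32, return_sequences=True)(h)     # LSTM decoder
out = layers.TimeDistributed(layers.Dense(F))(h)  # reconstruct all channels
autoenc = Model(inputs, out)
autoenc.compile(optimizer="adam", loss="mse")     # MSE + Adam, as in the text

def fit_and_threshold(normal_windows, k=3.0):
    """normal_windows: (N, T, F) array; returns the change-point threshold tau."""
    xs = [normal_windows[..., i:i + 1] for i in range(F)]
    stop = callbacks.EarlyStopping(patience=5, restore_best_weights=True)
    autoenc.fit(xs, normal_windows, epochs=100, validation_split=0.1,
                callbacks=[stop], verbose=0)
    err = np.mean((autoenc.predict(xs, verbose=0) - normal_windows) ** 2,
                  axis=(1, 2))                    # per-window reconstruction error
    return err.mean() + k * err.std()             # tau: mean + k std (k assumed)
```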
§.§ Supervised IIoT Faults Classification
Due to the limited size of the collected IIoT faults dataset, we employed state-of-the-art algorithms, excluding neural networks, for fault classification. Specifically, we utilized Random Forest (RF), Support Vector Machines (SVM), Naive Bayes (NB), Logistic Regression (LG), Decision Tree (DT), and Stochastic Gradient Descent (SGD) algorithms. To effectively apply these algorithms, we employed suitable segmentation and windowing techniques.
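A minimal scikit-learn sketch of this supervised stage is shown below; the window size, stride, and majority-vote labeling are our assumptions for illustration.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def make_windows(X, y, size=16, stride=8):
    """Flatten sliding windows of (N, F) features; label each by majority vote."""
    starts = range(0, len(X) - size + 1, stride)
    Xw = np.stack([X[s:s + size].ravel() for s in starts])
    yw = np.array([np.bincount(y[s:s + size]).argmax() for s in starts])
    return Xw, yw

# Example usage on the anomaly-only data (integer labels 1..11):
# Xw, yw = make_windows(X, y)
# print(cross_val_score(RandomForestClassifier(), Xw, yw, cv=10).mean())
```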
§.§ Sequential Multitask Cascaded Neural Networks (SMTCNN)
Fig. <ref> shows our overall SMTCNN model architecture for IIoT Fault detection and classification.
Problem formulation: Let X be the input feature matrix with three columns. The binary label set Y_anomaly indicates the presence or absence of an anomaly, while the 12-class set Y_class represents different fault types, with label 12 indicating no fault and labels 1 to 11 corresponding to 11 specific fault types. We define three tasks: T_1, T_2, and T_3, each with their respective output variables O_t1, O_t2, and O_t3.
Task 1: Anomalous segment identification: This approach takes inspiration from the original Multitask Cascaded Neural Network (MTCNN) algorithm <cit.>, originally designed for face recognition. Specifically, we focus on Task-1 of the MTCNN algorithm, which is responsible for detecting bounding boxes around faces. In the context of our change-point detection algorithm for real-time sequential data, we adapt this concept to identify the start and end points of change-points, similar to bounding boxes in the temporal domain. These identified change-points act as markers to segment the data into regions of interest that may contain anomalous events requiring further analysis. The underlying hypothesis behind this segmentation approach is that these specific regions contain valuable information related to anomalous events and merit closer examination.
Task 2: Fine-tuning anomaly detection: This step considers the potential segments predicted by Task 1 and aims to predict anomalies. For this purpose, we employ a simple two-layer LSTM network followed by a fully connected layer with two outputs. This network takes the original features as input.
Task 3: Anomaly detection and classification: This task involves the final refinement of anomaly detection and classification using the original feature set X, the output of Task 1 (O_t1), which represents the segmented mask on the time-series data indicating detected change-points, and the output of Task 2 (O_t2), which represents the fine-tuned detected anomalies. The core network for Task 3 is also a simple two-layer LSTM network with a fully connected neural network that uses softmax activation and has an output dimension of 12 (representing the 11 fault categories and one label for the absence of a fault). Thus, the final loss function is defined as follows:
Loss_SMTCNN = -(1/N) ∑_i=1^N ∑_t=1^T ∑_c=1^C y_i,t,c log(p_i,t,c)
In the above equation, N is the number of samples, T is the number of time steps in the sequential data, C is the number of classes, y_i,t,c is the one-hot encoded ground truth label for sample i, time step t, and class c, and p_i,t,c is the predicted probability of sample i at time step t belonging to class c.
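For clarity, this loss is a per-time-step categorical cross-entropy summed over time steps and classes and averaged over samples; a direct NumPy transcription (for shape checking only) could read:
```python
import numpy as np

def smtcnn_loss(y_true, p_pred, eps=1e-12):
    """y_true: one-hot labels (N, T, C); p_pred: softmax outputs (N, T, C)."""
    N = y_true.shape[0]
    return -np.sum(y_true * np.log(p_pred + eps)) / N  # the equation, verbatim
```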
§ EXPERIMENTAL EVALUATION
The proposed SMTCNN framework was implemented using Python, TensorFlow, and scikit-learn. For our study, we collected three datasets: 8,432 data points of anomalous data, 740,448 data points of normal data, and 718,444 data points of a mixed dataset representing real-time scenarios. The unsupervised change-point detection algorithm was trained using the normal data only, while the supervised IIoT faults classification algorithm was trained using the anomalous data only. Finally, the SMTCNN algorithm was trained and evaluated using the mixed dataset, encompassing both normal and anomalous data.
§.§ Baseline
We implement a few state-of-the-art algorithms as well as ablated versions of our proposed framework, each with one module removed, to prove the importance of each module in the pipeline.
* B1 (GAN-based method): This method <cit.> uses a GAN-based anomaly detection and classification algorithm to detect and classify faults in IIoT.
* B2 (SMTCNN without change-point-detection segmentation): In this variant, we removed the change-point-detection-based segmentation masking but kept all other modules.
* B3 (SMTCNN without supervised faults classification): In this variant, we removed the supervised faults classification module from the SMTCNN network but kept all other modules.
§.§ Results
We employed the traditional 10-fold cross-validation technique to evaluate the supervised segmented IIoT fault classification. However, as traditional 10-fold cross-validation is not suitable for sequential data, we adapted our approach. To train and assess the performance of our proposed sequential algorithm, SMTCNN, we divided the entire sequential data into two halves. Subsequently, we randomly selected a sequence of data from the first half for training purposes and another sequence of data from the second half for testing. The lengths of the training and testing sequences were also randomly chosen within a range of 50% to 80% of the available data samples for each fold. This process was repeated 10 times to generate 10 different sets of training and testing sequences. We evaluated our proposed algorithm's performance using metrics such as balanced accuracy, precision, recall, specificity, and F1 measure. Additionally, we calculated the standard deviation of these metrics to assess overfitting. The performance details of the baselines and our overall framework are provided in Table <ref>, clearly demonstrating the superior performance of our framework compared to the baseline algorithms.
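A sketch of this sequential fold-generation protocol, under the 50% to 80% fractions stated above (the seeding and slicing details are our assumptions), is:
```python
import numpy as np

def sequential_folds(n, folds=10, lo=0.5, hi=0.8, seed=0):
    """Yield (train_idx, test_idx): contiguous slices from each half of the series."""
    rng = np.random.default_rng(seed)
    half = n // 2
    for _ in range(folds):
        tr_len = int(rng.uniform(lo, hi) * half)         # train slice from 1st half
        te_len = int(rng.uniform(lo, hi) * (n - half))   # test slice from 2nd half
        tr0 = rng.integers(0, half - tr_len + 1)
        te0 = half + rng.integers(0, (n - half) - te_len + 1)
        yield np.arange(tr0, tr0 + tr_len), np.arange(te0, te0 + te_len)
```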
We observed that our framework exhibits marginal improvements in accuracy, precision, recall, and F1 measure compared to the baseline algorithms. However, the improvement in specificity is particularly noteworthy, with a significant increase of 3.5% over the baseline (B1). This enhancement represents a substantial advancement in the field of IIoT fault detection research. A closer examination of Table <ref> reveals that SMTCNN without change-point detection segmentation (B2) and without supervised fault classification pretraining (B3) both experience a significant drop in performance.
§.§ Conclusions and Future Works
This paper represents an initial step towards the collection, simulation, and prediction of faults in real-world deployed IIoT systems. It is a part of our broader vision to develop a secure, fault-tolerant, and reliable smart and connected industrial infrastructure. The primary objective of this study is to release our preliminary collected datasets for the advancement of IIoT fault generation and the integration of multitask learning in fault detection. However, it is essential to acknowledge that automatic fault detection encompasses a wide range of use cases that were not specifically addressed in this research.
In the context of IIoT, it is crucial to consider the diversity among different facilities, which may necessitate specific modifications to the fault detection algorithm by incorporating scalable and adaptable machine learning techniques. Moreover, in real-world scenarios, IIoT systems often encounter a significant number of missing values resulting from internet connectivity issues or power interruptions, which were not explicitly tackled in this study.
[i1] Boyes et al., "The industrial internet of things (IIoT): An analysis framework," Computers in Industry, vol. 101, pp. 1–12, October 2018.
[i2] Brauner et al., "A Computer Science Perspective on Digital Transformation in Production," ACM Transactions on Internet of Things, vol. 3, no. 2, pp. 15:1–15:32, 2022.
[i3] Gilchrist, A., "Industry 4.0 – The Industrial Internet of Things," Apress Media, 2016. doi:10.1007/978-1-4842-2047-4.
[i4] Y. Chi et al., "Knowledge-Based Fault Diagnosis in Industrial Internet of Things: A Survey," IEEE Internet of Things Journal, vol. 9, no. 15, pp. 12886–12900, Aug. 2022.
[i5] Zhang, K., Zhang, Z., Li, Z., and Qiao, Y., "Joint face detection and alignment using multitask cascaded convolutional networks," IEEE Signal Processing Letters, vol. 23, no. 10, pp. 1499–1503, 2016.
[i6] H. He and E. A. Garcia, "Learning from imbalanced data," IEEE Trans. Knowl. Data Eng., vol. 21, no. 9, pp. 1263–1284, Sep. 2009.
[i7] B. Cheng et al., "Industrial cyberphysical systems: Realizing cloud-based big data infrastructures," IEEE Ind. Electron. Mag., vol. 12, no. 1, pp. 25–35, Mar. 2018.
[i8] P. Leitao et al., "Smart agents in industrial cyber-physical systems," Proc. IEEE, vol. 104, no. 5, pp. 1086–1101, May 2016.
[state_of_art] M. Dzaferagic et al., "Fault Detection and Classification in Industrial IoT in Case of Missing Sensor Data," IEEE Internet of Things Journal, vol. 9, no. 11, pp. 8892–8900, June 2022.
[i9] S. J. Oks, A. Fritzsche et al., "An application map for industrial cyber-physical systems," in Industrial Internet of Things, Cham, Switzerland: Springer, 2017, pp. 21–46.
[capsule] A. Elhalwagy and T. Kalganova, "Hybridization of Capsule and LSTM Networks for unsupervised anomaly detection on multivariate data," CoRR abs/2202.05538, 2022.
[19] S. Yin et al., "Real-time monitoring and control of industrial cyberphysical systems: With integrated plant-wide monitoring and control framework," IEEE Ind. Electron. Mag., vol. 13, no. 4, pp. 38–47, Dec. 2019.
|
http://arxiv.org/abs/2307.02277v1
|
20230705132427
|
Privacy-Preserving Federated Heavy Hitter Analytics for Non-IID Data
|
[
"Jiaqi Shao",
"Shanshan Han",
"Chaoyang He",
"Bing Luo"
] |
cs.DC
|
[
"cs.DC"
] |
Privacy-Preserving Federated Heavy Hitter Analytics for Non-IID Data
Jiaqi Shao (Duke Kunshan University), Shanshan Han (University of California, Irvine; FedML Inc.), Chaoyang He (FedML Inc.), Bing Luo (Duke Kunshan University)
Affiliations: Department of Computer Science, University of California, Irvine, USA; FedML Inc., Sunnyvale, USA; Data Science Research Center, Duke Kunshan University, Jiangsu, China
Corresponding author: Bing Luo, bing.luo@dukekunshan.edu.cn
Keywords: Federated Analytics, Heavy Hitter Identification, Non-IID Data, Local Differential Privacy
Federated heavy hitter analytics involves the identification of the most frequent items within distributed data. Existing methods for this task often encounter challenges such as compromising privacy or sacrificing utility. To address these issues,
we introduce a novel privacy-preserving algorithm that exploits the hierarchical structure to discover local and global heavy hitters in non-IID data by utilizing perturbation and similarity techniques.
We conduct extensive evaluations on both synthetic and real datasets to validate the effectiveness of our approach. We also present FedCampus, a demonstration application to showcase the capabilities of our algorithm in analyzing population statistics.
§ INTRODUCTION
Identifying heavy hitters (frequently occurring items) is crucial in data mining. However, this task becomes challenging with distributed and sensitive data due to privacy and scalability concerns <cit.>. For example, analyzing user behaviors across multiple devices (smartphones, smartwatches, IoT) requires finding frequent items while maintaining privacy and efficiency.
The advent of Federated Analytics (FA) has facilitated the examination of data from disparate entities without the requirement of data centralization <cit.>. It follows federated learning (FL) <cit.>, where a central server interacts with clients and aggregates their responses to gain global insights.
Some algorithms rely on a trusted server and use central differential privacy (CDP) <cit.> to protect data privacy, such as TrieHH <cit.> and TrieHH++ <cit.>.
These algorithms have shown promising results in finding heavy hitters while achieving a good trade-off between accuracy and efficiency. Nonetheless, the necessity of a trusted server might not be feasible in several scenarios.
Other algorithms employ local differential privacy (LDP) <cit.> to protect individual privacy without a trusted server by perturbing each individual's local data before sending, such as PEM <cit.>, RAPPOR <cit.>, PrivTrie <cit.>, TreeHist, and Bitstogram <cit.>.
These approaches normally need to construct a tree via the data's prefixes (also known as “trie"), which, however, may suffer from domain limitation. In other words, the next level of construction completely depends on the construction of previous levels, and thus, this can affect the accuracy and efficiency of identifying heavy hitters.
Additionally, non-Independent and Identically Distributed (non-IID) data present unique obstacles to distributed heavy hitter identification in practical applications.
For instance, variations in user tweets or Reddit comments regarding vocabulary and word frequency, arising from factors such as topics, communities, and personal preferences, form clusters that are more homogeneous internally than across clusters, as illustrated in Figure <ref>.
However, existing heavy hitters identification algorithms <cit.> do not consider such non-IID scenarios,
potentially reducing the effectiveness of the algorithm.
For instance, when data is clustered by topic, current algorithms tend to identify the most common words across all topics rather than the most distinctive heavy hitters within each individual topic. This, coupled with the domain limitation issue in LDP, further decreases informativeness in non-IID settings.
In response to the above concerns, this paper introduces a hierarchical FA design to recognize heavy hitters within non-IID clusters, as depicted in Figure <ref>.
We summarize the key contributions as follows:
(1) We develop an intra-cluster identification algorithm with a novel
LDP-based intra-cluster algorithm to avoid domain limitation when identifying local heavy hitters;
(2) We propose a cross-cluster identification algorithm to filter out noisy local heavy hitters from non-IID data clusters;
(3) We evaluate our algorithm on synthetic and real datasets and demonstrate its superior performance in non-IID settings. Moreover,
we also deploy the algorithm in our demo application FedCampus for a campus-scale population statistics analysis.
§ SYSTEM DESIGN
This section describes our design of a hierarchical FA algorithm to identify top-k heavy hitters from non-IID data clusters of clients, which consists of two phases: intra-cluster (IC) identification (<ref>) and cross-cluster (CC) identification (<ref>), as shown in Figure <ref>.
§.§ Intra-Cluster (IC) Identification
In the IC identification, we use LDP-based perturbation to discover local heavy hitters within each cluster while preserving privacy. An auxiliary server (i.e., AuxServer in Figure <ref>) interacts with clients in each cluster to construct a trie based on their perturbed data. The trie efficiently stores and retrieves heavy hitters, facilitating their identification.
However, transmitting individual data to the AuxServer poses privacy risks, and the identification process may have domain limitations by excluding data beyond the predefined domain during trie construction.
GRRX. To tackle the aforementioned challenges, we propose a novel Intra-Cluster (IC) identification algorithm (Algorithm <ref>). Our approach leverages the GRRX mechanism to perturb clients' data, ensuring individual data privacy while mitigating the domain limitation issue.
GRRX extends the Generalized Random Response (GRR) technique <cit.> that provides ε-LDP (ε being the privacy parameter).
However, GRR suffers from a domain limitation problem, potentially omitting data items falling outside the predefined domain Φ.
To overcome the domain limitation, we introduce GRRX (corresponding to Lines <ref>–<ref> in Algorithm <ref>), which enables each client to add an arbitrary item X to Φ, masking out-of-domain data and expanding the domain to Φ^⋆. Specifically, it perturbs a data item v to a reported item y with probabilities that depend on the size of the extended domain Φ^⋆ and the privacy parameter ε: Pr_Φ^⋆[y = v] = e^ε/(e^ε + d), and Pr_Φ^⋆[y ≠ v] = 1/(e^ε + d) for each other item y in Φ^⋆, where d denotes the size of the original domain Φ.
To determine the random item X added to the domain, each client ℓ utilizes its own data v_ℓ and its prefix. If the prefix is not in Φ, X is set to v_ℓ. Otherwise, X is randomly selected from a binary prefix range [0,2^b], where b denotes the prefix length.
By incorporating GRRX into our algorithm, we can effectively handle any data item without compromising privacy or accuracy while effectively addressing the domain limitation inherent in the GRR mechanism.
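To make the mechanism concrete, here is a hedged Python sketch of GRRX; we simplify the prefix test to a direct membership test, and the integer encoding of items and prefixes is an assumption for illustration.
```python
import math
import random

def grrx(v, phi, b, eps, rng=random.Random(0)):
    """Report item v under eps-LDP over the extended domain phi ∪ {X}."""
    # Choose the extension item X: out-of-domain clients use v itself,
    # others draw a random value from the b-bit prefix range [0, 2^b).
    x = v if v not in phi else rng.randrange(2 ** b)
    domain = list(phi) + [x]          # Phi*, of size d + 1
    d = len(phi)
    if rng.random() < math.exp(eps) / (math.exp(eps) + d):
        return v                      # truthful with prob e^eps / (e^eps + d)
    return rng.choice([y for y in domain if y != v])  # each w.p. 1 / (e^eps + d)
```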
Incremental Group-size Strategy. To enhance the effectiveness of the IC algorithm, we introduce an incremental group-size strategy. Traditional approaches such as <cit.> employ a uniform group-size strategy, where an equal number of clients are assigned to each group. However, this uniform approach often leads to a loss in the identification accuracy of heavy hitters.
Our incremental group-size strategy is motivated by the insight that later groups can benefit from the information obtained by earlier groups, allowing for improved refinement of prefixes and more precise identification of heavy hitters. Consequently, we allocate a larger number of clients to the later groups using a linearly incremental group-size strategy, in which the number of clients in group G_i is n/(2g) + (i−1)·n/(g(g−1)), where n denotes the total number of clients for trie construction and g the total number of groups; the group sizes sum to n (see the sketch below). By implementing this incremental group-size strategy, we enhance both the efficiency and accuracy of heavy hitter identification within each cluster.
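The group sizes under this linear schedule can be computed as follows; the rounding is our assumption.
```python
def group_sizes(n, g):
    """Clients per group G_1..G_g under the linear incremental schedule."""
    return [round(n / (2 * g) + (i - 1) * n / (g * (g - 1)))
            for i in range(1, g + 1)]

# group_sizes(9000, 4) -> [1125, 1875, 2625, 3375]; the sizes sum to n = 9000
```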
§.§ Cross-Cluster (CC) Identification
In the IC identification above, we adopted LDP-based perturbation to protect the privacy of local heavy hitters within each cluster. Although the perturbation mechanism achieves LDP by adding noise to the local data, it introduces a redundancy issue among the local heavy hitters from different clusters. Moreover, the data across clusters are non-IID, which makes it more challenging to find global heavy hitters that are consistent and representative across clusters.
Therefore, we propose a Cross-Cluster (CC) identification algorithm (Algorithm <ref>) that aggregates the local heavy hitters from different clusters to identify global heavy hitters.
The CC Identification algorithm consists of two main steps: importance calculation and similarity filtering.
Importance Calculation
The process of importance calculation (refer to Line <ref> to <ref> in Algorithm <ref>) involves assigning a score to each local heavy hitter based on its relative frequency within its cluster. This score serves as an indicator of the representativeness exhibited by the local heavy hitter for its respective cluster, while simultaneously preserving those with higher relative frequencies in their clusters.
Similarity Filtering
The process of similarity filtering aims to eliminate redundant or noisy local heavy hitters by evaluating their hamming distance <cit.> against a predetermined threshold, denoted as δ. The selection of this threshold, along with its underlying rationale, is elaborated upon in the proof provided in Appendix <ref>.
By leveraging the hamming distance, we can quantify the distinctiveness exhibited by each local heavy hitter relative to the other local heavy hitters.
This design corresponds to the algorithm outlined in Line <ref> to <ref> in Algorithm <ref>.
By combining these two techniques, our algorithm can find global heavy hitters across non-IID data by selecting and filtering consistent and distinctive local heavy hitters from different clusters. These informative global heavy hitters reflect the diversity and similarity of the data across clusters, providing insights into unique and common data patterns.
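A compact sketch of the CC stage, assuming local heavy hitters are encoded as bit-strings and using the δ = 0.5·len(c) threshold from Appendix <ref>, might look like:
```python
def hamming(a, b):
    """Hamming distance between bit-strings, charging any length mismatch."""
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def cross_cluster(clusters, k):
    """clusters: list of {bitstring: count} of local heavy hitters per cluster."""
    scored = [(count / sum(c.values()), s)       # importance = relative frequency
              for c in clusters for s, count in c.items()]
    kept = []
    for score, s in sorted(scored, reverse=True):
        # similarity filtering: keep s only if distinct from all kept hitters
        if all(hamming(s, t) >= 0.5 * len(s) for _, t in kept):
            kept.append((score, s))
        if len(kept) == k:
            break
    return [s for _, s in kept]
```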
§ EXPERIMENTS AND IMPLEMENTATION
This section first describes the experiment setting in <ref> and then presents the results of our algorithm evaluation in <ref>. Finally, we illustrate our algorithm deployment in our demo application, called FedCampus in <ref>.
§.§ Experiment Setting
Datasets.
To emulate real-world situations characterized by diverse linguistic communities, we generate synthetic non-IID data organized into clusters.
These clusters and their non-IID properties, such as the number of clients and unique words per cluster, are summarized in Table <ref> (each client is associated with a single word). Additionally, we follow the well-established Zipf's distribution <cit.> to determine the frequency distribution of unique words within each cluster; these unique words are not shared across clusters.
Metrics.
For evaluating the algorithm, we employ the metrics of recall and F1 score. Furthermore, we explore the impact of different privacy parameters ε from 0.5 to 9.5 to examine the effects on the algorithm's performance.
§.§ Evaluation
In this section, we conduct experiments to evaluate our proposed algorithms. In <ref>, we assess the intra-cluster algorithm to identify local heavy hitters within clusters.
In <ref>, we examine the cross-cluster algorithm to identify global heavy hitters across non-IID data clusters.
In addition, we assess the impact of the expected number (k) of heavy hitters on performance in <ref>.
§.§.§ Evaluations of the Intra-Cluster (IC)
This section evaluates the effectiveness of the IC algorithm for finding local heavy hitters within each cluster. We compare our approach with state-of-the-art FA heavy hitter identification algorithm, TrieHH <cit.> and PEM with GRR <cit.>.
Ablation study of GRR/GRRX and group-size strategy.
We conduct an ablation study to measure the impact of perturbation mechanism and group-size strategy on top-k local heavy hitter identification. We compare the perturbation algorithm with three variants (Table <ref>) that differ in the noise mechanism (GRR or GRRX) and the group-size strategy (uniform or incremental). We also include TrieHH, a CDP algorithm, as a baseline for comparison.
Component Analysis. We compare our method with others on six synthetic clusters, varying privacy levels (ε). Figure <ref> presents recall and F1 score for cluster sizes of 2,000 and 9,500, with similar results for other cluster sizes (see Appendix <ref>).
Our algorithm consistently outperforms other methods across clusters and privacy levels. It excels in handling non-IID data with GRRX, overcoming domain limitations, and leveraging incremental group-size for increased information utilization. Moreover, our IC algorithm effectively handles small cluster sizes, ideal for scenarios with fewer available clients.
While similar to XTU, our method surpasses it due to incremental group-size, enhancing informativeness. PEM and GTF exhibit poorer performance due to domain limitations. TrieHH performs well only for the 9,500 client cluster, indicating sensitivity to cluster size.
§.§.§ Evaluations of the Cross-cluster (CC)
In this section, we conduct evaluations of our CC algorithm for the identification of global heavy hitters across non-IID clusters. We begin by presenting the experimental assessment of our algorithm on synthetic data and subsequently evaluate its performance on real datasets.
Global heavy hitters identification.
We compare the performance of our CC algorithm with TrieHH, a baseline method that uses CDP, for identifying the global heavy hitters across non-IID clusters. We measure the recall and F1 scores of the methods under different values of the privacy parameter ε, ranging from 0.5 to 9.5. The global heavy hitters are the union of the local heavy hitters in each cluster, and the higher the relative frequency of a local heavy hitter in its cluster, the more likely it is to be a global result.
Performance comparison for Synthetic Data. We aggregate the same six synthetic clusters (Table <ref>) used in the intra-cluster experiment into a single dataset comprising 34,500 clients. Figure <ref> illustrates the recall and F1 scores of our algorithm and TrieHH at different privacy levels (ε). Our algorithm consistently outperforms TrieHH across all privacy levels, underscoring its ability to effectively filter out noisy local heavy hitters and handle non-IID data.
Performance comparison for Real Data
Next, we use the two real datasets Sentiment140 <cit.> and Reddit <cit.> to simulate two non-IID clusters.
To alleviate the computational and communication burdens and address the issue of client availability, we employ weighted sampling to carefully select a total of 20,000 words from each cluster while preserving the frequency distribution that is inherent to the original dataset. This cost-effective design, which draws inspiration from prior research on Federated Learning (FL) <cit.>, enables us to make optimal use of limited resources while ensuring the data remains representative.
Table <ref> shows the number of clients and unique words before and after sampling. We then apply our algorithm to identify the top-k heavy hitters across the clusters. The results also indicate that our algorithm consistently outperforms TrieHH in most cases, demonstrating its superior accuracy and efficiency in handling real non-IID data. Additional details on the performance evaluation can be found in Appendix <ref>. However, we observed that our algorithm's performance deteriorates when ε is too small. This can be attributed to the fact that a smaller ε corresponds to a higher level of privacy protection, which introduces more noise in the data perturbation and aggregation process under LDP.
§.§.§ Impact of k
We evaluate the impact of k, the number of expected heavy hitters, on the real datasets.
The results of varying k are shown in Figure <ref>.
We achieve high recall and F1 scores across different values of k compared to other methods. Our algorithm has a stable performance when varying k from 3 to 5, and the recall and F1 scores do not change much as k increases, indicating that it can handle different levels of granularity and diversity in the data.
§.§ FedCampus Application: Step Count Analysis
This section illustrates the application of our algorithm within FedCampus, a platform that facilitates privacy-preserving federated analytics on a campus-wide scale. One specific application within FedCampus involves analyzing step counts among distinct participant clusters. Step counts inherently involve sensitive personal data, necessitating privacy safeguards. Therefore, we utilize our algorithm to identify the most prevalent step counts, known as “heavy hitters", across various clusters.
Data Collection and Preprocessing.
Our study involved participants with varying backgrounds and degrees of physical activity. Each participant was equipped with a wearable device configured to register their daily step count automatically. To capture the non-IID nature of the step count data, we meticulously arranged participant clusters to ensure a broad representation of walking routines, lifestyles, and activity levels (Table <ref> presents the statistics of the data amassed from FedCampus, including the total number of daily steps for each participant over one week). Each cluster exhibited a distinct distribution of step counts, thereby encapsulating real-world disparities and dependencies.
Moreover, all collected data were anonymized through the removal of identifiable user information, and we conducted rigorous manual scrutiny to eliminate invalid data, such as missing values or outliers.
To augment privacy safeguards and mitigate data sensitivity, we applied quantization methodologies at various precision levels. These strategies partitioned the step count data into discrete intervals and depicted them with fewer bits. By approximating interval modes instead of precise values, we further enhance privacy protection while preserving utility.
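As an illustration, one simple quantization scheme of this kind buckets step counts into intervals of width 2^p for a precision level p; the exact bucketing used in FedCampus may differ.
```python
def quantize_steps(steps, p):
    """Map a daily step count to its 2^p-wide interval."""
    width = 2 ** p
    lo = (steps // width) * width
    return lo, lo + width - 1   # report the interval, not the raw count

# quantize_steps(8421, 10) -> (8192, 9215): a coarser p widens the interval,
# trading utility for privacy.
```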
Evaluation of Results.
We utilized the proposed algorithm to compute mode intervals for step counts, representing ranges of frequently occurring values. The chosen precision level for quantization impacts the balance between utility and privacy. Figure <ref> delineates the mode interval ranges across disparate quantization levels. Elevated precision levels result in narrower intervals, thus providing enhanced utility but compromising privacy. Conversely, reduced precision levels produce broader intervals, forfeiting some utility to amplify privacy. Additional details regarding FedCampus can be found in Appendix <ref>.
§ CONCLUSION AND FUTURE WORKS
In conclusion, we have introduced a federated heavy hitters identification algorithm for non-IID scenarios. Our hierarchical design achieves good performance at identifying local and global heavy hitters on both synthetic and real datasets, while effectively managing privacy risks and utility loss.
We also demonstrated FedCampus, our privacy-preserving campus-scale statistics analysis platform. Our empirical study reveals successful heavy hitter identification via our LDP-based perturbation method with uniform privacy parameters. For future work, we aim to incorporate individual privacy preferences and conduct a formal privacy analysis. These efforts will enhance the customization of privacy protection and contribute to a more rigorous understanding of the privacy guarantees provided by our approach.
§ ACKNOWLEDGEMENTS
We would like to acknowledge the support received from the Kunshan Government Research (KGR) Funding 23KKSGR024 for the research conducted by Jiaqi Shao and Bing Luo.
§ APPENDIX
§.§ Proof of Proposition
Proposition: In a given dataset, using the hamming distance to measure the similarity of any pair (c, c′) with a threshold δ = 0.5 · len(c), where len(c) is the number of bits of c, avoids the worst case in which all items are treated as the same entity.
Suppose there are n bit-strings 𝒞 = {c_1, c_2, …, c_n}, and the maximal length of a bit-string is L = max{len(c_1), …, len(c_n)}. For each pair c_i and c_j, if the hamming distance satisfies d(c_i, c_j) < δ, then c_i and c_j can be considered similar entities; with the threshold δ = α · len(c_i), this requires α ≥ d(c_i, c_j)/len(c_i) for similar entities.
To begin, the total hamming distance over all pairs (c_i, c_j) is ∑_i=1^n ∑_j=1^n d(c_i, c_j) = ∑_k=1^L m_k (n − m_k), where m_k is the count of 1's at the k-th bit position over all strings in 𝒞, and n − m_k is the corresponding count of 0's. To calculate the expected value of the total hamming distance, we assume the value (1 or 0) of the k-th bit position of each string in 𝒞 follows a binomial distribution:
𝔼[ ∑_i=1^n ∑_j=1^n d(c_i, c_j) ] = 𝔼[ ∑_k=1^L m_k (n − m_k) ] = ∑_k=1^L 𝔼[ m_k (n − m_k) ] = ∑_k=1^L 𝔼[m_k] 𝔼[n − m_k] = L (n/2)².
If all entities in 𝒞 are similar, then the expectation of α satisfies
𝔼[α] ≥ 𝔼[ d(c_i, c_j) / max{len(c_i), len(c_j)} ] ≥ (1/L) 𝔼[ d(c_i, c_j) ] = 1/((n(n−1)/2) · L) · 𝔼[ ∑_i=1^n ∑_j=1^n d(c_i, c_j) ] > 0.5.
Therefore, to distinguish the entities in 𝒞, α = 0.5 can be used for the threshold, i.e., δ = 0.5 · len(c_i).
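A quick Monte Carlo check (sizes arbitrary) shows that for uniform random bit-strings the normalized pairwise hamming distance concentrates around 0.5, so the δ = 0.5 · len(c) threshold keeps unrelated items from collapsing into one entity:
```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 200, 32                                   # arbitrary sizes
C = rng.integers(0, 2, size=(n, L))              # i.i.d. uniform bit-strings
d = (C[:, None, :] != C[None, :, :]).sum(-1)     # pairwise hamming matrix
frac = d[np.triu_indices(n, k=1)] / L            # normalized, unordered pairs
print(frac.mean())                               # ~0.5, in line with n/(2(n-1)) > 0.5
```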
§.§ Experimental Results for Intra-cluster Local Heavy Hitters Identification
In this section, we present the experimental results for our intra-cluster local heavy hitters identification algorithm, which aims to identify high-frequency words within each cluster while preserving the privacy of the participants. We evaluate our algorithm using six synthetic datasets with varying numbers of clients and unique words per cluster, as shown in Table <ref>. The frequency of the unique words in each cluster follows Zipf's distribution.
We compare our algorithm with three variants that differ in the perturbation mechanism (GRR or GRRX) and the group-size strategy (uniform or incremental), as well as TrieHH, a baseline method that uses CDP. We measure the recall and F1 scores of the methods under different values of the privacy parameter ε, ranging from 0.5 to 9.5. Figure <ref> shows that our algorithm consistently outperforms the other methods across all clusters and privacy levels, demonstrating its effectiveness for non-IID data: GRRX overcomes the domain limitation, and the incremental group-size strategy leverages more information from later groups. We also find that our algorithm can handle small cluster sizes, meaning it works with clusters that have fewer clients, which benefits scenarios with limited participation. Our method is similar to XTU but outperforms it because it uses incremental group sizes, increasing informativeness. PEM and GTF are also similar methods, but they perform worse than ours because they suffer from domain limitations.
Furthermore, we observed that TrieHH performs poorly on most clusters, except for the one with 9,500 clients, where it achieves a high recall and F1 score at high ε values. This suggests that TrieHH is sensitive to cluster size.
Overall, our algorithm demonstrates superior performance in identifying local heavy hitters within each cluster while preserving the participants' privacy.
§.§ Comparative Analysis of Real Data Performance
In this section, we focus on the evaluation of our algorithm with two real-world datasets, specifically Sentiment140 and Reddit. These datasets allow us to simulate two non-IID clusters.
To mitigate computational and communication burdens and to accommodate client availability, we use weighted sampling to select a total of 20,000 words from each cluster while preserving the frequency distribution of the original data.
Table <ref> lists the number of clients and unique words before and after sampling. We then apply our algorithm to identify the top-k heavy hitters across these clusters.
The results, illustrated in Figure <ref>, show that our algorithm consistently outperforms TrieHH in the majority of scenarios, demonstrating its better accuracy and efficiency on non-IID data. Nevertheless, our algorithm's performance degrades when ε is very small. This is because a smaller ε corresponds to stronger privacy protection, which introduces more noise and uncertainty in the data perturbation and aggregation phases under LDP.
§.§ FedCampus Demo
FedCampus is a platform that facilitates federated analytics (FA) on data collected from various edge devices, such as smartphones and smartwatches, within a campus-scale environment. One of the features of this platform is its ability to maintain the privacy and security of the participants throughout the analytics process. By leveraging advanced privacy-preserving techniques, FedCampus provides a powerful tool for conducting secure and privacy-preserving analytics on edge devices. The application's interface is shown in Figure <ref>.
Looking ahead, FedCampus aims to support federated learning alongside a broader range of computational and analytical paradigms. This breadth of capabilities is intended to make FedCampus a comprehensive tool for research and practical applications in distributed, privacy-aware learning and analytics on the edge.
Differential curvature invariants and event horizon detection for accelerating Kerr-Newman black holes in (anti-)de Sitter spacetime
G. V. Kraniotis [email: gvkraniotis@gmail.com]
University of Ioannina, Physics Department, Section of Theoretical Physics, GR-451 10 Ioannina, Greece
We compute analytically differential invariants for accelerating, rotating and charged black holes with a cosmological constant Λ. In particular, we compute in closed form novel explicit algebraic expressions for curvature invariants constructed from covariant derivatives of the Riemann and Weyl tensors, such as the Karlhede and the Lake-Abdelqader invariants, for the Kerr-Newman-(anti-)de Sitter and accelerating Kerr-Newman-(anti-)de Sitter black hole spacetimes. We explicitly show that some of the computed curvature invariants are vanishing on the event and Cauchy horizons and/or the ergosurface of the accelerating, charged and rotating black holes with a non-zero cosmological constant. Therefore they can serve as possible detectors of the event horizon and ergosurface for such black hole metrics which belong to the most general type D solution of the Einstein-Maxwell equations with a cosmological constant.
§ INTRODUCTION
It is well known that in a semi-Riemannian manifold there are three causal types of submanifolds: spacelike (Riemannian), timelike (Lorentzian) and lightlike (degenerate), depending on the character of the induced metric on the tangent space <cit.>,<cit.>.
Coordinate singularities can sometimes be interpreted as various kinds of horizons. Lightlike submanifolds (in particular, lightlike hypersurfaces) are interesting in general relativity since they produce models of different types of horizons. These include the event horizon, Killing horizons, Cauchy horizons, and cosmological and acceleration horizons.
The event horizon of a black hole is a codimension one null hypersurface, which constitutes the boundary of the black hole region from which causal geodesics cannot reach future null infinity. As a consequence,
the event horizon is highly nonlocal and a priori we need the full knowledge of spacetime to locate it [In this respect, the event horizon constitutes essentially a global (teleological) object <cit.>. ]. In relation to this, the images obtained recently by the Event Horizon Telescope (EHT) and analysed by computer simulations <cit.>, contain the environment of the black hole as well as the codimension two cross section of the event horizon but not the event horizon which is a 2+1 dimensional hypersurface.
Thus the analytical study of the black hole horizon, and its localisation, becomes an issue of crucial importance.
Killing horizons are surfaces where the norm of some Killing vector vanishes. On the other hand, Cauchy horizons are future boundaries of regions that can be uniquely determined by initial data on some appropriate spacelike hypersurface.
The curvature scalar invariants of the Riemann tensor are important in General Relativity because they allow a manifestly coordinate invariant characterisation of certain geometrical properties of spacetimes such as, among others, curvature singularities, gravitomagnetism, anomalies <cit.>-<cit.>. Recently, we calculated explicit analytic expressions for the set of Zakhary-McIntosh curvature invariants for accelerating Kerr-Newman black holes in (anti-)de Sitter spacetime as well as for the Kerr-Newman-(anti-)de Sitter black hole <cit.>. We also calculated in <cit.>, explicit algebraic expressions for the Euler-Poincare density invariant and the Kretschmann scalar for both types of black hole spacetimes. We also highlighted that for accelerating rotating and charged black holes with Λ≠0, the integrated Chern-Pontryagin-Hirzebruch invariant gives a non-zero result for the quantum photon chiral anomaly <cit.>.
On the other hand, differential curvature invariants, such as covariant derivatives of the curvature tensor <cit.> and gradients of non-differential invariants <cit.>, are necessary for a complete description of the local geometry. They have been suggested as possible detectors of the event horizon and ergosurface of the Kerr black holes <cit.>.
It is the purpose of this paper to apply the formalism of Karlhede et al <cit.> and Lake-Abdelqader <cit.> (see also <cit.>), to the
case of accelerating and rotating charged black holes with non-zero cosmological constant Λ and compute for the first time analytic algebraic expressions for the corresponding curvature differential invariants. Specifically, we compute novel closed-form algebraic expressions for these local curvature invariants for the
accelerating Kerr-Newman-(anti-)de Sitter black hole. These black hole metrics belong to the most general type D solution of the Einstein-Maxwell equations with a cosmological constant and constitute the physically most important case <cit.>,<cit.>.
Indeed, besides the intrinsic theoretical interest of accelerating or non-accelerating KN(a)dS black holes, a variety of observations support and single out their physical relevance in Nature.
A wide variety of astronomical and cosmological observations in the last two decades, including high-redshift type Ia supernovae, cosmic microwave background radiation and large scale structure indicate convincingly an accelerating expansion of the Universe <cit.>,<cit.>,<cit.>,<cit.>. Such observational data can be explained by a positive cosmological constant Λ (Λ>0) with a magnitude Λ∼ 10^-56 cm^-2 <cit.>.
Recent observations of structures near the galactic centre region SgrA* by the GRAVITY experiment indicate the possible presence of a small electric charge of the central supermassive black hole <cit.>,<cit.>. Accretion disk physics around magnetised Kerr black holes under the influence of cosmic repulsion is extensively discussed in the review <cit.> [We also mention that supermassive black holes as possible sources of ultrahigh-energy cosmic rays have been suggested in <cit.>, where it has been shown that large values of the Lorentz γ factor of an escaping ultrahigh-energy particle from the inner regions of the black hole accretion disk may occur only in the presence of the induced charge of the black hole.].
Furthermore, observations of the galactic centre supermassive black hole indicate that it is rotating. With regard to the spin a (Kerr rotation parameter) we note that observations of near-infrared periodic flares have revealed that the central black
hole SgrA^* is rotating with a reported spin parameter: a=0.52 (± 0.1, ± 0.08, ± 0.08) <cit.>. The error estimates here reflect the uncertainties in the
period, black hole mass and distance to the galactic centre, respectively. Observation of X-ray flares confirmed that the spin of the supermassive black hole is
indeed substantial and values of the Kerr parameter as high as: a=0.9939^+0.0026_-0.0074 have been obtained <cit.>. For our plots we choose values for the Kerr parameter consistent with these observations.
On the other hand, the Kerr parameter a of SgrA* can be measured by precise observations of the theoretically predicted Lense-Thirring and periastron precessions from the observed orbits of S-stars in the central arcsecond of the Milky Way <cit.>,<cit.>.
Therefore, it is quite interesting to study the combined effect of the cosmological constant,rotation parameter and electromagnetic fields on the geometry of spacetime surrounding the black hole singularity through the explicit algebraic computation and plotting of the Karlhede and Abdelqader-Lake differential curvature invariants, taking into account also the acceleration parameter. In addition, determining the role of such local invariants as detectors of event and ergosurface horizons for the most general black hole solution of Einstein-Maxwell equations, is a very important step in black hole theory and phenomenology.
The material of this paper is organised as follows: In sections <ref> and
<ref> we present the definitions of the Abdelqader-Lake and Karlhede local scalar curvature invariants that we shall use in computing explicit algebraic expressions for these differential invariants for the accelerating and non-accelerating Kerr-Newman black holes in (anti-)de Sitter spacetime.
In section <ref> we derive a novel closed-form analytic expression for the norm of the covariant derivative of the Riemann tensor, i.e. the Karlhede invariant, for the Kerr-Newman-(anti-)de Sitter black hole, see Theorem <ref> and eqn.(<ref>). In section <ref> we derive explicit closed form expressions for the Abdelqader-Lake differential invariants for non-accelerating Kerr-Newman black holes with cosmological constant, see Theorems: Theorem <ref> (eqn.(<ref>))-Theorem <ref> (eqn.(<ref>)), and Theorem <ref>-Theorem <ref>.
In section <ref>, we derive new explicit analytic expressions for the Abdelqader-Lake differential invariants for accelerating Kerr-Newman black holes in the presence of the cosmological constant. For economy of space we present novel explicit expressions for accelerating Kerr black holes, see Theorems: Theorem <ref> (eqn.(<ref>))-Theorem <ref>. Also Theorems: Theorem <ref>-Theorem <ref> for the differential invariant Q_2 for accelerating Kerr black holes in (anti-)de Sitter spacetime.
Interestingly, we derived a very compact explicit formula for the differential invariant Q_2 for the accelerating Kerr-Newman black hole in (anti-)de Sitter spacetime: Theorem <ref> and eqn.(<ref>).
From eqn.(<ref>) we conclude that the local invariant Q_2 can serve as an event horizon detector, since it vanishes at the horizon radii and acceleration horizon radii.
§ PRELIMINARIES ON DIFFERENTIAL CURVATURE INVARIANTS
Taking into account the contribution from the cosmological
constant Λ, the generalisation of the Kerr-Newman solution <cit.>,<cit.>,
is described by the Kerr-Newman de Sitter (KNdS) metric
element which in Boyer-Lindquist (BL) coordinates is given by <cit.>,<cit.>,<cit.>,<cit.> (in units where G=1 and c=1):
ds^2 = -Δ_r^KN/(Ξ^2ρ^2) (dt - a sin^2θ dϕ)^2 + ρ^2/Δ_r^KN dr^2 + ρ^2/Δ_θ dθ^2 + Δ_θ sin^2θ/(Ξ^2ρ^2) (a dt - (r^2+a^2) dϕ)^2,

Δ_θ := 1 + (a^2Λ/3) cos^2θ,    Ξ := 1 + a^2Λ/3,
Δ_r^KN := (1 - (Λ/3) r^2)(r^2 + a^2) - 2mr + q^2,
ρ^2 = r^2 + a^2cos^2θ,
where a,m,q, denote the Kerr parameter, mass and electric charge
of the black hole, respectively.
The KN(a)dS metric is the most general exact stationary solution of the Einstein-Maxwell system of differential equations, that represents a non-accelerating, rotating, charged black hole with Λ≠0.
This
is accompanied by a non-zero electromagnetic field
F=dA, where the vector potential is
<cit.>,<cit.>:
A=-qr/Ξ(r^2+a^2cos^2θ)(dt-asin^2θdϕ).
The Christoffel symbols of the second kind are expressed in the coordinate basis in the form:
Γ^λ_ μν=1/2g^λα(g_μα,ν+g_να,μ-g_μν,α),
where the summation convention is adopted and a comma denotes a partial derivative.
The Riemann curvature tensor is given by:
R^κ_ λμν = Γ^κ_ λν,μ - Γ^κ_ λμ,ν + Γ^α_ λνΓ^κ_ αμ - Γ^α_ λμΓ^κ_ αν.
The symmetric Ricci tensor and the Ricci scalar are defined by:
R_μν=R^α_ μαν, R=g^αβR_αβ,
while the Weyl tensor C_κλμν (the trace-free part of the curvature tensor) is given explicitly in terms of the curvature tensor and the metric from the expression:
C_κλμν=R_κλμν +1/2(R_λμg_κν+R_κνg_λμ-R_λνg_κμ-R_κμg_λν)
+1/6R(g_κμg_λν-g_κνg_λμ).
The Weyl tensor has in general, ten independent components which at any point are completely independent of the Ricci components. It corresponds to the free gravitational field [Globally, however, the Weyl tensor and Ricci tensor are not independent, as they are connected by the differential Bianchi identities. These identities determine the interaction between the free gravitational field and the field sources.] <cit.>.
The dual of the Weyl tensor, C_αβγδ^*, is defined by:
C_αβγδ^*=1/2E_αβκλC^κλ_ γδ,
where E_αβκλ is the Levi-Civita pseudotensor.
§.§ The Bianchi identities and the Karlhede invariant
Using its covariant form, the classical Bianchi identities read:
R_λμνκ;η+R_λμην;κ+R_λμκη;ν=0
In eqn. (<ref>), the semicolon denotes covariant differentiation.
Karlhede and collaborators introduced the following coordinate- and Lorentz-invariant object, the so-called Karlhede invariant <cit.>:
𝔎=R^λμνκ;ηR_λμνκ;η.
Karlhede et al computed the invariant 𝔎 for the Schwarzschild black hole and remarkably showed that it is zero and changes sign on the Schwarzschild event horizon.
Indeed, their result for the differential invariant in Eqn. (<ref>) for the Schwarzschild black hole is <cit.>:
𝔎=-240 (6 m^3 r^3-3 m^2 r^4)/r^12 = 720 m^2 (r-2m)/r^9.
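As a quick independent check, the closed form can be factorised symbolically; a minimal SymPy sketch (ours, not part of the original computation) reads:

import sympy as sp

r, m = sp.symbols('r m', positive=True)
K = -240*(6*m**3*r**3 - 3*m**2*r**4)/r**12

print(sp.factor(sp.cancel(K)))      # 720*m**2*(r - 2*m)/r**9
print(sp.simplify(K.subs(r, 2*m)))  # 0 on the event horizon r = 2m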
Unfortunately, this intriguing result does not generalise to the case of the Kerr black hole. The analytic computation of the Karlhede invariant for the Kerr solution reads:
𝔎^Kerr=720 m^2/(r^2+a^2cos(θ)^2)^9(cos(θ)^4 a^4-4 cos(θ)^3 a^3 r -6 cos(θ)^2 a^2 r^2+4 cos(θ) a r^3+r^4)
(cos(θ)^4 a^4+4 cos(θ)^3 a^3 r -6 cos(θ)^2 a^2 r^2-4 cos(θ) a r^3+r^4) (a^2cos(θ)^2-2 m r +r^2).
It is evident from Eqn.(<ref>) that the Karlhede curvature invariant does not vanish on the Kerr event and Cauchy horizons.
In Boyer and Lindquist coordinates the event and Cauchy horizons are located on the surface defined by Δ(r)≡ r^2+a^2-2mr=0, and are given by the expressions:
r_±=m±√(m^2-a^ 2).
The outer horizon r_+ is referred to as the event horizon, while r_- is known as the Cauchy horizon.
However, we note that it vanishes on the infinite-redshift surfaces, where g_tt=0. Equivalently at the roots of the quadratic equation:
r^2+a^2cos(θ)^2-2mr=0,
or
r_E^±=m±√(m^2-a^2cos(θ)^2).
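The vanishing of 𝔎^Kerr on these surfaces can be spot-checked symbolically: the following sketch evaluates the closed form of Eqn. (<ref>) at r = r_E^+ (the parameter values are illustrative).

import sympy as sp

r, th = sp.symbols('r theta', positive=True)
m, a = sp.Integer(1), sp.Rational(52, 100)

c = sp.cos(th)
K_kerr = (720*m**2/(r**2 + a**2*c**2)**9
          * (c**4*a**4 - 4*c**3*a**3*r - 6*c**2*a**2*r**2 + 4*c*a*r**3 + r**4)
          * (c**4*a**4 + 4*c**3*a**3*r - 6*c**2*a**2*r**2 - 4*c*a*r**3 + r**4)
          * (a**2*c**2 - 2*m*r + r**2))

theta0 = sp.pi/3                                  # cos(theta0) = 1/2 exactly
rE = m + sp.sqrt(m**2 - a**2*sp.cos(theta0)**2)   # outer ergosurface radius
print(sp.simplify(K_kerr.subs({th: theta0, r: rE})))  # -> 0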
Indeed, shortly after the discovery of the Kerr black hole, it was realised that a region existed outside of the black hole's event horizon where no time-like observer could remain stationary. Six years after the discovery of the Kerr metric, Penrose showed that particles within this ergosphere region could possess negative energy, as measured by an observer at infinity <cit.>.
Let us consider a coordinate-stationary observer with a four-velocity u^μ=(u^t,0,0,0). For the observer to have a time-like trajectory, we require g_ttu^tu^t<0, or alternatively:
g_tt =-a^2-2 m r +r^2/r^2+a^2cos(θ)^2+sin(θ)^2 a^2/r^2+a^2cos(θ)^2
=-(1-2 m r/r^2+a^2cos(θ)^2)<0.
This inequality implies:
r^2-2 m r + a^2cos(θ)^2>0
The two roots of the quadratic expression (with positive leading coefficient) in inequality (<ref>) are given by Eqn.(<ref>).
Thus the inequality for an observer with a physical trajectory is satisfied for
r>r_E^+
or r<r_E^-.
Between the two roots r_E^± the quadratic is negative (opposite sign of the sign of its leading coefficient).
These arguments imply that there is a region outside of r_+ where no stationary observer can exist. This space is called the ergosphere and is bounded from above by the surface defined by r_E^+ [For the Schwarzschild solution the surface of infinite redshift g_tt=0(r=2m) and the event horizon coincide. In <cit.> the imbedding expression for the Kretschmann scalar was used to prove the uniqueness theorems
for the Schwarzschild and Reissner-Nordström black hole
solutions.].
§.§ Lake-Abdelqader differential curvature invariants
In the work <cit.>, Abdelqader and Lake introduced the following curvature invariants and studied them for the Kerr metric:
I_1 ≡ C_αβγδC^αβγδ,
I_2 ≡ C^*_αβγδC^αβγδ,
I_3 ≡∇_μC_αβγδ∇^μC^αβγδ,
I_4 ≡∇_μC_αβγδ∇^μC^*αβγδ,
I_5 ≡ k_μk^μ,
I_6 ≡ l_μl^μ, I_7≡ k_μl^μ,
where k_μ=-∇_μI_1 and l_μ=-∇_μI_2.
They also defined the following invariants:
Q_1 ≡1/3√(3)(I_1^2-I_2^2)(I_5-I_6)+4I_1I_2I_7/(I_1^2+I_2^2)^9/4,
Q_2 ≡1/27I_5I_6-I_7^2/(I_1^2+I_2^2)^5/2,
Q_3 ≡1/6√(3)I_5+I_6/(I_1^2+I_2^2)^5/4.
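Given numerical values of the I-invariants at a spacetime point, the Q-invariants follow by direct substitution. A small helper (a direct transcription of the three definitions above; note that I_3 and I_4 do not enter them) might read:

import math

def q_invariants(I1, I2, I5, I6, I7):
    # s = I1^2 + I2^2; the exponents 9/4, 5/2, 5/4 are as in the definitions.
    s = I1**2 + I2**2
    Q1 = ((I1**2 - I2**2)*(I5 - I6) + 4*I1*I2*I7) / (3*math.sqrt(3)*s**2.25)
    Q2 = (I5*I6 - I7**2) / (27*s**2.5)
    Q3 = (I5 + I6) / (6*math.sqrt(3)*s**1.25)
    return Q1, Q2, Q3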
Page and Shoom observed that the Abdelqader-Lake invariant Q_2 can be rewritten as follows <cit.>:
Q_2=(I_6+I_5)^2-(12/5)^2(I_1^2+I_2^2)(I_3^2+I_4^2)/108(I_1^2+I_2^2)^5/2.
Their crucial observation was that the curvature invariant Q_2 can be expressed as the norm of the wedge product of two differential forms, namely:
27 (I_1^2+I_2^2)^5/2 Q_2=2‖ dI_1∧ dI_2 ‖^2,
where <cit.>:
‖ dI_1∧ dI_2 ‖^2=1/2((k_μk^μ)(l_νl^ν)-(k_μl^μ)(k_νl^ν)).
To calculate the above curvature invariants for the metric (<ref>) we used Maple™ 2021.
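An open-source route to the same quantities is also available: starting from a metric matrix, the Christoffel symbols and the Riemann tensor of Eqns. (<ref>) and (<ref>) can be built mechanically in SymPy. The sketch below uses the Schwarzschild metric as the simplest concrete input; the KN(a)dS metric (<ref>) proceeds identically but produces far larger expressions. This is an illustration, not the Maple code behind the results of this paper.

import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
x = [t, r, th, ph]
m = sp.symbols('m', positive=True)

f = 1 - 2*m/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

def christoffel(g, ginv, x):
    # Gamma^l_{mu nu} = (1/2) g^{l a} (g_{mu a,nu} + g_{nu a,mu} - g_{mu nu,a})
    n = len(x)
    return [[[sp.simplify(sum(ginv[l, a]*(sp.diff(g[mu, a], x[nu])
              + sp.diff(g[nu, a], x[mu]) - sp.diff(g[mu, nu], x[a]))
              for a in range(n))/2)
              for nu in range(n)] for mu in range(n)] for l in range(n)]

def riemann(Gamma, x):
    # R^k_{l mu nu} = Gamma^k_{l nu,mu} - Gamma^k_{l mu,nu}
    #                 + Gamma^a_{l nu} Gamma^k_{a mu} - Gamma^a_{l mu} Gamma^k_{a nu}
    n = len(x)
    return [[[[sp.simplify(sp.diff(Gamma[k][l][nu], x[mu])
               - sp.diff(Gamma[k][l][mu], x[nu])
               + sum(Gamma[a][l][nu]*Gamma[k][a][mu]
                     - Gamma[a][l][mu]*Gamma[k][a][nu] for a in range(n)))
               for nu in range(n)] for mu in range(n)]
               for l in range(n)] for k in range(n)]

Gamma = christoffel(g, ginv, x)
R = riemann(Gamma, x)
print(R[0][1][0][1])   # the R^t_{r t r} component, as a quick sanity check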
§ COMPUTATION OF THE NORM OF THE COVARIANT DERIVATIVE OF RIEMANN TENSOR IN THE KERR-NEWMAN-(ANTI-)DE SITTER SPACETIME
We start our computations with the analytic calculation of the Karlhede curvature invariant for the Kerr-Newman-(anti-)de Sitter black hole.
We calculated in closed analytic form the Karlhede invariant for the Kerr-Newman-(anti-)de Sitter black hole. Our result is:
𝔎≡∇_μR_αβγδ∇^μR^αβγδ
=1/3 (r^2+a^2cos(θ)^2)^9[720 Λcos(θ)^12 a^12 m^2-720 a^10(Λ a^2 m^2+28 Λ m^2 r^2-16 Λ m q^2 r
+76/45Λ q^4-3 m^2) cos(θ)^10+19440 (23 Λ m^2 r^4/9-8 Λ m q^2 r^3/3+(Λ a^2 m^2+248/405Λ q^4-3 m^2) r^2
-16 m (a^2 q^2Λ +3/8 m^2-3 q^2) r/27+76 q^2(a^2 q^2Λ +135/76 m^2-3 q^2)/1215) a^8cos(θ)^8-30240 a^6(-22 Λ m q^2 r^5/35
+(Λ a^2 m^2+278/945Λ q^4-3 m^2) r^4
-10 m (a^2 q^2Λ +14/5 m^2-3 q^2) r^3/7+356 q^2(a^2 q^2Λ +1755/178 m^2-3 q^2) r^2/945
+2 q^2(a^2-151 q^2/45) m r/7-22 a^2 q^4/315+16 q^6/315) cos(θ)^6-30240 a^4 r^2(23 Λ m^2 r^6/14-166 Λ m q^2 r^5/105+(Λ a^2 m^2
+278/945Λ q^4-3 m^2) r^4-74 (a^2 q^2Λ -525/37 m^2-3 q^2) m r^3/105-103 m^2 q^2 r^2/7-2 q^2(a^2-353 q^2/15) m r/7
+26 a^2 q^4/105-14 q^6/15) cos(θ)^4+19440 a^2 r^4(28 Λ m^2 r^6/27-76 Λ m q^2 r^5/45+(Λ a^2 m^2+248/405Λ q^4-3 m^2) r^4
-44 (a^2 q^2Λ -42/11 m^2-3 q^2) m r^3/27+712 q^2(a^2 q^2Λ -4023/178 m^2-3 q^2) r^2/1215+4 q^2(a^2+295 q^2/27) m r/5-52 a^2 q^4/135
-248 q^6/135) cos(θ)^2-720 (Λ m^2 r^6-12 Λ m q^2 r^5/5+(Λ a^2 m^2+76/45Λ q^4-3 m^2) r^4
-12 (a^2 q^2Λ -5/2 m^2-3 q^2) m r^3/5+76 q^2(a^2 q^2Λ -783/76 m^2-3 q^2) r^2/45+12 q^2(a^2+65 q^2/9) m r/5
-44 a^2 q^4/15-76 q^6/15) r^6].
We computed analytically the Karlhede invariant for the Kerr-(anti-)de Sitter black hole. Our result is:
∇_μR_αβγδ∇^μR^αβγδ=1/3 (r^2+a^2cos(θ)^2)^9(720 Λcos(θ)^12 a^12 m^2
-720 a^10(Λ a^2 m^2+28 Λ m^2 r^2-3 m^2) cos(θ)^10
+19440 (23 Λ m^2 r^4/9+(Λ a^2 m^2-3 m^2) r^2-2 m^3 r/9) a^8cos(θ)^8
-30240 a^6((Λ a^2 m^2-3 m^2) r^4-4 m^3 r^3) cos(θ)^6
-30240 a^4 r^2(23 Λ m^2 r^6/14+(Λ a^2 m^2-3 m^2) r^4+10 m^3 r^3) cos(θ)^4
+19440 a^2 r^4(28 Λ m^2 r^6/27+(Λ a^2 m^2-3 m^2) r^4+56 m^3 r^3/9) cos(θ)^2
-720 (Λ m^2 r^6+(Λ a^2 m^2-3 m^2) r^4+6 m^3 r^3) r^6)
=240 m^2/(r^2+a^2cos(θ)^2)^9(cos(θ)^4 a^4-4 cos(θ)^3 a^3 r -6 cos(θ)^2 a^2 r^2+4 cos(θ) a r^3+r^4)
×(cos(θ)^4 a^4+4 cos(θ)^3 a^3 r
-6 cos(θ)^2 a^2 r^2-4 cos(θ) a r^3+r^4)
×(Λcos(θ)^4 a^4-Λcos(θ)^2 a^4-Λ a^2 r^2-Λ r^4+3 a^2cos(θ)^2-6 m r +3 r^2).
For zero rotation (i.e a=0), Eqn.(<ref>) reduces to the analytic exact expression of the Karlhede invariant for the Reissner-Nordström-(anti-)de Sitter black hole:
∇_μR_αβγδ∇^μR^αβγδ
=-240 (Λ m^2 r^6-12 Λ m q^2 r^5/5+(76 Λ q^4/45-3 m^2) r^4-12 (-5 m^2/2-3 q^2) m r^3/5+76 q^2(-783 m^2/76-3 q^2) r^2/45+52 q^4 m r/3-76 q^6/15)/r^12
=-(-720 m^2/r^8+1728 q^2 m/r^9-1216 q^4/r^10) (1+q^2/r^2-2 m/r-Λ r^2/3).
We observe from eqn. (<ref>) that the second factor equals Δ_r^KN/r^2 evaluated at a=0, so the Karlhede invariant for the Reissner-Nordström-(anti-)de Sitter black hole vanishes on the black hole horizons.
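The equality of the two forms quoted in eqn. (<ref>), and hence the horizon behaviour, can be confirmed symbolically; a short SymPy check (ours) is:

import sympy as sp

r, m, q, L = sp.symbols('r m q Lambda', positive=True)

expanded = -240*(L*m**2*r**6 - sp.Rational(12, 5)*L*m*q**2*r**5
    + (sp.Rational(76, 45)*L*q**4 - 3*m**2)*r**4
    - sp.Rational(12, 5)*(-sp.Rational(5, 2)*m**2 - 3*q**2)*m*r**3
    + sp.Rational(76, 45)*q**2*(-sp.Rational(783, 76)*m**2 - 3*q**2)*r**2
    + sp.Rational(52, 3)*q**4*m*r - sp.Rational(76, 15)*q**6)/r**12

factored = -(-720*m**2/r**8 + 1728*q**2*m/r**9 - 1216*q**4/r**10) \
           * (1 + q**2/r**2 - 2*m/r - L*r**2/3)

print(sp.simplify(expanded - factored))  # -> 0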
In Fig. <ref> we plot level curves of the Karlhede invariant computed in Eqn. (<ref>), in the r-θ plane, for different sets of values of the physical black hole parameters a, Λ, m.
We computed an explicit algebraic expression for the differential curvature invariant ∇_μC^*_αβγδ∇^μC^*αβγδ
for the Kerr-(anti-)de Sitter black hole spacetime:
∇_μC^*_αβγδ∇^μC^*αβγδ=-240 m^2/(r^2+a^2cos(θ)^2)^9(a^4cos(θ)^4-4 cos(θ)^3 a^3 r -6 a^2cos(θ)^2 r^2+4 cos(θ) a r^3+r^4)
×(a^4cos(θ)^4+4 cos(θ)^3 a^3 r -6 a^2cos(θ)^2 r^2-4 cos(θ) a r^3+r^4)
×(Λcos(θ)^4 a^4-Λcos(θ)^2 a^4-Λ a^2 r^2-Λ r^4+3 a^2cos(θ)^2-6 m r +3 r^2).
It is evident from Eqn.(<ref>) that the differential curvature invariant ∇_μC^*_αβγδ∇^μC^*αβγδ vanishes at the ergosurfaces, i.e. the roots of the equation g_tt=0. It therefore serves as a detector of ergosurfaces for the Kerr-(anti-)de Sitter black hole.
The invariant ∇_μC^*_αβγδ∇^μC^*αβγδ
for the Kerr spacetime is calculated in closed analytic form with the result:
∇_μC^*_αβγδ∇^μC^*αβγδ=
-720 m^2/(r^2+a^2cos(θ)^2)^9(cos(θ)^4 a^4-4 cos(θ)^3 a^3 r -6 cos(θ)^2 a^2 r^2+4 cos(θ) a r^3+r^4)
×(cos(θ)^4 a^4+4 cos(θ)^3 a^3 r -6 cos(θ)^2 a^2 r^2-4 cos(θ) a r^3+r^4) (a^2cos(θ)^2-2 m r +r^2).
We calculated in closed analytic form the invariant ∇_μC^*_αβγδ∇^μC^*αβγδ
for the Reissner-Nordström black hole. Indeed, our computation yields:
∇_μC^*_αβγδ∇^μC^*αβγδ=48 (15 m^2 r^2-36 m q^2 r +22 q^4) (2 m r -q^2-r^2)/r^12
§ ANALYTIC COMPUTATION OF THE LAKE-ABDELQADER LOCAL INVARIANTS FOR THE KERR-NEWMAN-(ANTI-)DE SITTER BLACK HOLE AND THEIR ROLE IN DETECTING BLACK HOLE HORIZONS AND/OR ERGOSURFACES
§.§ Rotating black holes with Λ≠0
Our analytic computation for the invariant I_3 for a Kerr-Newman-(anti-)de Sitter black hole yields the result:
I_3=1/(r^2+a^2cos(θ)^2)^9(240 Λ a^12cos(θ)^12 m^2-240 [Λ a^2 m^2+28 m^2 r^2Λ -16 m q^2 r Λ +22/15 q^4Λ
-3 m^2] a^10cos(θ)^10+6480 {23 m^2 r^4Λ/9-8 m q^2 r^3Λ/3+(Λ a^2 m^2+254/405 q^4Λ -3 m^2) r^2
-16 (a^2 q^2Λ +3/8 m^2-3 q^2) m r/27+22 q^2(a^2 q^2Λ +45/22 m^2-3 q^2)/405} a^8cos(θ)^8-10080 [-22 m q^2 r^5Λ/35
+(Λ a^2 m^2+92/315 q^4Λ -3 m^2) r^4-10 (a^2 q^2Λ +14/5 m^2-3 q^2) m r^3/7+122 (a^2 q^2Λ +585/61 m^2-3 q^2) q^2 r^2/315
+2 q^2(a^2-10 q^2/3) m r/7-2 a^2 q^4/35+q^6/21] a^6cos(θ)^6-10080 {23 m^2 r^6Λ/14-166 m q^2 r^5Λ/105+(Λ a^2 m^2+92/315 q^4Λ
-3 m^2) r^4-74 m (a^2 q^2Λ -525/37 m^2-3 q^2) r^3/105-103 m^2 q^2 r^2/7-2 (a^2-352 q^2/15) q^2 m r/7+2 a^2 q^4/7
-97 q^6/105} a^4 r^2cos(θ)^4+6480 [28 m^2 r^6Λ/27-76 m q^2 r^5Λ/45+(Λ a^2 m^2+254/405 q^4Λ -3 m^2) r^4
-44 m (a^2 q^2Λ -42/11 m^2-3 q^2) r^3/27+244 q^2(a^2 q^2Λ -1341/61 m^2-3 q^2) r^2/405+4 (a^2+298 q^2/27) q^2 m r/5-4 a^2 q^4/9
-254 q^6/135] a^2 r^4cos(θ)^2-240 [m^2 r^6Λ -12 m q^2 r^5Λ/5+(Λ a^2 m^2+22/15 q^4Λ -3 m^2) r^4
-12 (a^2 q^2Λ -5/2 m^2-3 q^2) m r^3/5+22 (a^2 q^2Λ -261/22 m^2-3 q^2) q^2 r^2/15+(12/5 a^2 m q^2+16 q^4 m ) r -12 a^2 q^4/5
-22 q^6/5] r^6).
We computed the invariants I_5, I_6 for the Kerr and Kerr-(anti-)de Sitter metrics:
We computed the explicit algebraic expression for the invariant I_6 for the Kerr-(anti-)de Sitter black hole. The result is:
I_6=-1354752m^4 a^2/(r^2+a^2cos(θ)^2)^15(Λcos(θ)^16 a^14 r^2-48 (163 Λ r^4/16+(Λ a^2-3) r^2-m r/8+a^2/16) a^12cos(θ)^14/49
+64/7(211 Λ r^4/64+(Λ a^2-3) r^2-9 m r/16-3 a^2/64) r^2 a^10cos(θ)^12-144 r^4/7[139 Λ r^4/144+(Λ a^2-3) r^2-73 m r/24
+a^2/16] a^8cos(θ)^10
-15 r^6(139/15Λ r^4+424/5 m r +a^2) a^6cos(θ)^8/7+144 r^8 a^4/7[211 Λ r^4/144+(Λ a^2-3) r^2+217 m r/24
-5 a^2/48] cos(θ)^6
-64 r^10(489 Λ r^4/448+(Λ a^2-3) r^2+105 m r/16+9 a^2/64) a^2cos(θ)^4/7
+(Λ r^16+(-144/49+48 Λ a^2/49) r^14+6 m r^13-3 a^2 r^12/7) cos(θ)^2-3 r^14/49).
For the Kerr metric (Λ=0) we find:
I_6=82944m^4 a^2/(r^2+a^2cos(θ)^2)^15(a^12(a^2-2 m r -48 r^2) cos(θ)^14+7 a^10 r^2(a^2+12 m r +64 r^2) cos(θ)^12
+21 r^4(a^2-146/3 m r -48 r^2) a^8cos(θ)^10+(35 a^8 r^6+2968 a^6 m r^7) cos(θ)^8+[35 r^8 a^6-3038 a^4 m r^9
+1008 r^10 a^4] cos(θ)^6+(21 r^10 a^4+980 a^2 m r^11-448 a^2 r^12) cos(θ)^4
+(7 a^2 r^12-98 m r^13+48 r^14) cos(θ)^2+r^14).
We computed the following explicit analytic expression for the differential invariant I_5 for the spacetime of the Kerr-(anti-)de Sitter black hole:
I_5=-27648 m^4/(r^2+a^2cos(θ)^2)^15(Λcos(θ)^18 a^18-a^16(Λ a^2+42 Λ r^2-3) cos(θ)^16+42 a^14((Λ r^2-1/14) a^2
+73 Λ r^4/6-3 r^2) cos(θ)^14-462 ((Λ r^2+1/22) a^2-7 (-205/42Λ r^3+m +33/7 r ) r/11) r^2 a^12cos(θ)^12
+994 ((Λ r^2-9/142) a^2-210 (-7/20Λ r^3+m +71/70 r ) r/71) r^4 a^10cos(θ)^10
+(-105 a^10 r^6+[1029 Λ r^10 +9114 m r^7] a^8) cos(θ)^8-994 [(Λ r^2+15/142) a^2+205 Λ r^4/142+636 m r/71
-3 r^2] r^8 a^6cos(θ)^6+462 r^10 a^4((Λ r^2-3/22) a^2+73 r (1/6Λ r^3+m -33/73 r )/11) cos(θ)^4
-42 r^12((Λ r^2+1/2) a^2+Λ r^4+6 m r -3 r^2) a^2cos(θ)^2+r^14((Λ r^2-3) a^2+Λ r^4+6 m r -3 r^2))
For Λ=0 (i.e. the Kerr metric), the invariant I_5 becomes:
I_5=-82944 m^4/(r^2+a^2cos(θ)^2)^15(a^16cos(θ)^16+(-a^16-42 a^14 r^2) cos(θ)^14-7 (a^2-14 m r -66 r^2) r^2 a^12cos(θ)^12
+(-21 r^4 a^12+(-980 m r^5-994 r^6) a^10) cos(θ)^10+(-35 a^10 r^6+3038 m r^7 a^8) cos(θ)^8
+(-35 r^8 a^8+(-2968 m r^9+994 r^10) a^6) cos(θ)^6-21 (a^2-146/3 m r +22 r^2) r^10 a^4cos(θ)^4
-7 r^12(a^2+12 m r -6 r^2) a^2cos(θ)^2-a^2 r^14+2 m r^15-r^16)
In Fig. <ref> we display contour plots of the differential invariant I_6, eqn.(<ref>), for the Kerr-(anti-)de Sitter black hole.
We calculated an exact algebraic expression for the invariant I_3 in the case of the Kerr-(anti-)de Sitter black hole. The result is:
I_3=240 m^2/(r^2+a^2cos(θ)^2)^9(a^4cos(θ)^4-4 cos(θ)^3 a^3 r -6 a^2cos(θ)^2 r^2+4 cos(θ) a r^3+r^4)
×(a^4cos(θ)^4+4 cos(θ)^3 a^3 r -6 a^2cos(θ)^2 r^2-4 cos(θ) a r^3+r^4)
×(Λ a^4cos(θ)^4-Λcos(θ)^2 a^4-Λ a^2 r^2-Λ r^4+3 a^2cos(θ)^2-6 m r +3 r^2)
The closed form analytic solution for the curvature invariant I_4 in the Kerr-(anti-)de Sitter spacetime is:
I_4=1920 cos(θ)r m^2 a/(r^2+a^2cos(θ)^2)^9(Λcos(θ)^4 a^4-Λcos(θ)^2 a^4-Λ a^2 r^2-Λ r^4+3 a^2cos(θ)^2-6 m r +3 r^2)
×(a^2cos(θ)^2-2 cos(θ) a r -r^2) (a^2cos(θ)^2+2 cos(θ) a r -r^2) (a^2cos(θ)^2-r^2)
In the following theorem we compute the invariant I_7≡ k_μl^μ:
The closed form analytic expression for the invariant I_7 in the Kerr-(anti-)de Sitter spacetime is given by:
I_7=-27648 m^4 r cos(θ) a/(r^2+a^2cos(θ)^2)^15(7 cos(θ)^6 a^6-35 a^4cos(θ)^4 r^2+21 cos(θ)^2 a^2 r^4-r^6)
×(cos(θ)^6 a^6-21 a^4cos(θ)^4 r^2+35 cos(θ)^2 a^2 r^4-7 r^6)
×(Λcos(θ)^4 a^4-Λcos(θ)^2 a^4-Λ a^2 r^2-Λ r^4+3 a^2cos(θ)^2-6 m r +3 r^2)
The analytic computation of the invariant I_4, for the case of the Kerr-Newman-(anti-)de Sitter black hole yields the result:
I_4^KN(a)dS=1920 a cos(θ) /(r^2+a^2cos(θ)^2)^9(a^10 m Λ(m r -3 q^2/10) cos(θ)^10-a^8(7 Λ m^2 r^3-57 Λ m q^2 r^2/10+[a^2 m^2Λ +29/30 q^4Λ
-3 m^2] r -3 m q^2(Λ a^2-3)/10) cos(θ)^8+6 a^6(Λ m^2 r^5-83 m q^2 r^4Λ/60+(a^2 m^2Λ +37/90 q^4Λ -3 m^2) r^3
-11 (a^2Λ q^2+12/11 m^2-3 q^2) m r^2/12+29 (a^2Λ q^2+126/29 m^2-3 q^2) q^2 r/180+m q^2(a^2-2 q^2)/20) cos(θ)^6
+37 a^4 r /10(60 Λ m^2 r^6/37-33 r^5 m q^2Λ/37+m (a^2Λ q^2+420/37 m^2-3 q^2) r^3-19 (a^2Λ q^2+498/19 m^2-3 q^2) q^2 r^2/37
-27 (a^2-178 q^2/27) m q^2 r/37+12 q^4 a^2/37-17 q^6/37) cos(θ)^4-6 a^2 r^3(7 Λ m^2 r^6/6-3 r^5 m q^2Λ/2+[a^2 m^2Λ +37/90 q^4Λ
-3 m^2] r^4-5 m (a^2Λ q^2-28/5 m^2-3 q^2) r^3/4+19 (a^2Λ q^2-750/19 m^2-3 q^2) q^2 r^2/60+(a^2+418 q^2/15) m q^2 r/4
-37 q^6/30) cos(θ)^2+r^5(Λ m^2 r^6-2 r^5 m q^2Λ +(a^2 m^2Λ +29/30 q^4Λ -3 m^2) r^4-2 m (a^2Λ q^2-3 m^2-3 q^2) r^3
+(29/30 a^2 q^4Λ -15 m^2 q^2-29/10 q^4) r^2+3 m q^2(a^2+118 q^2/15) r/2-6 q^4 a^2/5-29 q^6/10)).
In <cit.>, the following syzygy was discovered for the Kerr metric:
I_6-I_5+12/5(I_1I_3-I_2I_4)=0.
We have discovered, using our explicit algebraic expressions for the curvature invariants, that Eqn.(<ref>) also holds for the Kerr-(anti-)de Sitter black hole.
Moreover, we find that the following syzygy is valid for the case of the Kerr-(anti-)de Sitter spacetime [This syzygy was discovered in the Kerr case in <cit.>. Our result is that this syzygy is also satisfied for the Kerr-(anti-)de Sitter black hole. ]:
I_7=6/5(I_1I_4+I_2I_3)
The syzygies (<ref>) and (<ref>) may be expressed as the real and imaginary part of the complex syzygy <cit.>:
∇_μ(I_1+iI_2)∇^μ(I_1+iI_2)=12/5(I_1+iI_2)(I_3+iI_4).
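Both syzygies can be spot-checked numerically using only the Newman-Penrose relations collected in Section <ref> (so this verifies the mutual consistency of those relations rather than recomputing I_3, I_4 from the Weyl tensor directly). A sketch for the Kerr case, with illustrative parameter values:

import sympy as sp

r, th = sp.symbols('r theta', real=True)
m, a = sp.Integer(1), sp.Rational(52, 100)

rho2 = r**2 + a**2*sp.cos(th)**2
Delta = r**2 - 2*m*r + a**2
grr, gthth = Delta/rho2, 1/rho2            # inverse-metric r, theta components

Psi2 = -m/(r - sp.I*a*sp.cos(th))**3

def dot(A, B):                             # grad A . grad B for f(r, theta)
    return grr*sp.diff(A, r)*sp.diff(B, r) + gthth*sp.diff(A, th)*sp.diff(B, th)

C = sp.expand_complex(48*Psi2**2)          # I1 + i I2
I1, I2 = sp.re(C), sp.im(C)
D = sp.expand_complex(80*dot(Psi2, Psi2))  # I3 + i I4
I3, I4 = sp.re(D), sp.im(D)
I5, I6, I7 = dot(I1, I1), dot(I2, I2), dot(I1, I2)

pt = {r: sp.Integer(3), th: sp.pi/5}
print(sp.N((I6 - I5 + sp.Rational(12, 5)*(I1*I3 - I2*I4)).subs(pt)))  # ~ 0
print(sp.N((I7 - sp.Rational(6, 5)*(I1*I4 + I2*I3)).subs(pt)))        # ~ 0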
We also note from (<ref>) that the invariant I_7 vanishes on the boundary of the ergosphere region.
Indeed, the stationary limit surfaces of the rotating Kerr-(anti-)de Sitter black hole are defined by g_tt=0. The metric element g_tt in this case is given by:
g_tt=-a^2+q^2-2 m r +r^2-Λ(a^2+r^2) r^2/3/(r^2+a^2cos(θ)^2) (1+a^2Λ/3)+(1+a^2Λcos(θ)^2/3) sin(θ)^2 a^2/(r^2+a^2cos(θ)^2) (1+a^2Λ/3)
=-1/3ρ^2Ξ^2(Λcos(θ)^4 a^4-Λcos(θ)^2 a^4-Λ a^2 r^2-Λ r^4+3 a^2cos(θ)^2-6 m r +3 r^2)=0.
The exact analytic expression for the differential curvature invariant Q_1 for the Kerr-(anti-) de Sitter black hole is the following:
Q_1=-(Λcos(θ)^4 a^4-Λcos(θ)^2 a^4-Λ a^2 r^2-Λ r^4+3 a^2cos(θ)^2-6 m r +3 r^2) (a^2cos(θ)^2-r^2)/3 (r^2+a^2cos(θ)^2)^3/2 m
The invariant Q_1 vanishes on the boundary of the ergosphere region.
For Λ=0, Eqn.(<ref>) reduces to:
Q_1=-(3 a^2cos(θ)^2-6 m r +3 r^2) (a^2cos(θ)^2-r^2)/3 (r^2+a^2cos(θ)^2)^3/2 m
This agrees with the result for the invariant Q_1 for the Kerr black hole, derived in <cit.>.
The exact explicit algebraic expression for the differential curvature invariant Q_3 for the Kerr-(anti-) de Sitter black hole is the following:
Q_3=-Λcos(θ)^4 a^4-3 a^2cos(θ)^2+Λcos(θ)^2 a^4-Λ a^2 r^2+6 a^2-Λ r^4-6 m r +3 r^2/6 m √(r^2+a^2cos(θ)^2).
For zero cosmological constant, Eqn.(<ref>) reduces to:
Q_3=-a^2cos(θ)^2+2 a^2-2 m r +r^2/2 m √(r^2+a^2cos(θ)^2).
This agrees with the result for the local invariant Q_3 for the Kerr black hole, derived in <cit.>.
Returning to the syzygies (<ref>) and (<ref>), we remark the following:
Equations (<ref>) and (<ref>) do not hold in the case of the Kerr-Newman-(anti-)de Sitter black hole.
Abdelqader and Lake defined the following dimensionless invariant χ, in order to construct an invariant measure of the Kerrness of a spacetime locally <cit.>:
χ≡I_6-I_5+12/5(I_1I_3-I_2I_4)/(I_1^2+I_2^2)^5/4.
Indeed, we computed this invariant for the Kerr-Newman-(anti-) de Sitter black hole.
We present our result for the Kerr-Newman black hole:
The invariant χ for the Kerr-Newman black hole takes the form:
χ|_Λ=0 = 44√(3) q^2/(5 (cos(θ)^2 a^2 m^2+(r m -q^2)^2)^5/2(r^2+a^2cos(θ)^2)^4)(a^12 m^2(r m -19 q^2/66) cos(θ)^12-2 [-7 q^6/132
-17 m (m -176 r/17) q^4/132-3 m^2(a^2-25/9 r m +427/18 r^2) q^2/11+m^3 r (a^2-r m +8 r^2)] a^10cos(θ)^10+30 a^8(-q^8/396
+(-1/165 a^2+79/990 r m -17/180 r^2) q^6+9 (a^2-1079/324 r m +143/27 r^2) m r q^4/55-9 m^2 r^2(a^2-71/45 r m +307/108 r^2) q^2/11
+m^3 r^3(a^2-r m +19/10 r^2)) cos(θ)^8-84 r^2[-q^8/36+(-2/33 a^2+7/22 r m -41/396 r^2) q^6
+6 m (a^2-475/216 r m +11/9 r^2) r q^4/11-15 m^2(a^2-61/45 r m +89/90 r^2) r^2 q^2/11+m^3 r^3(a^2-r m +6/7 r^2)] a^6cos(θ)^6
+60 r^4(-7 q^8/66+(-7/33 a^2+392/495 r m -287/1980 r^2) q^6+63 (a^2-601/324 r m +121/189 r^2) m r q^4/55
-21 m^2(a^2-401/315 r m +307/504 r^2) r^2 q^2/11+m^3 r^3(a^2-r m +7/12 r^2)) a^4cos(θ)^4-10 [-91 q^8/330
+(-28/55 a^2+263/165 r m -17/60 r^2) q^6+108 m (a^2-2213/1296 r m +44/81 r^2) r q^4/55-27 m^2(a^2-167/135 r m +427/810 r^2) r^2 q^2/11
+m^3 r^3(a^2-r m +28/55 r^2)] r^6 a^2cos(θ)^2
+2 (r m -q^2)^2 r^8(-7 q^4/12+(-a^2+5/3 r m -7/12 r^2) q^2+m r (a^2-r m +1/2 r^2))/11).
In Fig.<ref> we display contour plots for the scalar invariant χ, eqn.(<ref>), for the Kerr-Newman black hole.
§ DIFFERENTIAL INVARIANTS FOR ACCELERATING KERR-NEWMAN BLACK HOLES IN (ANTI-)DE SITTER SPACETIME
§.§ Accelerating and rotating charged black holes with non-zero cosmological constant Λ
The Plebański-Demiański metric covers a large family of solutions which include the physically most significant case: that of an accelerating, rotating and charged black hole with a non-zero cosmological constant <cit.>. We focus on the following metric that describes an accelerating Kerr-Newman black hole in (anti-)de Sitter spacetime <cit.>,<cit.>:
ds^2 =1/Ω^2{-Q/ρ^2[dt-asin^2θdϕ]^2+
ρ^2/Qdr^2+ρ^2/Pdθ^2
+P/ρ^2sin^2θ[adt-(r^2+a^2)dϕ]^2},
where
Ω= 1-α r cosθ,
P= 1-2α m cosθ+(α^2(a^2+q^2)+1/3Λ a^2)cos^2θ,
Q= ((a^2+q^2)-2mr+r^2)(1-α^2r^2)
-1/3Λ(a^2+r^2)r^2,
and α is the acceleration of the black hole.
The metric (<ref>) becomes singular at the roots of Ω, ρ^2,Q,P. Some of them are pseudosingularities (mere coordinate singularities) while others are true (curvature) singularities detected by the curvature invariants.
We shall discuss the influence of the acceleration parameter α on these singularities.
Ω becomes zero if:
r=1/αcosθ.
As the metric blows up when Ω→ 0, Eq. (<ref>) determines the boundary of the spacetime; thus we have to restrict to regions where Ω>0.
For α=0 there is no restriction because Ω=1.
ρ^2 becomes zero at the ring singularity:
r=0 and cosθ=0
The ring singularity r=0,θ=π/2 is a curvature singularity for (m≠0) and is unaffected by α.
The real roots of Q, yield coordinate singularities which correspond to the up to 4 horizons of the spacetime. We investigated these pseudosingularities in <cit.>.
In general, the roots of P would be coordinate singularities, too. These would indicate further horizons where the vector field ∂_θ would change its causal character, just as the vector field ∂_r does at the roots of Q. Nevertheless, since these horizons would lie on cones θ= constant instead of on spheres r= constant, they would hardly be of any physical relevance <cit.>. The equation P=0 is a quadratic equation for cos(θ):
P=0⇔ a_1cos^2(θ)+b_1 cos(θ)+c_1=0,
where a_1≡(α^2(a^2+q^2)+1/3Λ a^2),b_1≡-2α m,c_1=1.
The roots of the quadratic equation are given by the formula:
cosθ_±=-b_1±√(b_1^2-4 a_1)/2 a_1.
If the radicand (i.e. the discriminant) in (<ref>) is negative, P≠0 is guaranteed for all θ∈ℝ. In fact, in this case, for Λ>0 we have P>0 for θ∈[0,π], since the leading coefficient in (<ref>) is positive.
For a positive radicand (discriminant), from the theory of quadratic algebraic equations we know that P has the opposite sign to a_1 for values of cosθ between the two real roots, and the same sign as the leading coefficient a_1 for values of cosθ outside the two roots
[Nevertheless, for the values of the physical black hole parameters we investigated, the real roots of P occur for θ∉ℝ.].
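Numerically, the candidate roots cosθ_± are easily inspected; in the following sketch (illustrative parameter values, chosen to match those used in our plots) both roots exceed 1 in magnitude, so P > 0 throughout θ∈[0,π]:

import numpy as np

m, a, q, alpha, Lam = 1.0, 0.52, 0.1, 0.1, 3.6e-33
a1 = alpha**2*(a**2 + q**2) + Lam*a**2/3
b1 = -2*alpha*m

disc = b1**2 - 4*a1
if disc < 0:
    print("no real roots: P != 0 for all theta")
else:
    print(np.roots([a1, b1, 1.0]))  # here ~ [65.9, 5.4]: both > 1, so P > 0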
For Λ=0, if m^2≥ a^2+q^2 the expression for Q factorises as:
Q=(r_--r)(r_+-r)(1-α^2 r^2),
where
r_±=m±√(m^2-a^2-q^2).
The expressions for the radii r_± are identical to those for the location of the outer and inner horizons of the non-accelerating Kerr-Newman black hole. However, in the present case there is another horizon at r=α^-1, known in the context of the C-metric as an acceleration horizon.
When Λ≠0, the location of all horizons is modified.
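For Λ≠0 the horizon locations are the real roots of the quartic Q(r). Collecting Q in powers of r, they are obtained numerically as in the following sketch (parameter values illustrative; with Λ≈0 it reproduces r_± and the acceleration horizon r=1/α):

import numpy as np

m, a, q, alpha, Lam = 1.0, 0.52, 0.0, 0.1, 3.6e-33

# Q(r) = ((a^2+q^2) - 2 m r + r^2)(1 - alpha^2 r^2) - (Lam/3)(a^2 + r^2) r^2,
# collected as c4 r^4 + c3 r^3 + c2 r^2 + c1 r + c0:
c4 = -(alpha**2 + Lam/3)
c3 = 2*m*alpha**2
c2 = 1 - alpha**2*(a**2 + q**2) - Lam*a**2/3
c1 = -2*m
c0 = a**2 + q**2

roots = np.roots([c4, c3, c2, c1, c0])
print(np.sort(roots.real))   # all four roots are real here:
# approx [-10, 0.146, 1.854, 10] = Cauchy, event and acceleration horizons
# (the negative root is unphysical)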
We computed in closed analytic form the invariant I_3≡∇_μC_αβγδ∇^μC^αβγδ for an accelerating Kerr black hole. Our result is:
∇_μC_αβγδ∇^μC^αβγδ
=-720 (α r cos(θ)-1)^7 m^2/(r^2+a^2cos(θ)^2)^9(α a^10(a^4α^4 r -2 α^2(α^2 m r^2-m -r ) a^2+2 α^2 m r^2-2 m +r ) cos(θ)^11
+(a^6α^4+(-2 α^4 m r +2 α^2) a^4+(34 α^4 m r^3-38 α^2 m r +1) a^2
-2 m r^3α^2) a^8cos(θ)^10
-2 [α^2(27 r^3α^2/2+m ) a^4+(-28 m r^4α^4+31 α^2 m r^2
+27 r^3α^2-m ) a^2+44 α^2 m r^4-43 m r^2+27 r^3/2] α a^8cos(θ)^9-27 (a^6 r α^4-22 (α^2 m r^2+17/11 m -27/11 r ) α^2 a^4/27
+(280/27 m r^4α^4-394/27α^2 m r^2+2/27 m +r ) a^2-56 α^2 m r^4/27) a^6 r cos(θ)^8+56 (α^2(3 r^3α^2/4+m ) a^4
+(-5/2 m r^4α^4+85/14α^2 m r^2+3/2 r^3α^2-11/7 m ) a^2+13 (α^2 m r^2-11/13 m +3/26 r ) r^2/2) α a^6 r^2cos(θ)^7+42 a^4 r^3(
a^6 r α^4+10 (α^2 m r^2-2 m +3/5 r ) α^2 a^4/3+(26/3 m r^4α^4-50/3α^2 m r^2+4/3 m +r ) a^2-10 α^2 m r^4/3) cos(θ)^6
-140 [α^2(-3 r^3α^2/10+m ) a^4+(-2/5 m r^4α^4+5 α^2 m r^2-3/5 r^3α^2-13/5 m ) a^2+2 r^2(α^2 m r^2-1/2 m -3/20 r )
] α a^4 r^4cos(θ)^5+42 {a^6 r α^4-22 (α^2 m r^2-13/11 m -3/11 r ) α^2 a^4/3+(-44/21 m r^4α^4+170/21α^2 m r^2-10/3 m +r ) a^2
+4 α^2 m r^4/3} a^2 r^5cos(θ)^4+56 α a^2 r^6(α^2(-27 r^3α^2/56+m ) a^4+(-1/28 m r^4α^4+197/28α^2 m r^2-27/28 r^3α^2
-5 m ) a^2+17 (α^2 m r^2+11/17 m -27/34 r ) r^2/28) cos(θ)^3-27 (a^6 r α^4-86 (α^2 m r^2-44/43 m -27/43 r ) α^2 a^4/27
+(-2/27 m r^4α^4+62/27α^2 m r^2-56/27 m +r ) a^2+2 α^2 m r^4/27) r^7cos(θ)^2-2 α(α^2(-r^3α^2/2+m ) a^4
+(19 α^2 m r^2-r^3α^2-17 m ) a^2+r^2(m -r/2)) r^8cos(θ)+r^9(a^4α^4 r -2 α^2(α^2 m r^2-m -r ) a^2
+2 α^2 m r^2-2 m +r )).
We have computed the following analytic expression for the invariant I_4 for the accelerating Kerr black hole:
I_4=-5760 m^2(α r cos(θ)-1)^7 a/(r^2+a^2cos(θ)^2)^9((-1/2α^4 m r^2+1/2α^2 m ) a^10cos(θ)^11
+(a^4 r α^4-2 α^2(α^2 m r^2-m -r ) a^2+5 α^2 m r^2/2-5 m/2+r ) α a^8 r cos(θ)^10
+(a^6 r α^4-3 α^2(α^2 m r^2+1/3 m -4/3 r ) a^4/2+(16 α^4 m r^4-20 α^2 m r^2+r ) a^2-2 α^2 m r^4) a^6cos(θ)^9
-2 [α^2(3 α^2 r^3+m ) a^4+(-7 α^4 m r^4+(41/4 m r^2+6 r^3) α^2-5 m/4) a^2+14 α^2 m r^4-13 m r^2
+3 r^3] α a^6 r cos(θ)^8-6 (a^6 r α^4+α^2(α^2 m r^2-8 m +6 r ) a^4/3+(49/6α^4 m r^4-79/6α^2 m r^2+1/3 m +r ) a^2
-7 α^2 m r^4/3) a^4 r^2cos(θ)^7+14 (α^2 a^4+(-α^4 r^4+5 α^2 r^2-2) a^2+7 r^4α^2/2-5 r^2/2) α m a^4 r^3cos(θ)^6
+35 m a^2 r^4((α^4 r^2-7/5α^2) a^4+(4/5α^4 r^4-2 α^2 r^2+2/5) a^2-2 r^4α^2/5) cos(θ)^5-14 [α^2(-3 α^2 r^3/7+m ) a^4
+(-α^4 m r^4/7+79 (m -12 r/79) r^2α^2/14-7 m/2) a^2+8 (α^2 m r^2-1/8 m -3/8 r ) r^2/7] α a^2 r^5cos(θ)^4+6 (a^6 r α^4
-13 α^2(α^2 m r^2-14/13 m -6/13 r ) a^4/3+(-5/12α^4 m r^4+41/12α^2 m r^2-7/3 m +r ) a^2+α^2 m r^4/3) r^6cos(θ)^3
+2 (α^2(-α^2 r^3/2+m ) a^4+((10 m r^2-r^3) α^2-8 m ) a^2+r^2(α^2 m r^2+3 m -2 r )/4) α r^7cos(θ)^2
-(a^4 r α^4-5 α^2(α^2 m r^2-m -4/5 r ) a^2/2+2 α^2 m r^2-2 m +r ) r^8cos(θ)-α^3 m r^11/2+α m r^9/2).
For zero acceleration α=0 the invariant I_4 takes the form:
I_4 =5760 m^2 a r cos(θ)/(r^2+a^2cos(θ)^2)^9(a^2cos(θ)^2-2 cos(θ) a r -r^2)
×(a^2cos(θ)^2+2 cos(θ) a r -r^2) (a^2cos(θ)^2-2 m r +r^2) (a^2cos(θ)^2-r^2)
The analytic computation of the invariant I_5 for the accelerating Kerr black hole yields the result:
I_5=-82944 m^4(α r cos(θ)-1)^13/(r^2+a^2cos(θ)^2)^15∑_i=0^17T_i (cos(θ))^i,
where the coefficients T_i are given below:
T_0=-((α^6 r^2-α^4) a^6+(-2 m r^3α^6+2 m α^4 r +α^4 r^2-2 α^2) a^4+(4 m r^3α^4-4 m r α^2-α^2 r^2-1) a^2
-2 (α^2 r^2 m -m +1/2 r ) r ) r^14,
T_1=-((α^6 r^3+13 α^4 r ) a^6+(-62 m r^2α^4+r^3α^4+64 m α^2+26 r α^2) a^4+[60 α^2 r^2 m -α^2 r^3-56 m
+13 r ] a^2+2 m r^2-r^3) α r^14,
T_2=+42 ((α^6 r^2+1/6α^4) a^8+(-8/3 m r^3α^6-13/42 r^4α^6+10/3 m α^4 r +α^4 r^2+1/3α^2) a^6
+(-1/21 m r^5α^6+16 m r^3α^4-13/21α^4 r^4-44/3 m r α^2-α^2 r^2+1/6) a^4+2/21[m r^4α^4-16 α^2 r^2 m -13/4α^2 r^3
+21 m -21/2 r ] r a^2
-α^2 m r^5/21) r^12,
T_3=-98 a^2α(α^4(-3/7α^2 r^3+m -51/14 r ) a^6+148 (-1/2072 r^5α^4+α^2 r^2 m -3/148α^2 r^3-71/74 m -51/148 r ) α^2 a^4/7
+(32/49 m r^4α^4-1/49 r^5α^4-872/49α^2 r^2 m +3/7α^2 r^3+111/7 m -51/14 r ) a^2-4 (α^2 r^2 m +1/56α^2 r^3+m -3/4 r ) r^2/7) r^12,
T_4=-462 a^2((α^6 r^2-1/22α^4) a^8+(-118/33 m r^3α^6-17/22 r^4α^6+59/11 m α^4 r +α^4 r^2-1/11α^2) a^6
+(-10/33 m r^5α^6+652/33 m r^3α^4-17/11α^4 r^4-530/33 m r α^2-α^2 r^2-1/22) a^4
+4 (m r^4α^4-35/22α^2 r^2 m -51/88α^2 r^3+73/44 m -3/4 r ) r a^2/3-2 α^2 m r^5/11) r^10,
T_5=+980 a^4α(α^4(-33/70α^2 r^3+m -33/20 r ) a^6-1/10[m r^4α^4-1/14 r^5α^4-1026/7α^2 r^2 m +33/7α^2 r^3+1004/7 m
+33 r ] α^2 a^4+(71/35 m r^4α^4+1/70 r^5α^4-482/35α^2 r^2 m +33/70α^2 r^3+321/35 m -33/20 r ) a^2
-111 (α^2 r^2 m -1/222α^2 r^3+14/111 m -11/37 r ) r^2/70) r^10,
T_6=+994 ((α^6 r^2+5/142α^4) a^8+(-428/71 m r^3α^6-231/142 r^4α^6+784/71 m α^4 r +α^4 r^2+5/71α^2) a^6
+(-177/71 m r^5α^6+2680/71 m r^3α^4-231/71α^4 r^4-1864/71 m r α^2-α^2 r^2+5/142) a^4+530/71[m r^4α^4-266/265α^2 r^2 m
-231/1060α^2 r^3+2/5 m -71/530 r ] r a^2-73 α^2 m r^5/71) a^4 r^8,
T_7=-3038 (α^4(-71/217α^2 r^3+m -1465/3038 r ) a^6-10/31[m r^4α^4+3/140 r^5α^4-1588/49α^2 r^2 m +71/70α^2 r^3+7919/245 m
+293/98 r ] α^2 a^4+(1004/217 m r^4α^4-3/217 r^5α^4-20864/1519α^2 r^2 m +71/217α^2 r^3+8383/1519 m -1465/3038 r ) a^2
-642 (α^2 r^2 m +1/428α^2 r^3-24/107 m -71/642 r ) r^2/217) a^6α r^8,
T_8=+35 (a^8α^4+(6864/35 m r^3α^6+293/7 r^4α^6-16766/35 m α^4 r +2 α^2) a^6+[1568/5 m r^5α^6-59488/35 m r^3α^4
+586/7α^4 r^4+31676/35 m r α^2+1] a^4+(-3728/5 m r^5α^4+25168/35α^2 r^3 m +293/7α^2 r^4-434/5 m r ) a^2
+424 α^2 m r^5/5) a^6 r^6,
T_9=+2968 a^8α(α^4(1465 r/2968+m ) a^6-217 (m r^4α^4-5/434 r^5α^4-12584/1519α^2 r^2 m +1864/217 m -1465/1519 r ) α^2 a^4/212
+(7919/742 m r^4α^4+5/212 r^5α^4-7436/371α^2 r^2 m +196/53 m +1465/2968 r ) a^2-8383 α^2 m r^4/1484+5 α^2 r^5/424+858 m r^2/371) r^6,
T_10=-994 a^8((α^6 r^2-3/142α^4) a^8+(144/71 m r^3α^6-1465/994 r^4α^6-642/71 m α^4 r +α^4 r^2-3/71α^2) a^6
+(8383/497 m r^5α^6-20864/497 m r^3α^4-1465/497α^4 r^4+1004/71 m r α^2-α^2 r^2-3/142) a^4+[-15838/497 m r^5α^4
+15880/497α^2 r^3 m -1465/994α^2 r^4-70/71 m r -r^2] a^2+217 α^2 m r^5/71) r^4,
T_11=-1022 a^10α(α^4(71/73α^2 r^3+m +231/146 r ) a^6-212 α^2/73[m r^4α^4+5/424 r^5α^4-133/53α^2 r^2 m -71/212α^2 r^3
+5/2 m -231/212 r ] a^4+(1864/73 m r^4α^4-5/73 r^5α^4-2680/73α^2 r^2 m -71/73α^2 r^3+177/73 m +231/146 r ) a^2
-784 (α^2 r^2 m +5/1568α^2 r^3-107/196 m +71/784 r ) r^2/73) r^4,
T_12=+462 a^10((α^6 r^2+1/66α^4) a^8+(-14/33 m r^3α^6-7/2 r^4α^6-37/11 m α^4 r +α^4 r^2+1/33α^2) a^6
+(214/11 m r^5α^6-964/33 m r^3α^4-7 α^4 r^4+142/33 m r α^2-α^2 r^2+1/66) a^4+[-1004/33 m r^5α^4+342/11α^2 r^3 m -7/2α^2 r^4
-7/33 m r -r^2] a^2+70 α^2 m r^5/33) r^2,
T_13=+84 (α^4(11/2α^2 r^3+m +17/4 r ) a^6-73 (m r^4α^4-3/146 r^5α^4-70/73α^2 r^2 m -33/73α^2 r^3+44/73 m -51/73 r ) α^2 a^4/6
+(265/3 m r^4α^4+1/2 r^5α^4-326/3α^2 r^2 m -11/2α^2 r^3+5/3 m +17/4 r ) a^2
-59 (α^2 r^2 m -1/118α^2 r^3-2/3 m +11/59 r ) r^2/2) a^12α r^2,
T_14=-42 ((α^6 r^2-1/42α^4) a^8-4 α^2(m r^3α^4+51/8α^4 r^4+m r α^2-3/4α^2 r^2+1/28) a^6/3
+(37 m r^5α^6-872/21 m r^3α^4-17 α^4 r^4+32/21 m r α^2-α^2 r^2-1/42) a^4+[-142/3 m r^5α^4+148/3α^2 r^3 m -17/2α^2 r^4
-r^2] a^2+7 α^2 m r^5/3) a^12,
T_15=-2 (α^4(21 α^2 r^3+m +13/2 r ) a^6+(-42 m r^4α^6-7/2 r^5α^6+32 m r^2α^4+21 r^3α^4-2 m α^2+13 r α^2) a^4
+(308 m r^4α^4-7 r^5α^4-336 α^2 r^2 m -21 α^2 r^3+m +13/2 r ) a^2-70 α^2 m r^4-7 α^2 r^5/2+56 m r^2-21 r^3) a^14α,
T_16=+(a^6α^6+(-2 α^6 m r -13 α^6 r^2+α^4) a^4+(56 m r^3α^6-60 m α^4 r -26 α^4 r^2-α^2) a^2
-64 m r^3α^4+62 m r α^2-13 α^2 r^2-1) a^16,
T_17=a^16α(a^6α^6 r -2 (α^2 r^2 m -1/2α^2 r^3-m -1/2 r ) α^4 a^4+4 (α^2 r^2 m +1/2α^2 r^3-m -1/4 r ) α^2 a^2
-2 α^2 r^2 m +α^2 r^3+2 m -r )
We computed the invariant I_6 for the accelerating Kerr black hole with the result:
I_6=-82944 a^2 m^4(α r cos(θ)-1)^13/(r^2+a^2cos(θ)^2)^15∑_i=0^17𝒞_i (cos(θ))^i,
where the coefficients 𝒞_i are:
𝒞_0=+r^14(8 α^4 m r^3-8 m r α^2+(α^2 a^2+1)^2),
𝒞_1=-2 r^14α((32 a^2 m α^4-28 m α^2) r^2-15 (α^2 a^2+1)^2 r/2+a^4 m α^4-30 a^2 m α^2+29 m ),
𝒞_2=-48 r^12(m r^5α^4/6-5 α^2(α^2 a^2+1)^2 r^4/16-31 α^2(a^4α^4-162/31α^2 a^2+19/31) m r^3/12
+(a^6α^6+a^4α^4-α^2 a^2-1) r^2+77 (a^4α^4-42/11α^2 a^2+7/11) m r/24-7 a^2(α^2 a^2+1)^2/48),
𝒞_3=+84 r^12(α^2(α^2 a^2+1)^2 r^5/84+(-1/42 a^4 m α^6+5/7 a^2 m α^4-29/42 m α^2) r^4
+(-4/7 a^6α^6-4/7 a^4α^4+4/7α^2 a^2+4/7) r^3+515 (a^4α^4-442/515α^2 a^2-17/515) m r^2/21-49 a^2(α^2 a^2+1)^2 r/12
+a^2 m (a^4α^4-24 α^2 a^2+55/3)) α,
𝒞_4=+448 r^10(-11 α^2(a^4α^4-42/11α^2 a^2+7/11) m r^5/32-49 a^2α^2(α^2 a^2+1)^2 r^4/64
-29 (a^4α^4-164/29α^2 a^2+17/29) a^2α^2 m r^3/8+(a^8α^6+a^6α^4-a^4α^2-a^2) r^2
+87 a^2 m (a^4α^4-268/87α^2 a^2+35/87) r/16+3 a^4(α^2 a^2+1)^2/64),
𝒞_5=-1022 (-α^2(α^2 a^2+1)^2 r^5/146-6 (a^4α^4-24 α^2 a^2+55/3) α^2 m r^4/73
-32 (α a -1) (α a +1) (α^2 a^2+1)^2 r^3/73+1028 (a^4α^4-240/257α^2 a^2-3/257) m r^2/73
-237 a^2(α^2 a^2+1)^2 r/146+a^2 m (a^4α^4-998/73α^2 a^2+645/73)) r^10 a^2α,
𝒞_6=-1008 r^8 a^2(-29 α^2 m (a^4α^4-268/87α^2 a^2+35/87) r^5/12-79 a^2α^2(α^2 a^2+1)^2 r^4/48
-215 a^2α^2(a^4α^4-1338/215α^2 a^2+267/215) m r^3/36+(a^8α^6+a^6α^4-a^4α^2-a^2) r^2
+263 (a^4α^4-618/263α^2 a^2+217/789) a^2 m r/24-5 a^4(α^2 a^2+1)^2/144),
𝒞_7=+2968 r^8 a^4α(3 α^2(α^2 a^2+1)^2 r^5/424-73 (a^4α^4-998/73α^2 a^2+645/73) α^2 m r^4/212
-18 (α a -1) (α a +1) (α^2 a^2+1)^2 r^3/53+7933 (a^4α^4-10446/7933α^2 a^2+497/7933) m r^2/742
-1395 a^2(α^2 a^2+1)^2 r/2968+a^2 m (a^4α^4-3977/371α^2 a^2+2087/371)),
𝒞_8=+35 r^6 a^4(-1578 (a^4α^4-618/263α^2 a^2+217/789) α^2 m r^5/5-279 a^2α^2(α^2 a^2+1)^2 r^4/7
-6864 a^2α^2 m (a^4α^4-26/3α^2 a^2+11/3) r^3/35+16696 a^2 m (a^4α^4-3977/2087α^2 a^2+371/2087) r/35+a^4(α^2 a^2+1)^2),
𝒞_9=-3038 r^6 a^6α(-5 α^2(α^2 a^2+1)^2 r^5/434-212 α^2(a^4α^4-3977/371α^2 a^2+2087/371) m r^4/217
+12584 (a^4α^4-26/11α^2 a^2+3/11) m r^2/1519+45 a^2(α^2 a^2+1)^2 r/98+a^2 m (a^4α^4-1854/217α^2 a^2+789/217)),
𝒞_10=+1008 r^4 a^6(2087 α^2 m (a^4α^4-3977/2087α^2 a^2+371/2087) r^5/126-155 a^2α^2(α^2 a^2+1)^2 r^4/112
+71 a^2α^2(a^4α^4-10446/497α^2 a^2+7933/497) m r^3/36+(a^8α^6+a^6α^4-a^4α^2-a^2) r^2
-215 a^2(a^4α^4-998/645α^2 a^2+73/645) m r/24+a^4(α^2 a^2+1)^2/48),
𝒞_11=+980 r^4 a^8α(α^2(α^2 a^2+1)^2 r^5/28-31 (a^4α^4-1854/217α^2 a^2+789/217) α^2 m r^4/10
+36 (α a -1) (α a +1) (α^2 a^2+1)^2 r^3/35+267 (a^4α^4-446/89α^2 a^2+215/267) m r^2/35+237 a^2(α^2 a^2+1)^2 r/140
+a^2 m (a^4α^4-268/35α^2 a^2+87/35)),
𝒞_12=-448 r^2 a^8(645 α^2(a^4α^4-998/645α^2 a^2+73/645) m r^5/32-237 a^2α^2(α^2 a^2+1)^2 r^4/64
-3 a^2(a^4α^4+80 α^2 a^2-257/3) α^2 m r^3/8+(a^8α^6+a^6α^4-a^4α^2-a^2) r^2-55 (a^4α^4-72/55α^2 a^2+3/55) a^2 m r/16
-a^4(α^2 a^2+1)^2/64),
𝒞_13=-98 (-3 α^2(α^2 a^2+1)^2 r^5/14-10 α^2(a^4α^4-268/35α^2 a^2+87/35) m r^4+[32/7 a^6α^6+32/7 a^4α^4-32/7α^2 a^2
-32/7] r^3+68 (a^4α^4-164/17α^2 a^2+29/17) m r^2/7+7 a^2(α^2 a^2+1)^2 r/2+a^2 m (a^4α^4-6 α^2 a^2+11/7)) r^2 a^10α,
𝒞_14=+48 a^10(385 (a^4α^4-72/55α^2 a^2+3/55) α^2 m r^5/12-343 a^2α^2(α^2 a^2+1)^2 r^4/48
-17 (a^4α^4+26 α^2 a^2-515/17) a^2α^2 m r^3/12+(a^8α^6+a^6α^4-a^4α^2-a^2) r^2
+(-29/24 a^6 m α^4+5/4 a^4 m α^2-1/24 a^2 m ) r +a^4(α^2 a^2+1)^2/48),
𝒞_15=+48 a^12α(7 α^2(α^2 a^2+1)^2 r^5/48-49 (a^4α^4-6 α^2 a^2+11/7) α^2 m r^4/24+(a^6α^6+a^4α^4-α^2 a^2-1) r^3
+19 (a^4α^4-162/19α^2 a^2+31/19) m r^2/12+5 a^2(α^2 a^2+1)^2 r/16-a^4 m α^2/6),
𝒞_16=+15 r ((-58/15 a^4 m α^4+4 a^2 m α^2-2/15 m ) r^2+a^2(α^2 a^2+1)^2 r +56 a^2(α^2 a^2-8/7) m/15) a^12α^2,
𝒞_17=((α^2 a^2+1)^2 r^3-8 a^2 m α^2 r^2+8 a^2 m ) a^14α^3,
We computed the invariant I_7 for an accelerating Kerr black hole:
I_7=-580608 (α r cos(θ)-1)^13 a m^4/(r^2+a^2cos(θ)^2)^15∑_i=0^17ℱ_i (cos(θ))^i,
where
ℱ_0=-r^15α((-4 a^2 m α^4+4 m α^2) r^2+(a^2α^2+1)^2 r +4 a^2α^2 m -4 m )/7,
ℱ_1=+r^14(-α^2(a^2α^2+1)^2 r^3/7-16 m α^2(a^4α^4-23/4 a^2α^2+3/4) r^2/7+(a^6α^6+a^4α^4-a^2α^2-1) r
+18 m (a^4α^4-44/9 a^2α^2+7/9)/7),
ℱ_2=-2 r^13((2/7 a^2 m α^4-2/7 m α^2) r^4+(-1/2 a^6α^6-1/2 a^4α^4+1/2 a^2α^2+1/2) r^3
+230 m (a^4α^4-20/23 a^2α^2-3/115) r^2/7-45 a^2(a^2α^2+1)^2 r/7+a^2 m (a^4α^4-32 a^2α^2+27)) α,
ℱ_3=-25 r^12(-18 m α^2(a^4α^4-44/9 a^2α^2+7/9) r^4/175-18 a^2α^2(a^2α^2+1)^2 r^3/35
-76 a^2(a^4α^4-736/133 a^2α^2+71/133) m α^2 r^2/25+(a^8α^6+a^6α^4-a^4α^2-a^2) r +104 (a^4α^4-7/2 a^2α^2+1/2) a^2 m/25),
ℱ_4=+52 r^11 a^2(-m α^2(a^4α^4-32 a^2α^2+27) r^4/26-25 (a α -1) (a α +1) (a^2α^2+1)^2 r^3/52
+228 (a^4α^4-33/38 a^2α^2-1/38) m r^2/13-5 a^2(a^2α^2+1)^2 r/2+a^2 m (a^4α^4-17 a^2α^2+12)) α
ℱ_5=117 r^10(-8 (a^4α^4-7/2 a^2α^2+1/2) m α^2 r^4/9-10 a^2α^2(a^2α^2+1)^2 r^3/9-40 a^2 m (a^4α^4-29/5 a^2α^2+4/5) α^2 r^2/9
+(a^8α^6+a^6α^4-a^4α^2-a^2) r +22 a^2(a^4α^4-8/3 a^2α^2+1/3) m/3) a^2,
ℱ_6=-286 r^9 a^4(-2 m α^2(a^4α^4-17 a^2α^2+12) r^4/11+(-9/22 a^6α^6-9/22 a^4α^4+9/22 a^2α^2+9/22) r^3
+134 m (a^4α^4-72/67 a^2α^2+1/67) r^2/11-a^2(a^2α^2+1)^2 r +a^2 m (a^4α^4-12 a^2α^2+7)) α,
ℱ_7=-715 r^8 a^4/7(-42 (a^4α^4-8/3 a^2α^2+1/3) m α^2 r^4/5-14 a^2α^2(a^2α^2+1)^2 r^3/5
-52 a^2(a^4α^4-92/13 a^2α^2+27/13) m α^2 r^2/5+(a^8α^6+a^6α^4-a^4α^2-a^2) r +108 a^6α^4 m/5-228 a^4α^2 m/5+24 a^2 m/5),
ℱ_8=+3432 r^7 a^6α/7(-7 m α^2(a^4α^4-12 a^2α^2+7) r^4/12+(-5/24 a^6α^6-5/24 a^4α^4+5/24 a^2α^2+5/24) r^3
+28 m (a^4α^4-12/7 a^2α^2+1/7) r^2/3+a^6α^4 m -19 a^4α^2 m/2+9 a^2 m/2),
ℱ_9=-715 r^6 a^6/7((108/5 a^4α^6 m -228/5 a^2 m α^4+24/5 m α^2) r^4+32 a^2α^2 m (a^4α^4-12 a^2α^2+7) r^2/5
+(a^8α^6+a^6α^4-a^4α^2-a^2) r -98 a^2 m (a^4α^4-12/7 a^2α^2+1/7)/5),
ℱ_10=-286 r^5 a^8((-12/7 a^4α^6 m +114/7 a^2 m α^4-54/7 m α^2) r^4+(5/14 a^6α^6+5/14 a^4α^4-5/14 a^2α^2-5/14) r^3
+54 m (a^4α^4-92/27 a^2α^2+13/27) r^2/7+a^2(a^2α^2+1)^2 r +a^2 m (a^4α^4-8 a^2α^2+3)) α,
ℱ_11=117 r^4 a^8(154 m α^2(a^4α^4-12/7 a^2α^2+1/7) r^4/9-22 a^2α^2(a^2α^2+1)^2 r^3/9+4 a^2α^2 m (a^4α^4-72 a^2α^2+67) r^2/9
+(a^8α^6+a^6α^4-a^4α^2-a^2) r -16 a^2(a^4α^4-17/12 a^2α^2+1/12) m/3),
ℱ_12=+52 r^3(-11 m α^2(a^4α^4-8 a^2α^2+3) r^4/2+(9/4 a^6α^6+9/4 a^4α^4-9/4 a^2α^2-9/4) r^3+[8 a^4α^4 m
-58 a^2α^2 m +10 m ] r^2+5 a^2(a^2α^2+1)^2 r/2+a^2 m (a^4α^4-7 a^2α^2+2)) a^10α,
ℱ_13=-25 (624 (a^4α^4-17/12 a^2α^2+1/12) m α^2 r^4/25-26 a^2α^2(a^2α^2+1)^2 r^3/5-24 a^2α^2 m (a^4α^4+33 a^2α^2-38) r^2/25
+(a^8α^6+a^6α^4-a^4α^2-a^2) r -54 a^2 m (a^4α^4-32/27 a^2α^2+1/27)/25) r^2 a^10,
ℱ_14=-2 r a^12(-26 m α^2(a^4α^4-7 a^2α^2+2) r^4+(25/2 a^6α^6+25/2 a^4α^4-25/2 a^2α^2-25/2) r^3
+142 m (a^4α^4-736/71 a^2α^2+133/71) r^2/7+45 a^2(a^2α^2+1)^2 r/7+a^2 m (a^4α^4-44/7 a^2α^2+9/7)) α,
ℱ_15=(54 m α^2(a^4α^4-32/27 a^2α^2+1/27) r^4-90 a^2α^2(a^2α^2+1)^2 r^3/7-12 a^2 m α^2(a^4α^4+100/3 a^2α^2-115/3) r^2/7
+(a^8α^6+a^6α^4-a^4α^2-a^2) r +4 a^4α^2 m/7-4 a^6α^4 m/7) a^12,
ℱ_16=(-2 (a^4α^4-44/7 a^2α^2+9/7) m α^2 r^3+(a^6α^6+a^4α^4-a^2α^2-1) r^2+12 m (a^4α^4-23/3 a^2α^2+4/3) r/7
+a^2(a^2α^2+1)^2/7) a^14α,
ℱ_17=a^16α^2((-4 a^2 m α^4+4 m α^2) r^2+(a^2α^2+1)^2 r +4 a^2α^2 m -4 m )/7
We now compute for the first time the invariant Q_2≡1/27I_5 I_6-I_7^2/(I_1^2+I_2^2)^5/2 for the case of an accelerating Kerr black hole.
The exact analytic expression for the invariant Q_2 for the accelerating Kerr black hole is the following:
Q_2=-(a^2-2 m r +r^2) (cos(θ)^2 a^2α^2-2 cos(θ) α m +1) sin(θ)^2 a^2(α r cos(θ)+1)^2(α^2 r^2-1)/(α r cos(θ)-1)^4(r^2+cos(θ)^2 a^2) m^2(a^2α^2+1)
The generalisation of Theorem (<ref>) in the presence of the cosmological constant is:
The exact analytic expression for the invariant Q_2 for the accelerating Kerr black hole in (anti)de-Sitter spacetime is the following:
Q_2 =-1/(α r cos(θ)-1)^4(r^2+a^2cos(θ)^2) m^2(a^2α^2+1)[a^2(α r cos(θ)+1)^2sin(θ)^2((Λ/3+α^2) r^4
-2 m α^2 r^3+(a^2α^2+1/3 a^2Λ -1) r^2+2 m r -a^2) (1+a^2(Λ/3+α^2) cos(θ)^2-2 cos(θ) α m )].
We note that the differential invariant Q_2, as given in Eqn. (<ref>), vanishes at the horizons of the accelerating rotating black hole with Λ≠0.
For zero acceleration α=0 we derive the following corollary for the invariant Q_2 in Kerr-(anti)de Sitter spacetime:
Q_2=-a^2sin(θ)^2(r^4Λ/3+(-1+a^2Λ/3) r^2+2 m r -a^2) (1+a^2cos(θ)^2Λ/3)/(r^2+a^2cos(θ)^2) m^2.
In Fig.<ref> we display the region plot for the invariant Q_2, eqn.(<ref>) of Corollary <ref>. We observe that Q_2 changes sign on the event and Cauchy horizons of the Kerr-de Sitter black hole.
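This sign change is straightforward to reproduce numerically from the closed form in eqn. (<ref>); a sketch of the scan behind such region plots, at fixed θ and with the parameter values of the figures, is:

import numpy as np

m, a, Lam, theta = 1.0, 0.52, 3.6e-33, np.pi/3

def Q2(r):
    c2 = np.cos(theta)**2
    return -(a**2*np.sin(theta)**2
             * (Lam*r**4/3 + (a**2*Lam/3 - 1)*r**2 + 2*m*r - a**2)
             * (1 + a**2*c2*Lam/3)) / ((r**2 + a**2*c2)*m**2)

# For this tiny Lam, the event and Cauchy horizon radii equal the Kerr
# values to excellent accuracy: r_pm = m +- sqrt(m^2 - a^2).
r_plus = m + np.sqrt(m**2 - a**2)
r_minus = m - np.sqrt(m**2 - a**2)

for r in (0.9*r_minus, 0.5*(r_minus + r_plus), 1.1*r_plus):
    print(r, np.sign(Q2(r)))   # +1, -1, +1: Q_2 flips sign at each horizon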
In Fig.<ref> we exhibit the regions of negative and positive sign for the curvature invariant Q_2, eqn.(<ref>) of Theorem <ref>, for the choice of values for the parameters: a=0.52,Λ=3.6× 10^-33,α=0.01,m=1. We observe that Q_2 changes sign on the event and Cauchy horizons of the accelerating Kerr-de Sitter black hole.
In the region plot, Fig.<ref>, we determine the sign of the curvature invariant Q_2, for the choice of values for the parameters: a=0.52,Λ=3.6× 10^-33,α=0.1,m=1. It is evident from the graph that Q_2 vanishes and changes sign at the event, Cauchy and acceleration horizons of the accelerating, rotating black hole in (anti-)de Sitter spacetime. We repeated the analysis for a higher spin value of the black hole, in Fig.(<ref>) and Fig.(<ref>). For the choice of values for the parameters: a=0.9939,Λ=3.6× 10^-33,α=0.1,m=1, Fig.(<ref>) and Fig.(<ref>), exhibit the sign change of the invariant Q_2 at the horizons radii.
The most general explicit expression for the invariant Q_2 for the accelerating Kerr-Newman black hole in (anti-)de Sitter spacetime is given in Theorem <ref> and eqn.(<ref>).
As regards the invariant Q_3 for an accelerating Kerr black hole in (anti-)de Sitter spacetime our analytic computation yields the following explicit algebraic expression:
Q_3=1/6 √((α r cos(θ)-1)^6 m^2(a^2α^2+1)/(r^2+a^2cos(θ)^2)^3)(r^2+a^2cos(θ)^2)^2[-3 a^2((α^2(Λ/3+α^2) r^2+Λ/3) a^2
-2 α^2 r ((-α^2-Λ/3) r^3+α^2 m r^2+r/2-m )) cos(θ)^4+6 m α(α^2 r^4+a^2) cos(θ)^3
+3 (-1+(Λ/3+α^2) a^2) (α^2 r^4+a^2) cos(θ)^2-6 m α(α^2 r^4+a^2) cos(θ)
+(6+(-3 α^2-Λ) r^2) a^2+6 α^2 m r^3-Λ r^4-6 m r +3 r^2].
§ THE NEWMAN-PENROSE FORMALISM AND DIFFERENTIAL CURVATURE INVARIANTS
In this section we shall apply the Newman-Penrose formalism <cit.> and compute in an independent way the differential local invariants for the accelerating Kerr-Newman black hole in (anti-)de Sitter spacetime.
It is gratifying that our results in NP formalism agree with those obtained for the corresponding curvature invariants in the previous sections.
From the relation I_7=k_μl^μ, and the equation:
I_2=24 i(Ψ̄_2^2-Ψ_2^2),
we obtain:
I_7 =48^2 i(Ψ̄_2^2∇_μΨ̄_2∇^μΨ̄_2-Ψ_2^2∇_μΨ_2∇^μΨ_2)
=-48^2 2 ℑ(Ψ̄_2^2∇_μΨ̄_2∇^μΨ̄_2),
where the Weyl scalar Ψ_2 is given by <cit.>:
Ψ_2=-(1-α r cos(θ))^3(m (i a α +1) (r +i a cos(θ))-q^2(1+α r cos(θ)))/(r -i a cos(θ))^3(r +i a cos(θ))
The functional relationship between the invariants, namely the syzygy in Eqn.(<ref>) (which, as we discovered in this work, also holds for the Kerr-de Sitter black hole spacetime), may be expressed as the real part of the complex syzygy:
∇_μ(I_1+iI_2)∇^μ(I_1+iI_2)=12/5(I_1+iI_2)(I_3+iI_4),
On the other hand, in the NP formalism we have that:
I_1+iI_2=48ℜ(Ψ_2^2)+48 i ℑ(Ψ_2^2)=48Ψ_2^2,
thus we derive the following covariant derivative:
∇_μ(I_1+iI_2)=48× 2Ψ_2∇_μΨ_2,
and from (<ref>) we derive:
(96Ψ_2)^2∇_μΨ_2∇^μΨ_2=
12·48/5Ψ_2^2(I_3+iI_4).
Thus we conclude that:
I_3 =80 ℜ(∇_μΨ_2∇^μΨ_2),
I_4 =80 ℑ(∇_μΨ_2∇^μΨ_2).
The computation of the differential invariants I_5 and I_6 in the NP formalism yields the result:
I_5 =∇_μI_1∇^μI_1
=48^2 [Ψ_2^2 ∇_μΨ_2∇^μΨ_2+2Ψ_2Ψ̄_2∇_μΨ_2∇^μΨ̄_2+Ψ̄_2^2∇_μΨ̄_2∇^μΨ̄_2],
I_6 =∇_μI_2∇^μI_2
=-48^2 (Ψ_2^2 ∇_μΨ_2∇^μΨ_2-2Ψ_2Ψ̄_2∇_μΨ_2∇^μΨ̄_2
+Ψ̄_2^2∇_μΨ̄_2∇^μΨ̄_2) .
The invariant Q_3 as defined in Eqn.(<ref>), in the NP formalism and taking into account Eqns.(<ref>)-(<ref>), acquires the form:
Q_3 =4· 48^2∇_μΨ_2∇^μΨ̄_2/6√(3) 48^5/2(Ψ_2Ψ̄_2)^3/2
=1/18∇_μΨ_2∇^μΨ̄_2/(Ψ_2Ψ̄_2)^3/2,
whereas
Q_1=2ℜ(Ψ̄_2^2 ∇_μΨ_2∇^μΨ_2)/9(Ψ_2Ψ̄_2)^5/2.
Likewise, using the observation in Eqn.(<ref>) and the theorem derived in <cit.>, we can derive an expression for the curvature invariant Q_2 in terms of the norm of the wedge product of the exterior derivatives of the Weyl scalar Ψ_2 and its complex conjugate <cit.>:
Q_2=-2‖∇Ψ_2∧∇Ψ̄_2‖^2/18^2(Ψ_2Ψ̄_2)^3.
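This wedge-product form makes a direct symbolic evaluation quite economical, since Ψ_2 depends only on (r,θ) and the r-θ block of the inverse metric (<ref>) is diagonal, g^rr=Ω^2 Q/ρ^2 and g^θθ=Ω^2 P/ρ^2; the norm then reduces to a 2×2 determinant of partial derivatives. The following SymPy sketch implements this reduction (our own, under the stated assumptions; conventions as in the text):

import sympy as sp

r, th = sp.symbols('r theta', real=True)
m, a, q, al, L = sp.symbols('m a q alpha Lambda', positive=True)

rho2  = r**2 + a**2*sp.cos(th)**2
Omega = 1 - al*r*sp.cos(th)
P = 1 - 2*al*m*sp.cos(th) + (al**2*(a**2 + q**2) + L*a**2/3)*sp.cos(th)**2
Q = ((a**2 + q**2) - 2*m*r + r**2)*(1 - al**2*r**2) - L*(a**2 + r**2)*r**2/3

Psi2 = -(1 - al*r*sp.cos(th))**3*(m*(sp.I*a*al + 1)*(r + sp.I*a*sp.cos(th))
        - q**2*(1 + al*r*sp.cos(th)))/((r - sp.I*a*sp.cos(th))**3
        *(r + sp.I*a*sp.cos(th)))
Psi2b = sp.conjugate(Psi2)

grr, gthth = Omega**2*Q/rho2, Omega**2*P/rho2

# For functions of (r, theta) with a diagonal r-theta metric block,
# ||dA ^ dB||^2 = (1/2) g^rr g^thth (A_r B_th - A_th B_r)^2.
W = sp.diff(Psi2, r)*sp.diff(Psi2b, th) - sp.diff(Psi2, th)*sp.diff(Psi2b, r)
wedge_norm2 = sp.Rational(1, 2)*grr*gthth*W**2

Q2 = -2*wedge_norm2/(18**2*(Psi2*Psi2b)**3)
# Q2 is manifestly proportional to Q(r) (and to P), so it vanishes on all
# horizons Q(r) = 0, in agreement with Theorem <ref>.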
We calculated the exact algebraic expression for the invariant Q_2 for the accelerating Kerr-Newman black hole in (anti-)de Sitter spacetime. Our result is:
Q_2=-1/𝒜(α((a^4 m^2 r +2 a^2 m q^2 r^2+8/9 q^4 r^3) α^2+a^2 m (m r -2 q^2/3)) cos(θ)^3
+((a^4 m^2+2 a^2 m q^2 r -2 m q^2 r^3+16/9 q^4 r^2) α^2+a^2 m^2) cos(θ)^2+α(a^2α^2 m^2 r^3+2 a^2 m q^2+r^3 m^2
-2 m q^2 r^2+16/9 q^4 r ) cos(θ)+m r^2(a^2 m +2 q^2 r/3) α^2+r^2 m^2-2 m q^2 r +8 q^4/9)^2[r^2(a^2-2 m r +q^2+r^2) α^2
+Λ r^4/3+(Λ a^2/3-1) r^2+2 m r -a^2-q^2] a^2sin(θ)^2(1-2 α m cos(θ)+(α^2(a^2+q^2)+Λ a^2/3) cos(θ)^2).
where
𝒜 ≡(α r cos(θ)-1)^4(((a^2 m +q^2 r )^2α^2+a^2 m^2) cos(θ)^2+2 q^2α(a^2 m -m r^2+q^2 r ) cos(θ)
+r^2 m^2α^2 a^2+(m r -q^2)^2)^3.
We observe that Eqn.(<ref>) vanishes at the horizon radii of the accelerating Kerr-Newman black hole in (anti-)de Sitter spacetime. Thus, the differential curvature invariant Q_2 can serve as a horizon detector for the most general class of accelerating, rotating and charged black holes with non-zero cosmological constant.
§ DISCUSSION AND CONCLUSIONS
In this work we have derived new explicit algebraic expressions for the Karlhede and Abdelqader-Lake differential curvature invariants for two of the most general black hole solutions: i) the Kerr-Newman-(anti-)de Sitter black hole metric, and ii) the accelerating Kerr-Newman black hole in (anti-)de Sitter spacetime.
Despite the complexity of the computations involved in the tensorial method of calculation, our final expressions are reasonably compact and easy to use in applications. We showed explicitly that some of the computed invariants vanish at the horizon and ergosurface radii of the black hole types we investigated.
In particular, the differential invariant Q_2 vanishes at the horizon radii of accelerating, rotating and charged black holes with non-zero cosmological constant.
This result adds further impetus to the program of using scalar curvature invariants for the identification of black hole horizons.
We have also confirmed the results obtained via the tensorial method with the aid of the NP formalism, in which the differential curvature invariants are expressed in terms of covariant derivatives of the Weyl scalar Ψ_2.
An interesting application of our results will be to investigate binary black hole mergers using scalar curvature invariants. A recent study investigated a quasi-circular orbit of two merging, equal-mass, non-spinning black holes <cit.>.
Another fundamental research avenue would be to investigate gravitational lensing, black hole shadows and superradiance effects for accelerating, rotating and charged black holes with Λ≠0 <cit.>.
99
Duggal K. L. Duggal, A. Bejancu, Lightlike Submanifolds of Semi-Riemannian Manifolds and Applications, Kluwer Academic Publishers, 1996
Katsuno K. Katsuno, Null hypersurfaces in Lorentzian Manifolds: I, Math. Proc. Camb. Phil. Soc. 88 (1980), 175
KrishamAshtekar A. Ashtekar and B. Krishnan, Dynamical horizons and their properties, Phys.Rev.D. 68,104030 (2003)
EHT The Event Horizon Telescope Collaboration, First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole, The Astrophysical Journal Letters, 875: L1 (2019) April 10
Zahkary E. Zakhary and C. B. G. McIntosh, A Complete Set of Riemann Invariants, Gen. Rel. Grav. 29 (1997), 539-581
GeheDebr J. Géhéniau and R. Debrever, Les quatorze invariants de courbure de l'espace riemannien á quatre dimensions,Helvetica Physica Acta 29,(1956),101-105
LWitten L. Witten, Invariants of General Relativity and the Classification of Spaces, Phys.Rev. 113 (1959),pp 357-362
HarveyAL A. Harvey,On the algebraic invariants of the four-dimensional Riemann tensor,Class.Quan.Grav 7(1990),715-716
CarminatiMcL J. Carminati and R. G. McLenaghan,Algebraic invariants of the Riemann tensor in a four-dimensional Lorentzian space, J. Math.Phys.32, (1991) pp 3135-3140
Petrov A. Z. Petrov, The Classification of Spaces Defining Gravitational Fields, Gen. Rel. Grav. 32 (2000), 1665-1685. Original title: Klassifikacya prostranstv opredelyayushchikh polya tyagoteniya, Uchenye Zapiski Kazanskogo Gosudarstvennogo Universiteta im. V. I. Ulyanova-Lenina [Scientific Proceedings of Kazan State University, named after V.I. Ulyanov-Lenin], 114 (8), 55-69 (1954)
Ciufolini I. Ciufolini, Dragging of Inertial Frames, Gravitomagnetism, and Mach's Principle,in Einstein Studies 6,Mach's Principle, From Newton's Bucket to Quantum Gravity,Birkhäuser,(1995),pp 386-402
BakerJMC J. Baker and M. Campanelli, Making use of geometrical invariants in black hole collisions, Phys.Rev.D 62 (2000) 127501
Filipe L. Filipe et al,Gravitomagnetism and the significance of the curvature scalar invariants,Phys.Rev.D 104(2021)084081
agulrionavsa I Agullo, A. del Rio and J. Navarro-Salas,Electromagnetic duality anomaly in curved spacetime, Phys.Rev.Let.118 (2017)111301
GalaGabriel M. Galaverni and G. S.J. Gabriele,Photon helicity and quantum anomalies in curved spacetimes, Gen.Rel.Gravit. (2021)53:46
KraniotisCurvature G. V. Kraniotis, Curvature Invariants for accelerating Kerr-Newman black holes in (anti-)de Sitter spacetime, Class.Quant.Grav. 39 (2022) 145002
frolov V.P. Frolov, A. Koek, A. Zelnikov, Chiral anomalies in black hole spacetimes, Phys.Rev.D 107 (2023) 4, 045009
KarlhedeA A. Karlhede, U. Lindstrom and J. E. Aman, A Note on a Local Effect at the Schwarzschild Sphere, Gen. Rel. Grav. 14 (1982), pp 569-571
Lakeein K. Lake, Differential Invariants of the Kerr Vacuum, Gen. Rel. Grav. 36 (2004), pp 1159-1169
LakeZwei M. Abdelqader and K. Lake,Invariant characterization of the Kerr spacetime: Locating the horizon and measuring the mass and spin of rotating black holes using curvature invariants, Phys.Rev.D 91, 084017 (2015)
PageD D. N. Page and A.A. Shoom, Local Invariants Vanishing on Stationary Horizons: A diagnostic for Locating Black Holes, Phys.Rev.Let.114, (2015) 141102
GrifPod J. B. Griffiths and Jiří Podolský, Exact Space-Times in Einstein's General Relativity, Cambridge Monographs on Mathematical Physics, Cambridge University Press (2009)
PlebanskiDemianski J. F. Plebanski, M. Demianski, Rotating, Charged, and Uniformly Accelerating Mass in General Relativity, Annals of Physics 98 (1976) 98-127
Supern S. Perlmutter et al, Astrophys. Journal 517 (1999) 565; A. V. Filippenko et al, Astron. J. 116 (1998) 1009
Jones D. O. Jones et al, The Foundation Supernova Survey: Measuring Cosmological Parameters with Supernovae from a Single Telescope Astrophys. J. 881, (2019) 19
Aubourg E. Aubourg et al, Cosmological implications of baryon acoustic oscillation measurements, Phys.Rev.D 92, (2015) 123516
AbbottTMC T.M.C. Abbott et al,Dark Energy Survey Year 3 results: Cosmological constraints from galaxy clustering and weak lensing, Phys.Rev.D 105, (2022) 023520
GVKSWB G. V. Kraniotis and S. B. Whitehouse, General relativity, the
cosmological constant and modular forms Class. Quantum Grav.
19 (2002), 5073-5100
Zajacek M. Zajac̆ek, A.Tursunov, A. Eckart and S. Britzen, On the charge of the Galactic centre black hole, Mon.Not.Roy.Astron.Soc. 480, (2018) 4408-4423
Britzen A. Tursunov, M Zajac̆ek, A. Eckart, M. Kolos, S. Britzen, Z. Stuchlík, B. Czerny, and V. Karas, Effect of Electromagnetic Interaction on Galactic Center Flare Components, Astrophys.J. 897 (2020) 1, 99
universemdpi Z. Stuchlík, M. Kolos̆,
J. Kovár̆, P. Slaný, and A. Tursunov, Influence of Cosmic Repulsion and Magnetic Fields on Accretion Disks Rotating around Kerr Black Holes, Universe 6 (2020) 2, 26
waldcharge A. Tursunov, Z. Stuchlík, M. Kolos̆, N. Dadlich, and B. Ahmedov, Supermassive Black Holes as Possible Sources of Ultrahigh-energy Cosmic Rays, Astrophys.J. 895 (2020) 1, 14
genzelr R. Genzel et al, Near-infrared flares from accreting gas around the supermassive black hole at the Galactic
Centre Nature 425 (2003) 934
aschenbach B. Aschenbach et al, X-ray flares reveal mass and angular momentum of
the Galactic Centre black hole, Astron. Astrophys.417 (2004), 71-78
KraniotisStars1 G. V. Kraniotis, Periapsis and gravitomagnetic precessions of stellar orbits in Kerr and Kerr-de Sitter black hole spacetimes, Class.Quant.Grav. 24 (2007) 1775-1808
KraniotisSstars2 G. V. Kraniotis,Gravitational redshift/blueshift of light emitted by geodesic test particles, frame-dragging and pericentre-shift effects, in the Kerr–Newman–de Sitter and Kerr–Newman black hole geometries, Eur.Phys.J.C 81 (2021) 2, 147
Newman Newman E. T.; Couch E.; Chinnapared K.; Exton A.; Prakash A.;
Torrence R. Metric of a Rotating, Charged Mass. J. Math. Phys. 1965,6,918
KerrR Kerr R. P. Gravitational field of a spinning mass as an
example of algebraically special metrics. Phys. Rev. Lett. 1963,11,237
Stuchlik1 Z. Stuchlík, G. Bao, E. Østgaard and S. Hledík, Kerr-Newman-de Sitter black holes with a restricted repulsive barrier of equatorial photon motion, Phys. Rev. D 58 (1998) 084003
BCAR B. Carter, Global structure of the Kerr family of gravitational fields, Phys. Rev. 174 (1968) 1559-71
ZdeStu Z. Stuchlík and S.Hledík,
Equatorial photon motion in the Kerr-Newman spacetimes
with a non-zero cosmological constant, Class. Quantum Grav.
17 (2000) 4541-4576
ZST Z. Stuchlík, The motion of test particles in black-hole backgrounds
with non-zero cosmological constant, Bull. of the Astronomical
Institute of Chechoslovakia 34 (1983) 129-149
szekeres P. Szekeres, The Gravitational Compass, J.Math.Phys. 6 (1965) 1387
PenroseRogerErgoRegion R. Penrose,Gravitational collapse: the role of general relativity,Riv.Nuovo Cim.1,252-276 (1969),Gen.Rel.Grav.34,1141-1165 (2002)
Israel W. Israel, Event Horizons in Static Vacuum Space-Times, Phys. Rev. 164, 1776 (1967)
PodolskyGrif J. Podolský and J. B. Griffiths, Accelerating Kerr-Newman black holes in (anti-)de Sitter space-time, Phys. Rev. D 73 (2006), 044018
NPformalismCI Newman E.; Penrose R. An Approach to Gravitational Radiation by a Method of Spin Coefficients. J. Math.Phys. 1962 ,3,566
MAHMacCallum D. Brooks, P.C. Chavy-Waddy, A.A. Coley, A. Forget, D. Gregoris, M.A.H. MacCallum, D.D. McNutt,Cartan Invariants and event horizon detection, Gen.Relativ.Gravit(2018)50:37
arneGrezen A. Grenzebach, V. Perlick and C. Lämmerzahl,Photon regions and Shadows of accelerated black holes, Int.J.Mod.Phys.D 24 (2015) 09, 1542024
JeremyAlanErik J. M. Peters, A. Coley and E. Schnetter,Curvature invariants in a binary black hole merger Gen. Rel.Gravit.(2022)54:65
GVKraniotisAccelBH G. V. Kraniotis, Work in Progress
|
http://arxiv.org/abs/2307.01739v1
|
20230704141611
|
Spatial organization of slit-confined melts of ring polymers with non-conserved topology: A lattice Monte Carlo study
|
[
"Mattia Alberto Ubertini",
"Angelo Rosa"
] |
cond-mat.soft
|
[
"cond-mat.soft"
] |
|
http://arxiv.org/abs/2307.02843v1
|
20230706081547
|
Local Modules in Braided Monoidal 2-Categories
|
[
"Thibault D. Décoppet",
"Hao Xu"
] |
math.CT
|
[
"math.CT",
"cond-mat.str-el",
"hep-th",
"math.QA",
"18M15, 18M20, 18N10"
] |
Local Modules in Braided Monoidal 2-Categories
Thibault D. Décoppet and Hao Xu
July 2023
==============================================
Given an algebra in a monoidal 2-category, one can construct a 2-category of right modules. Given a braided algebra in a braided monoidal 2-category, it is possible to refine the notion of right module to that of a local module. Under mild assumptions, we prove that the 2-category of local modules admits a braided monoidal structure. In addition, if the braided monoidal 2-category has duals, we go on to show that the 2-category of local modules also has duals. Furthermore, if it is a braided fusion 2-category, we establish that the 2-category of local modules is a braided multifusion 2-category. We examine various examples. For instance, working within the 2-category of 2-vector spaces, we find that the notion of local module recovers that of braided module 1-category. Finally, we examine the concept of a Lagrangian algebra, that is a braided algebra with trivial 2-category of local modules. In particular, we completely describe Lagrangian algebras in the Drinfeld centers of fusion 2-categories, and we discuss how this result is related to the classifications of topological boundaries of (3+1)d topological phases of matter.
§ INTRODUCTION
It is well-known that the 1-category of modules over a commutative algebra is symmetric monoidal. The notion of a module over an algebra can be internalized to any monoidal 1-category 𝒞. Further, provided we work in a braided monoidal 1-category ℬ, it is also sensible to consider a commutative algebra B in ℬ. Under mild assumptions on ℬ, the 1-category of B-modules in ℬ, which we denote by 𝐌𝐨𝐝_ℬ(B), admits a canonical monoidal structure given by the relative tensor product over B. However, 𝐌𝐨𝐝_ℬ(B) is not braided unless ℬ is symmetric. It was nevertheless observed in <cit.> that the full sub-1-category 𝐌𝐨𝐝_ℬ^loc(B) of 𝐌𝐨𝐝_ℬ(B) on the local modules, also known as dyslectic modules, admits a braiding.
One noteworthy application of local modules comes from its relation with Drinfeld centers <cit.>. More precisely, given a commutative algebra B in 𝒵(𝒞), the Drinfeld center of the monoidal 1-category 𝒞, it is shown under mild assumptions on 𝒞 that 𝐌𝐨𝐝_𝒵(𝒞)^loc(B) is equivalent as a braided monoidal 1-category to 𝒵(𝐌𝐨𝐝_𝒞(B)).
Local modules were also used in <cit.> as a way to produce new modular tensor 1-categories from the known examples. We note that this last problem was initially undertaken using subfactors <cit.>. Subsequently, the relation between 1-categories of local modules and non-degenerate braided fusion 1-categories was explored much further in <cit.>. For instance, it was established that if ℬ is a non-degenerate braided fusion 1-category, and B is a connected separable commutative algebra, also called a connected étale algebra, in ℬ, then 𝐌𝐨𝐝^loc_ℬ(B) is a non-degenerate braided fusion 1-category. These results were then generalized in <cit.>.
Categorifying the notion of a fusion 1-category, fusion 2-categories were introduced in <cit.>. The theory of algebras in fusion 2-categories was then extensively developed in <cit.> by the first author. Further, it was established in <cit.> that the 2-category 𝐌𝐨𝐝_𝔅(B) of modules over a braided separable algebra B in a braided fusion 2-category 𝔅 is a multifusion 2-category. In this context, braided separable algebras played a central role in the proof of the minimal non-degenerate extension conjecture <cit.>.
Motivations for developing the theory of local modules in braided fusion 1-categories also come from condensed matter Physics. Specifically, the theory of local modules plays a crucial role in the physical theory of anyon condensations, a program which was initiated in <cit.> and was fully realized in <cit.>. More precisely, a modular tensor 1-category ℬ describes a (2+1)-dimensional topological order (up to invertible ones), a connected étale algebra B in ℬ represents a combination of anyons that condense to the vacuum in a new phase. The modular tensor 1-category associated to the new phase, also called the 1-category of deconfined particles, is precisely 𝐌𝐨𝐝_ℬ^loc(B). A more detailed account of the historical development of anyon condensation theory can be found in <cit.>.
Going up in dimension, local modules in braided fusion 2-categories are expected to play a role in the study of (3+1)-dimensional anyon condensation. Some work has already been done in this direction, specifically in the (3+1)-dimensional toric code model <cit.>, where examples of Lagrangian algebras are used to describe its topological boundaries.
§.§ Results
Our first objective is to categorify the main result of <cit.>. We fix a braided monoidal 2-category 𝔅 and a braided algebra B in 𝔅. We review the definition of local right B-modules in 𝔅 introduced in <cit.> using the variant of the graphical calculus of <cit.> introduced in <cit.>. More precisely, a local right B-module is a right B-module equipped with a 2-isomorphism, called a holonomy. Relying on <cit.> and <cit.>, we establish the following result.
Theorem <ref>
Let 𝔅 be a braided monoidal 2-category, and B a braided algebra in 𝔅. If 𝔅 has relative tensor products over B and they are preserved by the monoidal product of 𝔅, then the 2-category 𝐌𝐨𝐝^loc_𝔅(B) of local B-modules admits a braided monoidal structure.
We note that a similar result was obtained independently in <cit.> using multi-2-categories.
We go on to study more specifically the case when 𝔅 is a braided multifusion 2-category. Under these hypotheses, the relative tensor product exists provided we consider separable algebras as introduced in <cit.> (see also <cit.> for a thorough discussion). Thus, we take B to be an étale algebra in 𝔅, that is a braided separable algebra in 𝔅. We show that the underlying 2-category of 𝐌𝐨𝐝^loc_𝔅(B) is finite semisimple. Further, we prove that the dual of a right B-module admits a compatible holonomy. Putting these facts together yields the next theorem, which categorifies a number of results of <cit.>.
Theorem <ref>
Let B be an étale algebra in a braided multifusion 2-category 𝔅. Then, 𝐌𝐨𝐝_𝔅^loc(B) is a braided multifusion 2-category.
Further, if 𝔅 is a fusion 2-category, and B is a connected étale algebra, we find that 𝐌𝐨𝐝_𝔅^loc(B) is a braided fusion 2-category.
Then, we identify the braided fusion 2-category 𝐌𝐨𝐝_𝔅^loc(B) when 𝔅 is a braided fusion 2-category of interest. Firstly, when 𝔅=2𝐕𝐞𝐜𝐭, connected étale algebras are precisely braided fusion 1-categories. Given a braided fusion 1-category ℬ, we show that 𝐌𝐨𝐝_2𝐕𝐞𝐜𝐭^loc(ℬ) is equivalent as a braided monoidal 2-category to 𝒵(𝐌𝐨𝐝(ℬ)), the Drinfeld center of ℬ. We do so by proving that local right ℬ-modules in 2𝐕𝐞𝐜𝐭 correspond exactly to the finite semisimple braided ℬ-module 1-categories of <cit.>. Secondly, we prove that taking local modules is invariant under base change, i.e. given a 1-morphism between étale algebras A → B in a braided fusion 2-category 𝔅, one has that the 2-category of local B-modules in 𝔅 is equivalent to the 2-category of local B-modules in 𝐌𝐨𝐝_𝔅^loc(A).
We use our previous results to study Lagrangian algebras, that is connected étale algebras whose associated 2-category of local modules is trivial. Conceptually, this last condition should be thought of as a categorical non-degeneracy condition. Namely, Lagrangian algebras in 2𝐕𝐞𝐜𝐭 are exactly given by non-degenerate braided fusion 1-categories. We compare our notion of Lagrangian algebra with that introduced in <cit.>, and show that, given any braided fusion 1-category ℬ, Lagrangian algebras in 𝒵(𝐌𝐨𝐝(ℬ)) correspond exactly to non-degenerate braided fusion 1-categories equipped with a braided functor from ℬ. Finally, we end by discussing the relation between our abstract work on Lagrangian algebras, and the classification of topological boundaries of (3+1)d topological phases of matter. In the (3+1)d toric code model, we compare our results with those obtained from lattice model considerations in <cit.>.
§.§ Acknowledgements
We would like to thank Liang Kong and Matthew Yu for discussions, as well as feedback on a draft of this manuscript. H.X. was supported by DAAD Graduate School Scholarship Programme (57572629) and DFG Project 398436923.
§ PRELIMINARIES
§.§ Graphical Calculus
We work within a monoidal 2-category ℭ with monoidal product and monoidal unit I in the sense of <cit.>. Thanks to the coherence theorem of <cit.>, we may assume without loss of generality that ℭ is strict cubical in the sense of definition 2.26 of <cit.>. We use the graphical calculus of <cit.>, as described in <cit.> and <cit.>. We often omit the monoidal product symbol from our notations, and we use the symbol 1 to denote an identity 1-morphism. The interchanger is depicted using the string diagram below on the left, and its inverse by that on the right:
[string diagrams: the interchanger and its inverse].
In particular, the lines correspond to 1-morphisms, and the coupons to 2-morphisms. The regions represent objects, which are uniquely determined by the 1-morphisms. Further, our string diagrams are read from top to bottom, which yields the compositions of 1-morphisms, and then from left to right.
For our purposes, it is also necessary to recall the graphical conventions related to 2-natural transformations from <cit.>. These will only be used for the braiding, which will be introduced below. Let F,G:𝔄→𝔅 be two (weak) 2-functors, and let τ:F⇒ G be a 2-natural transformation. That is, for every object A in 𝔄, we have a 1-morphism τ_A:F(A)→ G(A), and for every 1-morphism f:A→ B in 𝔄, we have a 2-isomorphism
[sep=tiny]
F(A) [ddd, "F(f)"'][rrr, "τ_A"] G(A) [ddd, "G(f)"]
F(B)[rrr, "τ_B"'][Rightarrow, rrruuu, "τ_f", shorten > = 2ex, shorten < = 2ex] G(B),
These 2-isomorphisms have to satisfy obvious coherence relations. In our graphical language, we will depict the 2-isomorphism τ_f using the following diagram on the left, and its inverse using the diagram on the right:
[string diagrams: the 2-isomorphism τ_f and its inverse].
§.§ Braided Monoidal 2-Categories
In the present article, we will for the most part work within 𝔅 a braided monoidal 2-category in the sense of <cit.>. Thanks to the coherence theorem of <cit.>, we may assume that 𝔅 is a semi-strict braided monoidal 2-category. In particular, 𝔅 comes equipped with a braiding b, which is an adjoint 2-natural equivalence given on objects A,B in 𝔅 by
b_A,B:A B→ B A.
Its pseudo-inverse will be denoted by b^∙. Further, there are two invertible modifications R and S, which are given on the objects A,B,C of 𝔅 by
ABC [rr, "b"] [rd, "b1"'] [d, Rightarrow, "R"] BCA,
BAC [ru, "1b"']
ABC [rr, "b_2"] [rd, "1b"'] [d, Rightarrow, "S"] CAB
ACB [ru, "b1"']
where the subscript in b_2 records where the braiding occurs. To avoid any possible confusion, we will systematically write b instead of a would-be b_1, as this can too easily be confused with b1. Further, these modifications are subject to the following relations, which are taken from section 2.1.1 of <cit.>:
a. For all objects A,B,C,D in 𝔅, we have
[string diagram equation]
in Hom_𝔅(ABCD, BCDA),
b. For all objects A,B,C,D in 𝔅, we have
[string diagram equation]
in Hom_𝔅(ABCD, DABC),
c. For all objects A,B,C,D in 𝔅, we have
[string diagram equation]
in Hom_𝔅(ABCD, CDAB),
d. For all objects A,B,C in 𝔅, we have
[string diagram equation]
in Hom_𝔅(ABC, CBA),
e. For every object A in 𝔅, the adjoint 2-natural equivalences
b_A,I:A I→ I A and b_I,A:I A→ A I
are the identity adjoint 2-natural equivalences,
f. For all objects A,B,C in 𝔅, the 2-isomorphisms R_A,B,C and S_A,B,C are identity 2-isomorphisms whenever either A, B, or C is equal to I.
Let us now examine some examples. For simplicity, we work over an algebraically closed field k of characteristic zero, but this is not necessary <cit.>.
We write 2𝐕𝐞𝐜𝐭 for the 2-category of finite semisimple 1-categories. The Deligne tensor product endows 2𝐕𝐞𝐜𝐭 with a symmetric monoidal structure. This is the most fundamental example of a symmetric fusion 2-category as introduced by <cit.> (see also <cit.> for a slightly different perspective). More generally, we can also consider 𝐅𝐢𝐧𝐂𝐚𝐭, the 2-category of finite 1-categories, and right exact functors. The Deligne tensor product also endows 𝐅𝐢𝐧𝐂𝐚𝐭 with a symmetric monoidal structure.
Let ℰ be a symmetric fusion 1-category over k. Recall from <cit.> that the 2-category 𝐌𝐨𝐝(ℰ) of finite semisimple right ℰ-module 1-categories is a fusion 2-category with monoidal structure given by the relative Deligne tensor product. Further, it admits a symmetric monoidal structure <cit.>.
Let G be a finite group. We can consider the symmetric fusion 2-category 2𝐑𝐞𝐩(G) of finite semisimple 1-categories with a G-action. In fact, it was shown in <cit.> that 2𝐑𝐞𝐩(G)≃𝐌𝐨𝐝(𝐑𝐞𝐩(G)) as symmetric fusion 2-categories. More generally, one can consider the 2-category of 2-representations of a finite 2-group.
Let ℭ be a fusion 2-category. Then, the Drinfeld center of ℭ, which we denote by 𝒵(ℭ), is a braided monoidal 2-category <cit.>. Further, it was shown in <cit.> that 𝒵(ℭ) is a fusion 2-category. For instance, given a finite group G, one can consider the braided fusion 2-category 𝒵(2𝐕𝐞𝐜𝐭_G), which was described explicitly in <cit.>. We will be particularly interested in the case when ℭ is connected, i.e. ℭ≃𝐌𝐨𝐝(ℬ) for some braided fusion 1-category ℬ. Thanks to the results of <cit.>, this is not a loss of generality as the Drinfeld center of any fusion 2-category is of this form.
§.§ Algebras and Modules
Let ℭ be a strict cubical monoidal 2-category. We recall the definition of an algebra in ℭ from <cit.>. The definition of an algebra in an arbitrary monoidal 2-category using our graphical conventions may be found in <cit.>.
An algebra in ℭ consists of:
* An object A of ℭ;
* Two 1-morphisms m:A A→ A and i:I→ A;
* Three 2-isomorphisms
[sep=small]
A [rrrr, equal] [rrdd, "i1"'] [dd, Rightarrow, "λ"', near start, shorten > = 1ex] A
AA, [rruu, "m"']
[sep=small]
AAA [dd, "1m"'] [rr, "m1"] AA [dd, "m"]
AA [rr, "m"'] [rruu, Rightarrow, "μ", shorten > = 2.5ex, shorten < = 2.5ex] A,
[sep=small]
AA [rrdd, "m"] [dd, Rightarrow, "ρ", shorten > = 1ex, shorten < = 2ex]
A [rruu, "1i"] [rrrr,equal] A,
satisfying:
a. We have:
[string diagram equation],
b. We have:
[string diagram equation].
Let us now recall the definition of a right A-module in ℭ from definition 1.2.3 of <cit.>. We invite the reader to consult definition 3.2.1 of <cit.> for the definition in a general monoidal 2-category.
A right A-module in ℭ consists of:
* An object M of ℭ;
* A 1-morphism n^M:M A→ M;
* Two 2-isomorphisms
[sep=small]
MAA [dd, "1m"'] [rr, "n^M1"] MA [dd, "n^M"]
MA [rr, "n^M"'] [rruu, Rightarrow, "ν^M", shorten > = 2.5ex, shorten < = 2.5ex] M,
[sep=small]
MA [rrdd, "n^M"] [dd, Rightarrow, "ρ^M", shorten > = 1ex, shorten < = 2ex]
M [rruu, "1i"] [rrrr,equal] M,
satisfying:
a. We have:
[string diagram equation],
b. We have:
[string diagram equation].
Finally, let us recall definitions 3.2.6 and 3.2.7 of <cit.>.
Let M and N be two right A-modules. A right A-module 1-morphism consists of a 1-morphism f:M→ N in ℭ together with an invertible 2-morphism
[sep=small]
MA [dd, "f1"'] [rr, "n^M"] M [dd, "f"]
NA [rr, "n^N"'] [rruu, Rightarrow, "ψ^f", shorten > = 2.5ex, shorten < = 2.5ex] N,
subject to the coherence relations:
a. We have:
[string diagram equation],
b. We have:
[string diagram equation].
Let M and N be two right A-modules, and f,g:M→ M two right A-module 1-morphisms. A right A-module 2-morphism f⇒ g is a 2-morphism γ:f⇒ g in ℭ that satisfies the following equality:
[string diagram equation].
These objects assemble into a 2-category, as was proven in <cit.>.
Let A be an algebra in a monoidal 2-category ℭ. Right A-modules, right A-module 1-morphisms, and right A-module 2-morphisms form a 2-category, which we denote by 𝐌𝐨𝐝_ℭ(A).
In 2𝐕𝐞𝐜𝐭, algebras correspond exactly to finite semisimple monoidal 1-categories. Given a finite semisimple monoidal 1-category 𝒞, right 𝒞-modules in 2𝐕𝐞𝐜𝐭 are precisely finite semisimple right 𝒞-module 1-categories. A similar observation holds for module morphisms. More generally, algebras in 𝐅𝐢𝐧𝐂𝐚𝐭 are precisely finite monoidal 1-categories, whose monoidal product is right exact in both variables. Fixing such a monoidal 1-category 𝒞, right 𝒞-modules in 𝐅𝐢𝐧𝐂𝐚𝐭 correspond exactly to finite right 𝒞-module 1-categories, for which the action is right exact in both variables.
Let us also recall the following definition from <cit.>.
Let A and B be two algebras in ℭ. An algebra 1-homomorphism f:A→ B consists of a 1-morphism f:A→ B in ℭ, together with two invertible 2-morphisms
[sep=tiny]
AA [rrr, "f1"] [dddd, "m^A"'] BA [rrr, "1f"] BB [dddd, "m^B"]
[rrrr, "κ^f", Rightarrow, shorten <=4ex, shorten >=4ex]
A [rrrrrr, "f"'] B,
[sep=tiny]
I [rrr, equal] [ddd, "i^A"'] I [ddd, "i_B"]
A [rrr, "f"'] [uuurrr, "ι^f", Rightarrow, shorten <=4ex, shorten >=4ex] B,
satisfying:
a. We have:
[string diagram equation],
b. We have:
[string diagram equation],
c. We have:
[string diagram equation].
§.§ Relative Tensor Product
Recall that we are working with a strict cubical monoidal 2-category ℭ. Let us fix an algebra A in ℭ. In addition, let M be a right A-module in ℭ, and N be a left A-module in ℭ (for which we use the notations of <cit.>). Following <cit.>, we can examine whether the relative tensor product of M and N over A exists. This is determined by a 2-universal property, which we now recall for later use.
Let C be an object of ℭ. An A-balanced 1-morphism (M,N)→ C consists of:
* A 1-morphism f:M N→ C in ℭ;
* A 2-isomorphism
[sep=small]
MAN [dd, "1l^N"'] [rr, "n^M1"] MN [dd, "f"]
MN [rr, "f"'] [rruu, Rightarrow, "τ^f", shorten > = 2.5ex, shorten < = 2.5ex] C,
satisfying:
a.
[string diagram equation],
b.
[string diagram equation].
Let C be an object of ℭ, and f,g:(M,N)→ C be two A-balanced 1-morphisms. An A-balanced 2-morphism f⇒ g is a 2-morphism γ:f⇒ g in ℭ such that
[string diagram equation].
The relative tensor product of M and N over A, if it exists, is an object M_A N of ℭ together with an A-balanced 1-morphism t_A:(M,N)→ M_A N satisfying the following 2-universal property:
* For every A-balanced 1-morphism f:(M,N)→ C, there exists a 1-morphism f:M_A N→ C in ℭ and an A-balanced 2-isomorphism ξ:f∘ t_A≅ f.
* For any 1-morphisms g,h:M_A N→ C in ℭ, and any A-balanced 2-morphism γ:g∘ t_A⇒ h∘ t_A, there exists a unique 2-morphism ζ:g⇒ h such that ζ∘ t_A = γ.
Let us fix an algebra in 2𝐕𝐞𝐜𝐭, that is a finite semisimple monoidal 1-category 𝒞. Recall that we are working over an algebraically closed field k of characteristic zero for simplicity. Further, let us fix a finite semisimple right 𝒞-module 1-category ℳ, and a finite semisimple left 𝒞-module 1-category 𝒩. We view ℳ as a right 𝒞-module in 2𝐕𝐞𝐜𝐭, and 𝒩 as a left 𝒞-module. In this case, the relative tensor product ℳ⊠_𝒞𝒩 is precisely the relative Deligne tensor product of <cit.>. Namely, the above definitions correspond exactly to the definitions of section 3.1 of <cit.>. The aforementioned reference shows that the relative tensor product exists if we assume that 𝒞 has duals, i.e. 𝒞 is a multifusion 1-category. For completeness, let us also note that this example can be generalized to 𝐅𝐢𝐧𝐂𝐚𝐭 (see <cit.>).
For later use, it is convenient to compare in more detail the explicit construction of the relative Deligne tensor product given in section 3.2 of <cit.> with the abstract categorical definition above. More precisely, an object of ℳ⊠_𝒞𝒩 is a pair (V,γ) consisting of an object V of ℳ⊠𝒩 together with natural isomorphisms
γ_C: V⊗(C⊠ I)→ V⊗(I⊠ C),
for every C in 𝒞 satisfying the obvious coherence relations. Then, the canonical 𝒞-balanced functor t:ℳ⊠𝒩→ℳ⊠_𝒞𝒩 is the right adjoint of the forgetful functor. Moreover, given objects M in ℳ and N in 𝒩, and writing t(M⊠ N) = (V,γ), the balancing natural isomorphism τ^t on M⊠ N is given by
γ_C: V⊗(C⊠ I)⟶ V⊗(I⊠ C).
It follows from the definitions that this construction satisfies the desired 2-universal property.
Generalizing the above example, it is interesting to ask when the relative tensor product of M and N over A exists. We will be particularly interested in the case when ℭ is a multifusion 2-category in the sense of <cit.>. Under this hypothesis, it was shown in <cit.> that a sufficient condition for the existence of the relative tensor product is that the algebra A be separable, a notion which has its origin in <cit.>. This recovers the previous example, as separable algebras in the fusion 2-category 2𝐕𝐞𝐜𝐭 are exactly multifusion 1-categories. For completeness, we also recall the intermediate definition of a rigid algebra due to <cit.>. Both of these notions were studied extensively in <cit.>, to which we refer the reader for further discussion and examples.
A rigid algebra in a monoidal 2-category is an algebra A such that the 1-morphism m:A A→ A admits a right adjoint m^* as an A-A-bimodule 1-morphism.
A separable algebra in a monoidal 2-category is a rigid algebra A such that the counit ϵ^m:m∘ m^*⇒ Id_A splits as an A-A-bimodule 2-morphism.
Provided that the relative tensor product over A of any two modules exists in ℭ and commutes with the monoidal structure, the 2-category 𝐁𝐢𝐦𝐨𝐝_ℭ(A) of A-A-bimodules in ℭ inherits a monoidal structure given by _A, the relative tensor product over A <cit.>.
In particular, this is the case if A is a separable algebra in a fusion 2-category ℭ. In this case, the 2-category 𝐁𝐢𝐦𝐨𝐝_ℭ(A) of A-A-bimodules in ℭ has a monoidal structure provided by _A. In particular, we note that all of the coherence data is supplied by the 2-universal property of the relative tensor product.
§.§ Braided and Étale Algebras
Let 𝔅 be a semi-strict braided monoidal 2-category. We recall the following definition from <cit.>.
A braided algebra in 𝔅 consists of:
* An algebra B in 𝔅;
* A 2-isomorphism
[sep=small]
BB [rrdd, "m"] [dd, Rightarrow, "β", shorten > = 1ex, shorten < = 2ex]
BB [rruu, "b"] [rrrr, "m"'] B,
satisfying:
a. We have:
[string diagram equation],
b. We have:
[string diagram equation],
c. We have:
[string diagram equation].
Let A and B be two braided algebras in 𝔅. A braided algebra 1-homomorphism f:A→ B is an algebra 1-homomorphism f:A→ B that satisfies:
[string diagram equation].
Let us now assume that 𝔅 is a braided fusion 2-category over an algebraically closed field of characteristic zero. We note that the assumptions on the ground field can be relaxed <cit.>. We will be particularly interested in the following objects.
An étale algebra in a braided multifusion 2-category 𝔅 is a separable braided algebra. An étale algebra B is called connected provided that its unit 1-morphism i:I→ B is simple, i.e. End_𝔅(i)≅k.
Braided rigid algebras in 𝐅𝐢𝐧𝐂𝐚𝐭 are exactly finite braided multitensor 1-categories in the sense of <cit.>. Étale algebras in 2𝐕𝐞𝐜𝐭 are given by separable braided multifusion 1-categories, and connected étale algebras are braided fusion 1-categories.
Let ℰ be a symmetric fusion 1-category. It follows from <cit.> that étale algebras in 𝐌𝐨𝐝(ℰ) are exactly braided multifusion 1-categories equipped with a symmetric functor from ℰ to their symmetric center.
Let G be a finite group. Braided rigid algebras in 𝒵(2𝐕𝐞𝐜𝐭_G) are exactly G-crossed braided multifusion 1-categories. Thanks to corollary 5.1.2 of <cit.>, all such algebras are étale. We note that a G-crossed braided multifusion 1-category ℬ yields a connected étale algebra in 𝒵(2𝐕𝐞𝐜𝐭_G) if and only if the canonical G-action on the monoidal unit of ℬ permutes all the simple summands transitively.
Let ℬ be a braided fusion 1-category. It follows from <cit.> and <cit.> that étale algebras in 𝒵(𝐌𝐨𝐝(ℬ)) are braided multifusion 1-categories equipped with a braided functor from ℬ. Further, connected étale algebras are braided fusion 1-categories equipped with a braided functor from ℬ. Such objects played a key role in the proof of the minimal non-degenerate extension conjecture obtained in <cit.>.
§.§ Induction 2-Functors
Let us now fix a braided algebra B in a braided monoidal 2-category 𝔅, which we assume is semi-strict without loss of generality. Further, we will assume that relative tensor products over B exist in 𝔅, and are preserved by the monoidal product of 𝔅. Under these hypotheses, and building upon <cit.>, it was shown in section 3.2 of <cit.> that the 2-category of right B-modules in 𝔅 admits a monoidal structure. We denote this monoidal 2-category by 𝐌𝐨𝐝^+_𝔅(B), and its monoidal product by _B^+. For later use, let us recall more precisely how this monoidal structure is constructed. Given a right B-module M, we can define a B-B-bimodule Ind^+(M) by endowing the right B-module M with a compatible left B-module structure using the left action l^M_+:=n^M∘ b:B M→ M, and the following 2-isomorphisms
[string diagrams defining the 2-isomorphisms λ^M, κ^M, and β^M].
Here, we are using the notations of <cit.> for left modules, and bimodules. Then, it was shown in lemma 3.3 of <cit.> that the above assignment extends to a 2-functor
Ind^+:𝐌𝐨𝐝^+_𝔅(B)→𝐁𝐢𝐦𝐨𝐝_𝔅(B),
which is fully faithful on 2-morphisms. In particular, we may view 𝐌𝐨𝐝^+_𝔅(B) as a sub-2-category of 𝐁𝐢𝐦𝐨𝐝_𝔅(B). But, the relative tensor product over B endows the 2-category 𝐁𝐢𝐦𝐨𝐝_𝔅(B) with a monoidal structure <cit.>. Further, it was established in proposition 3.4 of <cit.> that the sub-2-category 𝐌𝐨𝐝^+_𝔅(B) is closed under the relative tensor product over B. In particular, it admits a monoidal structure, with monoidal product denoted using _B^+. For our current purposes, it is convenient to rephrase this result in the following way.
The 2-functor Ind^+:𝐌𝐨𝐝^+_𝔅(B)→𝐁𝐢𝐦𝐨𝐝_𝔅(B) is monoidal.
Given M, N two right B-modules, we will need to have an explicit description of the equivalence
X_M,N^+:Ind^+(M)_BInd^+(N)→ Ind^+(M_B^+N),
witnessing that Ind^+ is monoidal. It follows from the proof of proposition 3.4 of <cit.> that the underlying right B-module 1-morphism is the identity on M_B^+N. Further, the left B-module structure is constructed using the 2-universal property of the relative tensor product over B. More precisely, if t:M N→ M^+_B N denotes the 2-universal B-balanced right B-module 1-morphism with balancing τ^t_+:t∘ (M l^N_+)⇒ t∘ (n^M N), then the 2-isomorphism
(t∘ 1n^N ∘ R^-1)·(τ^t^-1_+∘ b1):t∘ (n^M N) ∘ (b N)≅ t∘ (M n^N)∘ b
is B-balanced. Thus, there exists a 2-isomorphism χ^X_M,N^+:l^M^+_BN≅ n^M^+_BN∘ b ensuring that X_M,N^+ is compatible with the left B-module structures.
As indicated in <cit.>, the above construction admits a variant that uses b^∙ instead of b. Succinctly, given a right B-module M, we can define a B-B-bimodule Ind^-(M) by endowing the right B-module M with a compatible left B-module structure using the left action l^M_-:=n^M∘ b^∙:B M→ M, and appropriate 2-isomorphisms. This assignment extends to a 2-functor
Ind^-:𝐌𝐨𝐝^-_𝔅(B)→𝐁𝐢𝐦𝐨𝐝_𝔅(B).
Further, there is a monoidal structure on the 2-category 𝐌𝐨𝐝^-_𝔅(B) with monoidal product _B^-, and such that the 2-functor Ind^- is monoidal. Given M, N two right B-modules, the equivalence
X^-_M,N:Ind^-(M)_BInd^-(N)→ Ind^-(M_B^-N)
has underlying right B-module 1-morphism the identity on M_B^-N, and its left B-module structure is constructed using the 2-universal property of the relative tensor product over B.
§ LOCAL MODULES AND THEIR PROPERTIES
§.§ Definition
Let us fix B a braided algebra in a braided monoidal 2-category 𝔅. Without loss of generality, we will assume that 𝔅 is semi-strict. We begin by recalling the definition of a local B-module, which has already appeared in appendix B of <cit.> (see also <cit.> for a related definition in a slightly different context).
A local B-module in 𝔅 consists of:
* A right B-module M in 𝔅,
* A 2-isomorphism, called a holonomy,
[sep=small]
BM [rr, "b"] [dd, Rightarrow, "h^M", shorten > = 2ex, shorten < = 1ex, near start] MB [dd, "n^M"]
MB [uu, "b"] [rr, "n^M"'] M,
satisfying:
a. We have that equation (<ref>) is satisfied,
b. We have:
[string diagram equation],
c. We have:
[string diagram equation].
Given two local B-modules M and N, a 1-morphism of local B-modules is a right B-module 1-morphism f in 𝔅 satisfying the following equation
[string diagram equation].
Given two 1-morphisms of local B-modules f,g:M→ N, a 2-morphism of local B-modules is a right B-module 2-morphism f⇒ g.
Local B-modules in 𝔅, 1-morphisms of local B-modules, and 2-morphisms of local B-modules form a 2-category, which we denote by 𝐌𝐨𝐝^loc_𝔅(B).
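Before proceeding, let us record the simplest instance, stated here as an expectation of ours since it is not spelled out above: the braided algebra B is itself a local B-module, with right action n^B=m and holonomy built from the braided-algebra datum β applied twice,
h^B: m∘ b∘ b ⟹ m∘ b ⟹ m,
the first 2-cell being β whiskered with b and the second β itself. In the monoidal structure constructed below, B so equipped plays the role of the monoidal unit.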
The following technical results will play a key role in all of the subsequent constructions. The first lemma is based on an observation given in remark C.8 of <cit.>. It allows us to give an equivalent perspective on holonomies, which is extremely useful in practice.
Let M be a right B-module. The data of a holonomy on M corresponds exactly to the data necessary to upgrade the canonical right B-module 1-morphism Id_M:M→ M to a 1-morphism of B-B-bimodules ħ^M:Ind^+(M)→ Ind^-(M).
In the notations of section 1.3 of <cit.>, the relevant coherence 2-isomorphism witnessing that ħ^M is compatible with the left B-module structures is of the form
[sep=small]
BM [dd, "b^∙"'] [rr, "b"] MB [dd, "n^M"]
MB [rr, "n^M"'] [rruu, Rightarrow, "χ^ħ^M", shorten > = 2.5ex, shorten < = 2.5ex] B,
In particular, χ^ħ^M is obtained from (h^M)^-1 by currying, i.e. we have
[string diagram defining χ^ħ^M].
It is easy to check that the equations required of h^M to define a holonomy on M correspond exactly to the equations required of χ^ħ^M to define a compatible left B-module structure on Id_M as in the statement of the lemma.
Let M and N be two local right B-modules, and let f:M→ N be a right B-module 1-morphism. Then, f is a 1-morphism of local B-modules if and only if the following square of B-B-bimodules 1-morphisms commutes
[sep=small]
Ind^+(M) [rr, "ħ^M"] [dd, "Ind^+(f)"'] Ind^-(M) [dd, "Ind^-(f)"]
Ind^+(N) [rr, "ħ^N"'] Ind^-(N).
Provided the square commutes strictly, the proof follows readily by direct inspection. It therefore only remains to explain why it is enough to check that the square commutes weakly. To see this, note that the underlying diagram of right B-modules commutes strictly. Moreover, the B-B-bimodule 1-morphisms ħ^M and ħ^N are isomorphisms. Thus, provided that the above square commutes weakly, the choices of filling correspond to the invertible elements in the algebra End_B-B(Ind^+(f)). But, this algebra is isomorphic to End_B(f) as the 2-functor Ind^+ is fully faithful on 2-morphisms by lemma 3.3. of <cit.>. In particular, if the square commutes up to any B-B-bimodule 2-isomorphism, then it necessarily commutes strictly.
§.§ The Braided Monoidal 2-Category of Local Modules
We show that the 2-category of local modules admits a canonical braided monoidal structure. Our construction proceeds by enhancing the construction of a monoidal 2-category of right modules over a braided algebra given in <cit.>. We also point the reader towards <cit.> for related results in a different context. Throughout, we let 𝔅 be a braided monoidal 2-category, and B a braided algebra in 𝔅 such that the relative tensor product over B of any two modules exists. Further, we assume that these relative tensor products are preserved by the monoidal product of 𝔅.
The 2-category 𝐌𝐨𝐝^loc_𝔅(B) of local B-modules inherits a monoidal structure from 𝐌𝐨𝐝^+_𝔅(B).
Without loss of generality, we may assume that 𝔅 is semi-strict. As was already recalled in section <ref>, the 2-category 𝐌𝐨𝐝^+_𝔅(B) of right B-modules carries a monoidal structure given by the relative tensor product ^+_B. Thus, it is enough to enhance this construction to include holonomies. We do so using the perspective of lemma <ref>.
More precisely, let M and N be two local right B-modules with holonomies h^M and h^N respectively. We will equivalently think of h^M and h^N as B-B-bimodule 1-morphisms
ħ^M:Ind^+(M)→ Ind^-(M), ħ^N:Ind^+(N)→ Ind^-(N).
Then, the relative tensor product M^+_BN can be equipped with a natural holonomy, namely the one that corresponds to the B-B-bimodule 1-morphism
Ind^+(M_B^+N) → Ind^+(M)_B Ind^+(N) → Ind^-(M)_B Ind^+(N) → Ind^-(M)_B Ind^-(N) → Ind^-(M^+_BN),
where the arrows are (X^+_M,N)^-1, ħ^M_B 1, 1_B ħ^N, and X^-_M,N respectively.
In order to appeal to lemma <ref>, we have to make sure that the underlying right B-module 1-morphism is the identity. In order to ensure this, we make the convention that M_B^+N = M_B^-N as right B-modules. Namely, the B-B-bimodules Ind^+(N) and Ind^-(N) are isomorphic via ħ^N. Then, the right B-modules M_B^+N equipped with the B-balanced right B-module 1-morphism t_M,N∘ (M (ħ^N)^-1) satisfies the 2-universal property of M_B^-N. We may therefore without loss of generality assume that they are equal. This convention ensures that the underlying right B-module 1-morphism of the composite given in equation (<ref>) is the identity as the underlying right B-module of Ind^-(M)_BInd^-(N) is M_B^-N by construction.
Now, let M, N, and P be right B-modules equipped with holonomies h^M, h^N, and h^P respectively. Thanks to proposition 3.4 of <cit.>, there is a 2-natural equivalence
α_M,N,P:(M^+_BN)^+_BP≃ M^+_B(N^+_BP)
witnessing the associativity of the monoidal product of 𝐌𝐨𝐝_𝔅^+(B). In fact, this 2-natural equivalence is compatible with the holonomies, as can be readily seen from lemma <ref> and the fact that Ind^+ and Ind^- are monoidal 2-functors. A similar argument shows that the 2-natural equivalences
l_N:B^+_BN→ N, r_M:M^+_BB→ M
witnessing the unitality of the monoidal product of 𝐌𝐨𝐝_𝔅^+(B) are also compatible with the holonomies.
Finally, there is nothing to check at the level of invertible modifications. Namely, a 2-morphism of local right B-modules is nothing but a 2-morphism of right B-modules. They satisfy the necessary equations thanks to proposition 3.4 of <cit.>. This finishes the proof of the proposition.
It follows from the construction of the monoidal structure on 𝐌𝐨𝐝^loc_𝔅(B) that the canonical forgetful 2-functor
𝐓:𝐌𝐨𝐝^loc_𝔅(B)→𝐌𝐨𝐝^+_𝔅(B)
admits a monoidal structure. Further, this 2-functor is fully faithful on 2-morphisms. For later use, let us also record that 𝐓 is conservative on 1-morphisms, i.e. a 1-morphism f:M→ N is an equivalence in 𝐌𝐨𝐝^loc_𝔅(B) if and only if 𝐓(f) is an equivalence.
Let 𝔅 be a braided monoidal 2-category, and B a braided algebra in 𝔅. If 𝔅 has relative tensor products over B, and they are preserved by the monoidal product of 𝔅, then the 2-category 𝐌𝐨𝐝^loc_𝔅(B) of local B-modules admits a braided monoidal structure.
We focus on constructing the braiding. The remainder of the proof is very similar to that of theorem 3.8 of <cit.>. More precisely, let M and N be two local right B-modules, and write
t_M,N:M N→ M^+_B N, t_N,M:N M→ N^+_B M
for the 2-universal right B-balanced 1-morphisms. Then, the composite 1-morphism t_N,M∘ b_M,N:M N→ N^+_B M is upgraded to a B-balanced right B-module 1-morphism via
[string diagrams defining τ^t∘ b and ψ^t∘ b].
Namely, the fact that ψ^t∘ b defines a right B-module structure on t_N,M∘ b_M,N follows readily from equation (<ref>). Further, the proof that τ^t∘ b defines a B-balancing follows the argument given in the proof of theorem 3.8 of <cit.>, except that we use equation (<ref>) in lieu of the axioms of their definitions 2.1.2 and 3.2.
Thus, appealing to the 2-universal property of t_M,N, the solid arrow diagram below can be filled using a B-balanced right B-module 2-isomorphism:
[sep=small]
M N [dd, "b_M,N"'] [rr, "t_M,N"] M^+_B N [dd, "b_M,N", dotted]
N M [rr, "t_N,M"'] [rruu, Rightarrow, "ξ_M,N", shorten > = 3ex, shorten < = 4ex] N^+_B M.
In order to check that b is compatible with the holonomies, we will appeal to the criterion of lemma <ref>. Firstly, there is a commutative diagram of B-B-bimodules
[sep=small]
Ind^+(M)_B Ind^+(N) [rr, "b^+"] [dd, "X^+_M,N"'] Ind^+(N)_B Ind^+(M) [dd, "X^+_N,M"]
Ind^+(M_B^+N) [rr, "Ind^+(b)"'] Ind^+(N_B^+M).
More precisely, the B-B-bimodule 1-morphism b^+ is defined using a variant of the construction of b, in which the square (<ref>) is viewed in the 2-category of B-B-bimodules. In order to do this, it is enough to endow t∘ b with a compatible left B-module structure via
[string diagram defining χ^t∘ b].
It follows easily from the definitions that the square (<ref>) commutes.
Secondly, also note that a square similar to (<ref>) with + replaced by - commutes. More precisely, we upgrade the 1-morphism
t_N,M∘ b_M,N:Ind^-(M) Ind^-(N)→ Ind^-(N)_B Ind^-(M)
to a right B-module B-balanced 1-morphism via
[string diagrams defining τ_-^t∘ b and ψ_-^t∘ b].
We claim that this produces the right B-module 1-morphism b:M_B^-N→ N_B^-M. Namely, recall the convention that M_B^-N = M_B^+N, which we have adopted in the proof of proposition <ref>. In particular, the B-balanced structure of t:M N→ M_B^-N is given by τ^t_- = τ^t ∘ (Id (ħ^N)^-1). The claim then follows by checking that b satisfies the appropriate 2-universal property. Further, endowing t_N,M∘ b_M,N:Ind^-(M) Ind^-(N)→ Ind^-(N)_B Ind^-(M) with a compatible left B-module structure
[string diagram defining χ_-^t∘ b]
and appealing to the 2-universal property produces a B-B-bimodule 1-morphism b^-. In summary, we have a commutative square of B-B-bimodules
[sep=small]
Ind^-(M)_B Ind^+(N) [rr, "b^-"] [dd, "X^-_M,N"'] Ind^+(N)_B Ind^-(M) [dd, "X^-_N,M"]
Ind^-(M_B^-N) [rr, "Ind^-(b)"'] Ind^-(N_B^-M).
Thirdly, we claim that there exists a 2-isomorphism of B-B-bimodules that makes the diagram below commute
[sep=small]
Ind^+(M)_B Ind^+(N) [dd, "ħ^M_B ħ^N"'] [rr, "b^+"] Ind^+(N)_B Ind^+(M) [dd, "ħ^N_B ħ^M"]
Ind^-(M)_B Ind^-(N) [rr, "b^-"'] Ind^-(N)_B Ind^-(M).
Thanks to the 2-universal property of the relative tensor product, it is enough to show that there exists a B-balanced B-B-bimodule 2-isomorphism that fits into the next diagram
[sep=small]
Ind^+(M) Ind^+(N) [rr, "b"] [dd, "ħ^Mħ^N"'] Ind^+(N) Ind^+(M) [rr, "ħ^Nħ^M"] Ind^-(N) Ind^-(M) [dd, "t"]
Ind^-(M) Ind^-(N) [rr, "b"'] Ind^-(N) Ind^-(M) [rr, "t"'] Ind^-(N)_B Ind^-(M).
We emphasize that b does not carry a B-B-bimodule structure. Rather, the bottom-left and top-right composite 1-morphisms do. The desired 2-isomorphism is then supplied by the naturality 2-isomorphism of the 2-natural transformation b. One checks by tracing through the definitions that this 2-isomorphism is B-balanced, and is compatible with the B-B-bimodule structures.
Putting the above discussion together, we do find that the right B-module 1-morphism b_M,N satisfies the criterion of lemma <ref>, so that it is compatible with the holonomies. Appealing to the 2-universal property of the relative tensor product _B, it is standard to check that the collection of local right B-module 1-morphisms b_M,N for varying M and N assemble into a 2-natural equivalence b.
Finally, analogously to what is done in the proof of theorem 3.8 of <cit.>, we can construct invertible modifications R and S witnessing that b is appropriately coherent by using the 2-universal property of the relative tensor product. There is no further complication as 2-morphisms of local right B-modules are nothing but 2-morphisms of right B-modules. This endows 𝐌𝐨𝐝^loc_𝔅(B) with a braiding in the sense of definition 2.3 of <cit.> as desired.
It follows from the proof of the above results that the canonical forgetful 2-functor
𝐔:𝐌𝐨𝐝^loc_𝔅(B)→𝔅
admits a lax braided monoidal structure given on the objects M and N of 𝐌𝐨𝐝^loc_𝔅(B) by t_M,N:M N→ M^+_B N, the 1-morphism supplied by the 2-universal property of the relative tensor product over B. For later use, let us note that the 2-natural transformation t is strong, and that the relevant coherence modifications as in definition 2.5 of <cit.> are all invertible.
The construction of the braiding given in the proof of theorem <ref> does not use the fact that the B-module N is local. In fact, this construction can be upgraded to a braided monoidal 2-functor
𝐌𝐨𝐝_𝔅^loc(B)→𝒵(𝐌𝐨𝐝_𝔅(B)).
§.§ Local Modules in Braided Fusion 2-Categories
Let k be an algebraically closed field of characteristic zero. We will now specifically study the properties of the 2-category of local modules in a braided multifusion 2-category 𝔅. We note that the hypothesis on the characteristic of k can be dropped if desired, but we will keep it for simplicity.
Let B be an étale algebra in a braided multifusion 2-category 𝔅, then the 2-category 𝐌𝐨𝐝^loc_𝔅(B) of local B-modules is finite semisimple.
Let M and N be two local B-modules in 𝔅. By definition, the 1-category Hom_B^loc(M,N) of morphisms of local B-modules is a full sub-1-category of Hom_B(M,N), which is a finite semisimple 1-category by <cit.>. Further, it is easy to check that Hom_B^loc(M,N) is closed under direct sums and splittings of idempotents, so that Hom_B^loc(M,N) is also a finite semisimple 1-category.
It is clear that the 2-category 𝐌𝐨𝐝^loc_𝔅(B) has direct sums for objects. Next, we show that 𝐌𝐨𝐝^loc_𝔅(B) is closed under the splitting of 2-condensation monads. Take a local B-module M, together with a 2-condensation monad (M,e,ξ,δ) in the 2-category 𝐌𝐨𝐝^loc_𝔅(B). By forgetting the holonomy, we obtain a 2-condensation monad on M viewed as a right B-module. By proposition 3.3.8 of <cit.>, all 2-condensation monads in 𝐌𝐨𝐝_𝔅(B) split. Thus, there exists a 2-condensation (M,N,f,g,ϕ,γ) in 𝐌𝐨𝐝_𝔅(B), together with a splitting θ:g ∘ f ≃ e, i.e. a right B-module 2-isomorphism satisfying
ξ = θ· (g ∘ϕ∘ f) · (θ^-1∘θ^-1),
δ = (θ∘θ) · (g ∘γ∘ f) ·θ^-1.
We claim that N can be endowed with a holonomy h^N compatible with both f and g. This proves that 𝐌𝐨𝐝^loc_𝔅(B) is a Cauchy complete 2-category.
In order to prove the above claim, we let h^N be the 2-isomorphism given by:
[string diagram defining h^N].
We now show that h^N does define a holonomy on N using the figures given in appendix <ref>. The left hand-side of equation (<ref>) is depicted in figure <ref>. To get to figure <ref>, we move the coupon labeled γ1 to the left along the blue arrow. Then, figure <ref> is obtained by moving the indicated string to the top along the blue arrows, as well as applying equation (<ref>) to the green coupons. Figure <ref> is attained by applying equation (<ref>) to the blue coupons. So as to obtain figure <ref>, we move the coupons labelled S^-1 and R^-1 to the left along the blue arrows, followed by moving the coupons labelled γ 11, ψ^f1, and ϕ1 to the top along the indicated green arrows, then apply equation (<ref>) on the red coupons. Figure <ref> is produced by both applying equation (<ref>) on the blue coupons and creating a pair of cancelling coupons labeled γ 11 and ϕ11 in the green region. We get to figure <ref> by moving the freshly created coupon labelled γ11 to the left along the blue arrow, and inserting a pair of cancelling coupons labelled θ11 and θ^-111 in the green region. Figure <ref> is obtained via the use of equation (<ref>) on the blue coupons. We then apply equation (<ref>) on the blue coupons, bringing us to figure <ref>. Figure <ref> is subsequently obtained by applying equation (<ref>) twice. Cancelling the two coupons in blue brings us to figure <ref>, which depicts the right hand-side of equation (<ref>). Further, equation (<ref>) for h^N follows from repeated application of equation (<ref>) together with equation (<ref>) for h^M. Finally, equation (<ref>) for h^N follows from two applications of (<ref>) and one application of equation (<ref>) for h^M. Likewise, it is not hard to show that f and g are local B-module 1-morphisms.
Using a similar type of argument, one shows that 𝐌𝐨𝐝^loc_𝔅(B) has adjoints for 1-morphisms. Namely, if f:M→ N is a 1-morphism of local right B-modules, then it is not hard to show that the 2-isomorphism
ξ^f [string diagram omitted]
defined in the proof of proposition 2.2.5 of <cit.> satisfies the opposite of equation (<ref>). But, it follows from the same proof that ξ^f = (ψ^^*f)^-1, so that the left adjoint of f is a 1-morphism of local right B-modules.
Finally, it remains to prove that 𝐌𝐨𝐝^loc_𝔅(B) has finitely many equivalence classes of simple objects. To see this, recall that the forgetful 2-functor 𝐓:𝐌𝐨𝐝^loc_𝔅(B)→𝐌𝐨𝐝^+_𝔅(B) is fully faithful on 2-morphisms. In particular, it sends simple objects to simple objects. Hence, it is enough to show that every simple right B-module M only admits finitely many holonomies up to equivalence. It follows from lemma <ref> that a holonomy on M corresponds exactly to the data of an upgrade of the canonical right B-module 1-morphism Id_M:M→ M to a 1-morphism of B-B-bimodules Ind^+(M)→ Ind^-(M). This shows that the set of holonomies on M up to equivalence injects into the set of equivalence classes of invertible B-B-bimodule 1-morphisms Ind^+(M)→ Ind^-(M). The latter set is finite thanks to theorem 3.1.6 of <cit.> and corollary 2.2.3 of <cit.>. This finishes the proof.
Let 𝔅 be a braided multifusion 2-category, and B an étale algebra in 𝔅, then 𝐌𝐨𝐝^loc_𝔅(B) has duals.
The proof follows by combining lemma <ref> with the construction of the dual 2-functor given in appendix <ref>. More precisely, let M be a local right B-module in 𝔅. Equivalently, we can think of the holonomy on M as the data necessary to upgrade the canonical right B-module 1-morphism Id_M:M→ M in 𝔅 to a 1-morphism of B-B-bimodules ħ^M:Ind^+(M)→ Ind^-(M). Let us also note that ħ^M is a 1-isomorphism, and write (ħ^M)^-1 for its inverse. Thanks to our assumptions, it follows from <cit.> and the proof of theorem 3.6 of <cit.> that the monoidal 2-categories 𝐌𝐨𝐝^+_𝔅(B) and 𝐁𝐢𝐦𝐨𝐝_𝔅(B) have duals. In particular, holonomies on M^♯ correspond to the data of an upgrade of the canonical right B-module 1-morphism Id_M^♯:M^♯→ M^♯ to a 1-morphism of B-B-bimodules Ind^+(M^♯)→ Ind^-(M^♯). By lemma <ref>, up to equivalence, this last B-B-bimodule 1-morphism is Ind^+(M)^♯→ Ind^-(M)^♯. Thus, we can endow M^♯ with the holonomy corresponding to the B-B-bimodule 1-morphism ((ħ^M)^-1)^♯:Ind^+(M^♯)→ Ind^-(M^♯), i.e. we have a commuting diagram of B-B-bimodule 1-morphisms
Ind^+(M^♯) ───ħ^M^♯───→ Ind^-(M^♯)
     │ ≃                        │ ≃
     ↓                          ↓
Ind^+(M)^♯ ──((ħ^M)^-1)^♯──→ Ind^-(M)^♯.
It remains to check that M^♯ equipped with this holonomy is a right dual for M in 𝐌𝐨𝐝_𝔅^loc(B). It is enough to check that the coevaluation and evaluation 1-morphisms i_M:B→ M^♯^+_B M and e_M:M^+_B M^♯→ B in 𝐌𝐨𝐝^+_𝔅(B) are compatible with the holonomies. But, recall from the proof of proposition <ref> that the holonomy on the product M^+_B N of two local B-modules M and N is, up to coherence 1-morphisms, the one corresponding to the B-B-bimodule 1-morphism ħ^M_B ħ^N. The claim therefore follows from lemma <ref> by unfolding the definitions and appealing to lemma <ref>.
More generally, it is not necessary to assume that 𝔅 be multifusion, and that B be separable. The proof of proposition <ref> continues to hold provided that 𝔅 has duals, and that the relative tensor product over B of any two B-modules exists in 𝔅 and commutes with the monoidal structure.
Combining the two propositions above, we obtain our second main result.
Let B be an étale algebra in a braided multifusion 2-category 𝔅. Then, 𝐌𝐨𝐝_𝔅^loc(B) is a braided multifusion 2-category.
Let B be a connected étale algebra in a braided multifusion 2-category 𝔅. Then, 𝐌𝐨𝐝_𝔅^loc(B) is a braided fusion 2-category.
§ APPLICATIONS AND EXAMPLES
§.§ Braided Module 1-Categories
We begin by examining more precisely the notion of local module in the braided fusion 2-category 𝔅 = 2𝐕𝐞𝐜𝐭. We note that the next results hold more generally without the semisimplicity assumptions, but, for simplicity, we will focus on the semisimple case and work over an algebraically closed field of characteristic zero. Throughout, we work over a braided fusion 1-category ℬ, whose underlying fusion 1-category is assumed to be strict without loss of generality, and with braiding denoted by β. We will compare the notion of a local ℬ-module in 2𝐕𝐞𝐜𝐭 with that of a finite semisimple braided ℬ-module 1-category introduced in section 4 of <cit.>. We begin by unfolding our definition of a holonomy in the particular case under consideration.
A local right ℬ-module 1-category consists of:
* A right ℬ-module 1-category ℳ, with coherence natural isomorphism α given on M in ℳ, and B, C in ℬ by
α_M,B,C:(M⊗ B)⊗ C→ M⊗ (B⊗ C);
* A holonomy, that is a natural isomorphism h given on M in ℳ and B in ℬ by
h_M,B:M ⊗ B → M ⊗ B,
satisfying:
a. We have h_M,I = Id_M for all M in ℳ,
b. For every M in ℳ and B, C in ℬ, the following diagram commutes:
(M ⊗ B) ⊗ C ──h_M,B ⊗ Id_C──→ (M ⊗ B) ⊗ C
     │ α_M,B,C                        │ h_M⊗B,C
     ↓                                ↓
M ⊗ (B ⊗ C)                       (M ⊗ B) ⊗ C
     │ Id_M ⊗ β_B,C                   │ α_M,B,C
     ↓                                ↓
M ⊗ (C ⊗ B) ──Id_M ⊗ β_C,B──→ M ⊗ (B ⊗ C) ──h_M,B⊗C──→ M ⊗ (B ⊗ C),
c. For every M in ℳ and B, C in ℬ, the following diagram commutes:
(M ⊗ B) ⊗ C ──h_M⊗B,C──→ (M ⊗ B) ⊗ C
     │ α_M,B,C                   │ α_M,B,C
     ↓                           ↓
M ⊗ (B ⊗ C)                  M ⊗ (B ⊗ C)
     │ Id_M ⊗ β_B,C              │ Id_M ⊗ β_C,B^-1
     ↓                           ↓
M ⊗ (C ⊗ B)                  M ⊗ (C ⊗ B)
     │ α_M,C,B^-1                │ α_M,C,B^-1
     ↓                           ↓
(M ⊗ C) ⊗ B ──h_M,C ⊗ Id_B──→ (M ⊗ C) ⊗ B.
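Before turning to functors, it may help to record the basic example (a sketch; this double-braiding construction is the standard one in the theory of braided module 1-categories): ℳ = ℬ, viewed as a right module over itself, carries the holonomy

h_M,B := β_B,M ∘ β_M,B : M ⊗ B → M ⊗ B.

Condition (a) holds since braiding with the monoidal unit is trivial, while conditions (b) and (c) reduce to the two hexagon axioms for β.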
A ℬ-module functor F:ℳ→𝒩 between two local right ℬ-module 1-categories ℳ and 𝒩 with coherence natural isomorphism s is braided if the diagram below commutes for all B in ℬ and M in ℳ
B ⊗ F(M) ──h_B,F(M)──→ B ⊗ F(M)
    │ s_B,M                 │ s_B,M
    ↓                       ↓
F(B ⊗ M) ──F(h_B,M)──→ F(B ⊗ M).
In section 4 of <cit.>, a notion of braiding on a left ℬ-module 1-category was introduced. Further, it is shown therein that the 2-category of finite semisimple braided left ℬ-module 1-categories admits a braided monoidal structure. Let us recall that the underlying monoidal structure is given by the relative tensor product over ℬ as in example <ref>. We write 𝐋𝐌𝐨𝐝^br(ℬ) for this braided monoidal 2-category.
Let ℬ be a braided fusion 1-category. There is an equivalence of braided monoidal 2-categories
𝐋𝐌𝐨𝐝^br(ℬ) ≃𝐌𝐨𝐝^loc(ℬ).
Firstly, we have to explain our convention for relating right ℬ-module 1-categories with left ℬ-module 1-categories. For any B, C in ℬ and M in ℳ, we set B⊙ M := M⊗ B, and use the natural isomorphism
(B⊗ C)⊙ M = M⊗ (B⊗ C) ⟶ M⊗ (C⊗ B) ⟶ (M⊗ C)⊗ B = B⊙ (C⊙ M)
to witness the associativity of the left action. In the notation of the previous section, we are considering the underlying left ℬ-module of Ind^-(ℳ). Secondly, let h be a holonomy on the right ℬ-module 1-category ℳ. Then, σ:=h is a ℬ-module braiding on the left ℬ-module 1-category Ind^-(ℳ) in the sense of definition 4.1 of <cit.>. In particular, this assignment extends to an equivalence of 2-categories 𝐋𝐌𝐨𝐝^br(ℬ)≃𝐌𝐨𝐝^loc(ℬ).
It remains to show that this equivalence is compatible with the braided monoidal structures. Both of the monoidal structures are enhancements of the relative Deligne tensor product over ℬ. More precisely, let ℳ and 𝒩 be two finite semisimple local right ℬ-module 1-categories. On the one hand, the monoidal structure on 𝐌𝐨𝐝^loc(ℬ) is obtained by endowing ℳ⊠_ℬ^+𝒩 with a holonomy. Given objects M in ℳ and N in 𝒩, recall from example <ref> that the image of M⊠ N under the canonical ℬ-balanced functor t:ℳ⊠𝒩→ℳ⊠^+_ℬ𝒩 is given by t(M⊠ N)=(V,τ_+^-1) with V an object of ℳ⊠𝒩. Further, the ℬ-balanced structure is witnessed by the natural isomorphism τ_+. It then follows from the proof of proposition <ref> that the holonomy on ℳ⊠^+_ℬ𝒩 is completely characterized by the isomorphism
V⊗ (I⊠ B) ⟶ V⊗ (I⊠ B) ⟶ V⊗ (B⊠ I) ⟶ V⊗ (B⊠ I) ⟶ V⊗ (I⊠ B)
for every B in ℬ. We emphasize that here we are describing the holonomy on ℳ⊠^+_ℬ𝒩. In particular, the above expression is the inverse of the one given in the proof of <ref> because of lemma <ref>.
On the other hand, the monoidal structure on 𝐋𝐌𝐨𝐝^br(ℬ) is obtained by endowing Ind^-(ℳ)⊠_ℬ^-Ind^-(𝒩) with a ℬ-braiding as in remark 4.13 of <cit.>. Given objects M in Ind^-(ℳ) and N in Ind^-(𝒩), it follows from the convention taken in the proof of proposition <ref> that the image of M⊠ N under the canonical ℬ-balanced functor t:Ind^-(ℳ)⊠ Ind^-(𝒩)→ Ind^-(ℳ)⊠_ℬ^-Ind^-(𝒩) is given by t(M⊠ N)=(V,τ_+^-1). However, the ℬ-balanced structure is witnessed by the natural isomorphism τ_-. Then, the ℬ-braiding is completely characterized by the isomorphism
V⊗ (B⊠ I) ⟶ V⊗ (I⊠ B) ⟶ V⊗ (I⊠ B) ⟶ V⊗ (B⊠ I) ⟶ V⊗ (B⊠ I)
for every M in Ind^-(ℳ), N in Ind^-(𝒩), and B in ℬ.
These agree up to the left ℬ-module equivalence
Ind^-(ℳ⊠_ℬ^+𝒩) → Ind^-(ℳ)⊠_ℬ^-Ind^-(𝒩)
whose underlying functor is characterized by
V⊗ (I⊠ B)M⊠ V⊗ (I⊠ B),
and whose left ℬ-module structure is induced by τ_+. More precisely, equation (<ref>) holds with these assignments. This follows from the fact that the following diagram commutes
V⊗ (B⊠ I) ──τ_+^-1──→ V⊗ (I⊠ B)
        ╲                  │
   τ_-^-1 ╲                │ Id ⊠ h^𝒩
           ↘               ↓
             V⊗ (I⊠ B),
for every B in ℬ. This is a consequence of the convention that ℳ⊠_ℬ^+𝒩=ℳ⊠_ℬ^-𝒩 as right ℬ-module 1-categories, which we have imposed during the proof of proposition <ref>. Using this, it is straightforward to check that 𝐋𝐌𝐨𝐝^br(ℬ)≃𝐌𝐨𝐝^loc(ℬ) as monoidal 2-categories by appealing to the 2-universal property of the relative Deligne tensor product.
Finally, the two braidings are defined as follows. On the one hand, the proof of theorem <ref> constructs a braiding ℳ⊠^+_ℬ𝒩→𝒩⊠^+_ℬℳ that is completely characterized by the right ℬ-module isomorphism
V⊗(B⊠ I) ⟶ V⊗(I⊠ B) ⟶ V⊗(I⊠ B)
for every B in ℬ. On the other hand, by remark 4.13 of <cit.>, the braiding Ind^-(ℳ)⊠_ℬ^-Ind^-(𝒩)→ Ind^-(𝒩)⊠_ℬ^-Ind^-(ℳ) is characterised by
V⊗ (B⊠ I) ⟶ V⊗ (B⊠ I) ⟶ V⊗ (I⊠ B)
for every B in ℬ. Here τ_-^† refers to the transpose of τ_-. But, it follows from the 2-universal property of the relative Deligne tensor product that τ_-^† and τ_+^-1 are isomorphic ℬ-balancing. Using this along with the 2-universal property of the relative Deligne tensor product, one checks that 𝐋𝐌𝐨𝐝^br(ℬ)≃𝐌𝐨𝐝^loc(ℬ) as braided monoidal 2-categories, which concludes the proof.
One of the main motivations behind the study of braided ℬ-module 1-categories is that they can be used to model the Drinfeld center of the associated 2-category of ℬ-module 1-categories, as shown in theorem 4.11 of <cit.>. In particular, this readily gives the following corollary.
Let ℬ be a braided fusion 1-category. Then, we have an equivalence of braided fusion 2-categories
𝐌𝐨𝐝^loc(ℬ) ≃𝒵(𝐌𝐨𝐝(ℬ)).
§.§ Braided Algebras and Local Modules in the 2-Category of Local Modules
Let us now fix a braided monoidal 2-category 𝔅, and A a braided algebra in 𝔅 for which the relative tensor product of any two modules exists and commutes with the monoidal product. We have seen in theorem <ref> that the 2-category 𝐌𝐨𝐝_𝔅^loc(A) is braided monoidal. It is therefore natural to ask what are the braided algebras in this 2-category.
The data of a braided algebra in 𝐌𝐨𝐝^loc_𝔅(A) corresponds exactly to the data of a braided algebra B in 𝔅 equipped with a 1-homomorphism of braided algebras f:A → B in 𝔅.
It follows immediately from remark <ref> that the data of a braided algebra in 𝐌𝐨𝐝^loc_𝔅(A) gives a braided algebra in 𝔅 equipped with a 1-homomorphism of braided algebras from A. Conversely, let B be any braided algebra in 𝔅 equipped with a 1-homomorphism of braided algebras f:A → B. Then, the braiding on B yields a canonical holonomy on B viewed as a right A-module. Further, it follows from the 2-universal property of the relative tensor product over A that the multiplication 1-morphism m^B:B B→ B factors as
B B ⟶ B^+_A B ⟶ B, where we denote the second 1-morphism by m̂.
It is not difficult to check that m̂ is compatible with the canonical holonomy on B; the remaining data as well as the coherence conditions for a braided algebra in 𝐌𝐨𝐝^loc_𝔅(A) follow from those of B via the 2-universal property of ^+_A. This gives the desired result.
Let us now fix a 1-homomorphism of braided algebras f:A → B in 𝔅, and assume that the relative tensor products over both A and B exist in 𝔅 and commute with the monoidal structure. Using the underlying 1-homomorphism of algebras f:A→ B, we can view B as an algebra in 𝐁𝐢𝐦𝐨𝐝_𝔅(A). Our hypotheses guarantee that the relative tensor product over B exists in 𝐁𝐢𝐦𝐨𝐝_𝔅(A), and is in fact given by the relative tensor product over B in 𝔅 (see corollary 3.2.12 of <cit.>). More precisely, the canonical lax monoidal 2-functor
𝐁𝐢𝐦𝐨𝐝_𝐁𝐢𝐦𝐨𝐝_𝔅(A)(B)→𝐁𝐢𝐦𝐨𝐝_𝔅(B)
is in fact strongly monoidal as well as an equivalence. In particular, we find that the canonical lax monoidal 2-functor
𝐌𝐨𝐝^+_𝐌𝐨𝐝^+_𝔅(A)(B)→𝐌𝐨𝐝^+_𝔅(B)
is a (strongly) monoidal equivalence. Then, it is also natural to ask how the 2-category of local B-modules in 𝔅 and in 𝐌𝐨𝐝^loc_𝔅(A) compare. We will see some applications of this result in the next section.
There is an equivalence of braided monoidal 2-categories
𝐌𝐨𝐝^loc_𝐌𝐨𝐝^loc_𝔅(A)(B) ≃𝐌𝐨𝐝^loc_𝔅(B).
Note that there is a lax braided monoidal forgetful 2-functor
𝐕:𝐌𝐨𝐝^loc_𝐌𝐨𝐝^loc_𝔅(A)(B) →𝐌𝐨𝐝^loc_𝔅(B).
One way to see this is to recall from remark <ref> that the forgetful 2-functor 𝐔:𝐌𝐨𝐝^loc_𝔅(A)→𝔅 has a lax braided monoidal structure. More precisely, in the notation of definition 2.5 of <cit.>, we have strong 2-natural transformations χ^𝐔 and ι^𝐔, which are not necessarily equivalences, and invertible modifications ω^𝐔, γ^𝐔, δ^𝐔, and u^𝐔. Using f:A→ B, we may view B as a braided algebra in 𝐌𝐨𝐝^loc_𝔅(A), and we manifestly have 𝐔(B)=B. Then, we can take local modules over B in both 𝐌𝐨𝐝^loc_𝔅(A) and 𝔅. But, taking local modules is functorial, so that the lax braided monoidal 2-functor 𝐔 induces the lax braided monoidal 2-functor 𝐕 upon taking local modules over B.
We now show that 𝐕 induces an equivalence of 2-categories. Namely, by inspecting the definitions, we find that 𝐕 is fully faithful on 2-morphisms. Then, it is not difficult to check that it is essentially surjective on 1-morphisms. The fact that 𝐕 is essentially surjective on objects follows from a slight generalization of the argument used in the proof of the lemma above. We leave the details to the keen reader.
It remains to check that 𝐕 is compatible with the braided monoidal structures. As the 2-functor 𝐕 inherits a canonical lax braided monoidal structure from 𝐔, it is enough to show that this lax structure is actually strong. In fact, as all the modifications witnessing the coherence of 𝐔 are invertible, this is also true of the modifcations witnessing the coherence of 𝐕. Thus, it is enough to show that the (necessarily strong) 2-natural transformations χ^𝐕 and ι^𝐕 are equivalences. To see this, note that there is a commutative square of lax monoidal 2-functors
𝐌𝐨𝐝^loc_𝐌𝐨𝐝^loc_𝔅(A)(B) ──𝐕──→ 𝐌𝐨𝐝^loc_𝔅(B)
            │                               │
            ↓                               ↓
𝐌𝐨𝐝^+_𝐌𝐨𝐝^+_𝔅(A)(B) ──≃──→ 𝐌𝐨𝐝^+_𝔅(B).
But, it follows from the discussion preceding the present lemma that the bottom horizontal arrow is strongly monoidal and an equivalence. Further, we have seen in remark <ref> that the vertical arrows are strongly monoidal, conservative on 1-morphisms, and fully faithful on 2-morphisms. This proves that χ^𝐕 and ι^𝐕 are 2-natural equivalences, so that the lax braided monoidal structure of 𝐕 is actually strong as desired.
§.§ Lagrangian Algebras
We work over an algebraically closed field of characteristic zero, and fix a braided fusion 2-category 𝔅.
A Lagrangian algebra in 𝔅 is a connected étale algebra B in 𝔅 such that 𝐌𝐨𝐝^loc_𝔅(B) ≃2𝐕𝐞𝐜𝐭.
A Lagrangian algebra in 2𝐕𝐞𝐜𝐭 is a non-degenerate braided fusion 1-category, as can be seen from proposition <ref> and proposition 4.17 of <cit.>.
It follows from the previous example that the property of being Lagrangian for a connected étale algebra is a categorical non-degeneracy condition.
Recall that connected étale algebras in 2𝐕𝐞𝐜𝐭 are braided fusion 1-categories. Further, given a braided fusion 1-category ℬ, we have seen in corollary <ref> that 𝐌𝐨𝐝^loc(ℬ) ≃𝒵(𝐌𝐨𝐝(ℬ)). The next corollary therefore follows by proposition <ref>.
Let ℬ be any braided fusion 1-category. Lagrangian algebras in 𝒵(𝐌𝐨𝐝(ℬ)) correspond exactly to non-degenerate braided fusion 1-categories 𝒞 equipped with a braided monoidal functor ℬ→𝒞.
Another notion of Lagrangian algebra was introduced in section 2.3 of <cit.>. We recall their definition below, and use the name alter-Lagrangian algebras to refer to such objects.
An alter-Lagrangian algebra in a braided fusion 2-category 𝔅 is an étale algebra B in 𝔅 satisfying:
a. It is strongly connected, i.e. the 1-morphism i:I→ B is the inclusion of a simple summand.
b. Its Müger center is trivial, i.e. the fusion 1-category Hom_B^loc(B,B) of local right B-module 1-morphisms B→ B, is trivial.
As explained in remark 2.27 of <cit.>, the condition of being strongly connected in the definition of an alter-Lagrangian algebra is simply too strong. This is why we have only insisted that Lagrangian algebras are connected. For instance, let G be a finite group, and consider the braided fusion 2-category 𝔅=𝒵(𝐌𝐨𝐝(𝐑𝐞𝐩(G))). We have seen that Lagrangian algebras in 𝔅 correspond exactly to braided fusion 1-categories 𝒞 equipped with a braided functor F:𝐑𝐞𝐩(G)→𝒞. The corresponding Lagrangian algebra is strongly connected if and only if F is fully faithful, in which case F is an inclusion. In particular, 𝐕𝐞𝐜𝐭 equipped with the forgetful functor 𝐑𝐞𝐩(G)→𝐕𝐞𝐜𝐭 defines a Lagrangian algebra in 𝒵(𝐌𝐨𝐝(𝐑𝐞𝐩(G))) that is not strongly connected, and therefore not an alter-Lagrangian algebra.
Let ℬ be a braided fusion 1-category. Then, every alter-Lagrangian algebra in 𝒵(𝐌𝐨𝐝(ℬ)) is Lagrangian.
Let 𝒞 be a braided fusion 1-category equipped with a fully faithful braided monoidal functor F:ℬ→𝒞. Then, it follows from proposition 2.28 of <cit.> that its Müger center 𝒵_(2)(𝒞) is trivial, i.e. we have 𝒵_(2)(𝒞)≃𝐕𝐞𝐜𝐭, or, equivalently, 𝒞 is non-degenerate. The result then follows from proposition <ref>.
In fact, as was already noted in remark 2.27 of <cit.>, the above proposition should hold for any non-degenerate braided fusion 2-category 𝔅, that is, braided fusion 2-category with trivial sylleptic center in the sense of <cit.>. However, we point out that this is not true for an arbitrary braided fusion 2-category, as can be seen from the example below.
Let 𝔅:=2𝐕𝐞𝐜𝐭_A be the braided fusion 2-category of A-graded 2-vector spaces for some finite abelian group A. Then, alter-Lagrangian algebras in 2𝐕𝐞𝐜𝐭_A are A-graded braided fusion 1-categories whose 0-graded part is a non-degenerate braided fusion 1-category. In particular, we can view any non-degenerate braided fusion 1-category ℬ as a connected étale algebra B in 2𝐕𝐞𝐜𝐭_A, and we have 𝐌𝐨𝐝^loc_2𝐕𝐞𝐜𝐭_A(B)≃2𝐕𝐞𝐜𝐭_A. Hence, B is not a Lagrangian algebra in 2𝐕𝐞𝐜𝐭_A.
§.§.§ Lagrangian Algebras in Physics
We would now like to discuss the significance of Lagrangian algebras in Physics. Using some bootstrap analysis <cit.>, (3+1)-dimensional topological phases of matter are believed to be characterized by non-degenerate braided fusion 2-categories, which can be thought of as a collection of topological excitations of all codimensions <cit.> or low-energy topological sectors of observables <cit.> in the given quantum many-body systems. It is then natural to ask what mathematical structure corresponds to the (2+1)-dimensional topological boundary conditions of such a (3+1)d topological phase in the bulk.[We emphasize that we require that a topological boundary satisfy a boundary-bulk correspondence <cit.>, i.e. that the Drinfeld center of the boundary is equivalent to the bulk. In other words, the condensed bulk phase consisting of deconfined particles is trivial.] Let us write ℨ for the non-degenerate braided fusion 2-category 𝒵(ℭ) for some fusion 2-category ℭ. Generalizing ideas from <cit.>, we find that the (2+1)d topological boundary conditions for the (3+1)d phase corresponding to ℨ are given by Lagrangian algebras in ℨ. In fact, given a Lagrangian algebra A in ℨ, the fusion 2-category 𝐌𝐨𝐝_ℨ(A) describes the collection of topological excitations on the boundary, and the canonical braided 2-functor ℨ→𝒵(𝐌𝐨𝐝_ℨ(A)) encodes the interaction between the bulk and the boundary.
To examine the validity of this assertion, it is instructive to consider the case when the bulk is the trivial (3+1)d topological phase, mathematically described by 2𝐕𝐞𝐜𝐭. By arguments from <cit.>, the bulk phase controls all gravitational anomalies of its topological boundary conditions. Thus (2+1)d topological boundary conditions for the trivial (3+1)d topological phase are nothing else but (2+1)d topological phases, which are classified by non-degenerate braided fusion 1-categories as established in <cit.> for point-like excitations and in <cit.> including all string-like excitations. Meanwhile, by corollary <ref>, we see that Lagrangian algebras in 2𝐕𝐞𝐜𝐭 are exactly non-degenerate braided fusion 1-categories. Here mathematics matches perfectly with the physical intuition.
In the case of the (3+1)d toric code model <cit.>, explicit computations were carried out using a microscopic realization on a 3d cubical lattice (see also <cit.>). Microscopically, they discovered three Lagrangian algebras A_e,A_1,A_2 in 𝐓𝐂 := 𝒵(2𝐕𝐞𝐜𝐭_ℤ/2), corresponding to a rough boundary condition 𝐌𝐨𝐝_𝐓𝐂(A_e), a smooth boundary condition 𝐌𝐨𝐝_𝐓𝐂(A_1) and a twisted smooth boundary condition 𝐌𝐨𝐝_𝐓𝐂(A_2). We note that 𝐌𝐨𝐝_𝐓𝐂(A_e)≃2𝐕𝐞𝐜𝐭_ℤ/2, and 𝐌𝐨𝐝_𝐓𝐂(A_2)≃𝐌𝐨𝐝_𝐓𝐂(A_1)≃2𝐑𝐞𝐩(ℤ/2). However, the boundaries provided by A_1 and A_2 are distinct as the braided 2-functors 𝐓𝐂→𝒵(2𝐑𝐞𝐩(ℤ/2)) are distinct. From these three elementary boundary conditions, one can construct infinitely many others by stacking with an anomaly-free (2+1)d topological phase. In mathematical language, this means that, given a Lagrangian algebra L in ℨ, and a Lagrangian algebra A in 2𝐕𝐞𝐜𝐭, that is a non-degenerate braided fusion 1-category, we obtain a new Lagrangian algebra L ⊠ A in ℨ⊠2𝐕𝐞𝐜𝐭≃ℨ.
At this point, it is natural to ask whether the above construction exhausts all the possible Lagrangian algebras in the (3+1)d toric code model. As a first step towards answering this question, let us fix a finite group G, and consider the braided equivalences between Drinfeld centers depicted in figure <ref>. The two braided fusion 2-categories 𝒵(2𝐕𝐞𝐜𝐭_G) and 𝒵(2𝐑𝐞𝐩(G)) are both modelled by the 2-category of finite semisimple G-crossed 1-categories <cit.>, and this is witnessed by the Morita equivalence between fusion 2-categories 2𝐕𝐞𝐜𝐭_G and 2𝐑𝐞𝐩(G) <cit.>. The equivalence between the braided fusion 2-categories 𝒵(2𝐑𝐞𝐩(G)) and 𝒵(𝐌𝐨𝐝(𝐑𝐞𝐩(G))) is induced by an equivalence of symmetric fusion 2-categories between 2𝐑𝐞𝐩(G) and 𝐌𝐨𝐝(𝐑𝐞𝐩(G)). More precisely, this equivalence is implemented by equivariantization for finite semisimple 1-categories with a G-action <cit.>. This procedure can be reversed via de-equivariantization, so no information is lost in these processes. In physical language, equivariantization corresponds to gauging a G-symmetry on a system, thereby obtaining an equivalent system, which is now equipped with the dual symmetry 𝐑𝐞𝐩(G). If G is abelian, the dual symmetry is given by 𝐑𝐞𝐩(G)≃𝐕𝐞𝐜𝐭_G, which is invertible, but this is not the case in general.
Furthermore, the equivalence 𝒵(2𝐑𝐞𝐩(G))≃𝒵(𝐌𝐨𝐝(𝐑𝐞𝐩(G))) is realized by equivariantization for finite semisimple G-crossed 1-categories, which produces a finite semisimple braided 𝐌𝐨𝐝(𝐑𝐞𝐩(G))-module 1-category. This process may also be reversed via de-equivariantization. In terms of étale algebras, we obtain an equivalence between G-crossed braided multifusion 1-categories and braided multifusion 1-categories equipped with a braided functor from 𝐑𝐞𝐩(G). This correspondence is well-known in the theory of braided fusion 1-categories <cit.>.
Then, thanks to corollary <ref>, Lagrangian algebras in 𝒵(𝐌𝐨𝐝(𝐑𝐞𝐩(G))) are non-degenerate braided fusion 1-categories equipped with a braided functor from 𝐑𝐞𝐩(G). Fixing such a 1-category ℬ, notice that the braided functor 𝐑𝐞𝐩(G) →ℬ must factorize as
𝐑𝐞𝐩(G) ↠𝐑𝐞𝐩(H) ↪ℬ
for some subgroup H⊆ G (determined up to conjugation). More precisely, the first braided functor is dominant or surjective, whereas the second is fully faithful or injective. In particular, we can view ℬ as a non-degenerate extension of 𝐑𝐞𝐩(H).
For example, as explained in remark 4.1 of <cit.>, in the (3+1)d toric code model 𝐓𝐂, i.e. with G=ℤ/2, we find that the three Lagrangian algebras A_e, A_1, A_2 can be described as follows:
* The Lagrangian algebra A_e corresponds to the forgetful functor 𝐑𝐞𝐩(ℤ_2) ↠𝐕𝐞𝐜𝐭.
* The Lagrangian algebra A_1 corresponds to the minimal non-degenerate extension 𝐑𝐞𝐩(ℤ_2) ↪𝒵(𝐕𝐞𝐜𝐭_ℤ/2).
* The Lagrangian algebra A_2 corresponds to the minimal non-degenerate extension 𝐑𝐞𝐩(ℤ_2) ↪𝒵(𝐕𝐞𝐜𝐭^ω_ℤ/2), where ω is a cocycle representing the non-trivial element in H^3(ℤ/2;ℂ^×) ≃ℤ/2.
More generally, given a minimal non-degenerate extension 𝐑𝐞𝐩(G) ↪ℳ viewed as a Lagrangian algebra in 𝒵(𝐌𝐨𝐝(𝐑𝐞𝐩(G))), and a non-degenerate braided fusion 1-category 𝒜 viewed as a Lagrangian algebra in 2𝐕𝐞𝐜𝐭, we can consider the Lagrangian algebra in 𝒵(𝐌𝐨𝐝(𝐑𝐞𝐩(G))) given by 𝐑𝐞𝐩(G) ↪ℳ⊠𝒜. It is clear that not all non-degenerate extensions of 𝐑𝐞𝐩(G) are of this form.
In section 4 of <cit.>, it is proposed that all possible (2+1)d topological boundary conditions for the (3+1)d toric code model are given by first stacking the rough or smooth boundary with an anomaly-free (2+1)d topological order, and then introducing a twist or coupling between them, thereby obtaining a new boundary of 𝐓𝐂 from its condensation. Meanwhile, using the theory of Lagrangian algebras we have developed, these boundary conditions correspond to de-equivariantizations of either non-degenerate extensions 𝐑𝐞𝐩(ℤ_2) ↪ℳ, or the forgetful functor 𝐑𝐞𝐩(ℤ_2) ↠𝐕𝐞𝐜𝐭 stacked with a non-degenerate braided fusion 1-category ℬ. In the second case, the boundary condition can always be obtained by stacking the Lagrangian algebra A_e in 𝐓𝐂 with a non-degenerate braided fusion 1-category. However, in the first case, this is not true. We believe it is an interesting question for physicists to explicitly realize the correspondence between Lagrangian algebras in 𝒵(𝐌𝐨𝐝(𝐑𝐞𝐩(G))) and topological boundary conditions of 𝐓𝐂 in the microscopic lattice model.
§ APPENDICES
§.§ Monoidality of the Dual 2-Functor
Let ℭ be a monoidal 2-category, which we will assume without loss of generality is strict cubical. Let us suppose that ℭ has right duals, that is, for every object C of ℭ, there exists an object C^♯ of ℭ together with 1-morphisms i_C:I→ C^♯ C, and e_C:C C^♯→ I that satisfy the snake equations up to 2-isomorphisms. In this case, it follows from corollary 2.8 of <cit.> that every object admits a coherent right dual, that is, there exist 2-isomorphisms
E_C:(e_C C)∘ (C i_C)⇒ Id_C,
F_C:(C^♯ e_C)∘ (i_C C^♯)⇒ Id_C^♯
such that
[two string-diagram equations omitted: the coherence identities satisfied by E_C and F_C].
It then follows from appendix A.2 of <cit.> that there is a 2-functor (-)^♯:ℭ→ℭ^1op that sends an object to its right dual. Here, we use ℭ^1op to denote the 2-category obtained from ℭ by reversing the direction of the 1-morphisms. We recall the construction of the dual 2-functor in more detail. For any object C of ℭ, we fix a coherent right dual (C, C^♯,i_C,e_C,E_C,F_C). Then, the 2-functor (-)^♯ sends the object C of ℭ to C^♯. Next, a 1-morphism f:C→ D is sent via (-)^♯ to the 1-morphism in ℭ given by
f^♯:=(C^♯ e_D)∘ (C^♯ f D^♯)∘ (i_C D^♯):D^♯→ C^♯,
and viewed as a 1-morphism in ℭ^1op. Further, a 2-morphism α:f⇒ g:C⇉ D is sent by (-)^♯ to
(C^♯ e_D)∘ (C^♯α D^♯)∘ (i_C D^♯):f^♯⇒ g^♯.
Finally, the coherence 2-isomorphism witnessing unitality for the 2-functor (-)^♯ is given on the object C in ℭ by
ϕ^♯_C:=D_C,
and the coherence 2-isomorphisms witnessing that the 2-functor (-)^♯ is compatible with composition of 1-morphisms is given on f:B→ C and g:C→ D by
ϕ^♯_f,g := [string diagram omitted].
The result below positively answers a question raised in section 1.2 of <cit.>. We use ℭ^mop,1op to denote the monoidal 2-category obtained from ℭ by both taking the opposite monoidal product and reversing the direction of 1-morphisms.
Let ℭ be a monoidal 2-category that has right duals. Then, the 2-functor (-)^♯:ℭ→ℭ^mop, 1op sending an object to its right dual admits a canonical monoidal structure.
We note that, as ℭ is strict cubical, then ℭ^mop, 1op is strict cubical. We follow the notations of <cit.> for the coherence data for (-)^♯. We begin by constructing a 1-equivalence ι witnessing that (-)^♯ is compatible with the monoidal units. Without loss of generality, we will assume that the chosen coherent right dual to I is I with the identity 1-morphisms and 2-morphisms. In particular, we can take ι to be the identity 2-natural transformation. We also construct the 2-natural equivalence χ witnessing that (-)^♯ is compatible with the monoidal structures. More precisely, given objects C and D in ℭ, we define a 1-morphism (C D)^♯→ D^♯ C^♯ in ℭ by
χ_C,D:=(D^♯ C^♯ e_C D)∘ (D^♯ i_C D (C D)^♯)∘ (i_D (C D)^♯),
which, when viewed as a 1-morphism in ℭ^mop,1op, provides the underlying 1-morphisms of the requisite coherence 2-natural transformation. The 2-naturality of χ is given on the 1-morphisms f:A→ B and g:C→ D in ℭ by
χ_f,g := [string diagram omitted].
We now define three invertible modifications ω, γ, and δ. Given any objects C and D of ℭ, we set
γ_D:=F_D:χ_I,D⇒ Id_C, δ_C:=F_C^-1:Id_C⇒χ_C,I.
Finally, given any objects B, C, and D in ℭ, we define
ω_B,C,D := [string diagram omitted].
It is easy to check that these invertible modifications satisfy the coherence conditions depicted in equations (HTA1) and (HTA2) of <cit.>.
Let ℭ and 𝔇 be monoidal 2-categories that have right duals, and let F:ℭ→𝔇 be a monoidal 2-functor between them. There is a canonical monoidal 2-natural equivalence that witnesses the commutativity of the square
ℭ ──(-)^♯──→ ℭ^mop,1op
│ F               │ F^mop,1op
↓                 ↓
𝔇 ──(-)^♯──→ 𝔇^mop,1op.
For any C in ℭ, the equivalence in 𝔇 witnessing the commutativity of the above square is given by
(F(C)^♯ F(e_C))∘(F(C)^♯χ^F_C,C^♯)∘ (i_F(C) F(C^♯)):F(C^♯)→ F(C)^♯.
The remainder of the proof uses the same ideas as that of the previous proposition, so we leave the remaining details to the keen reader.
§.§ Diagrams for the proof of proposition <ref>
entry_id: http://arxiv.org/abs/2307.01761v1
published: 20230704150408
title: Démélange, déconvolution et débruitage conjoints d'un modèle convolutif parcimonieux avec dérive instrumentale, par pénalisation de rapports de normes ou quasi-normes lissées (PENDANTSS)
authors: Paul Zheng, Emilie Chouzenoux, Laurent Duval
primary_category: eess.SP
categories: eess.SP
Joint unmixing, deconvolution and denoising of a sparse convolutional model with instrumental drift, via smoothed norm- or quasi-norm-ratio penalization (PENDANTSS)

Paul Zheng, paul.zheng@inda.rwth-aachen.de (1)
Emilie Chouzenoux, emilie.chouzenoux@centralesupelec.fr (2)
Laurent Duval, laurent.duval@ifpen.fr (3)
(1) Chair of Information Theory and Data Analytics, RWTH Aachen University, Germany
(2) Université Paris-Saclay, CentraleSupélec, CVN, Inria, 91190 Gif-sur-Yvette, France
(3) IFP Energies nouvelles, 92852 Rueil-Malmaison Cedex, France

Denoising, trend filtering, and deconvolution are traditionally handled as decoupled tasks. Joint formulations often turn out to be complicated, ill-posed inverse problems. We propose PENDANTSS for joint trend isolation and deconvolution of sparse, peak-shaped signals. It merges a sparsity prior with the assumption that a smooth trend and the measurement noise can be separated by low-pass filtering. To this end, we combine the ternary-assisted source separation algorithm BEADS with the sparsity-promoting SOOT/SPOQ penalties based on ratios of smoothed ℓ_p/ℓ_q norms or quasi-norms. A novel efficient algorithm is proposed, of block-alternating variable metric forward-backward type with trust region. It proves superior to competing approaches on standard metrics applied to analytical chemistry signals. The associated code is made available at: <https://github.com/paulzhengfr/PENDANTSS>.

Denoising, detrending, deconvolution: usual restoration tasks, traditionally decoupled. Coupled formulations entail complex ill-posed inverse problems. We propose PENDANTSS for joint trend removal and blind deconvolution of sparse peak-like signals. It blends a parsimonious prior with the hypothesis that smooth trend and noise can somewhat be separated by low-pass filtering. We combine the generalized pseudo-norm ratio SOOT/SPOQ sparse penalties ℓ_p/ℓ_q with the BEADS ternary assisted source separation algorithm. This results in both a convergent and efficient tool, with a novel trust-region block-alternating variable metric forward-backward approach. It outperforms comparable methods when applied to typically peaked analytical chemistry signals. Reproducible code is provided: <https://github.com/paulzhengfr/PENDANTSS>.
§ CONTEXT

This work builds on earlier contributions <cit.>. Recently published in a journal <cit.>, it is submitted here to a conference for the first time.

We consider the following discrete signal formation model:

y = π ∗ s + t + ε.

It aims at identifying three components: 1) a sparse spike train s ∈ ℝ^N, 2) a peak-shaped convolution kernel π ∈ ℝ^L, and 3) a trend component t ∈ ℝ^N with relatively slow variations, from a single observation y corrupted by a noise ε ∈ ℝ^N.

This model covers a common class of potentially multidimensional data, either in their native domain or after a sparsity-promoting transform <cit.>. This work focuses on the one-dimensional case.

It is reminiscent of the spectral subtraction problem (in speech analysis), which aims at separating harmonic components (peaks in the Fourier domain <cit.>) from a background t + ε. It also arises in biomedical analysis (ECG, EEG, EMG) or in astronomy, where the signals x = π ∗ s may be called spectral lines. The trend component t gathers slow fluctuations that modify the baseline (offset) of the measurements on x. It may correspond to seasonal effects, to instrumental drifts caused by sensor aging <cit.>, or to calibration variations. Since such trends are rarely or poorly modeled (in particular by parametric models), automatically filtering them out without altering the peaks is often difficult.

This model is very common in physicochemical analysis (chromatography, spectrometry, spectroscopy), where x takes the form of a mixture of positive peaks with narrow support (Gaussian, Lorentzian, pseudo-Voigt functions). The trend component is also called baseline, continuum, excursion, background…

Joint denoising and trend removal belong to a class of standard time-series preprocessing steps <cit.>, relying on filtering, parametric regression, or inpainting. For the analysis of physicochemical data, we refer to the backcor <cit.> and BEADS <cit.> methods.

For combined denoising and deconvolution, let us mention in particular <cit.> for sparsity-promoting approaches. We focus here on the SOOT <cit.> and SPOQ <cit.> methods, which employ ratios of smoothed norms and quasi-norms, yielding a penalty with an approximate scale-invariance property.

To solve problem (<ref>), we propose a nonconvex joint formulation (Section <ref>). We present an efficient separation algorithm based on forward-backward methods <cit.>, with convergence guarantees (Section <ref>). This algorithm is evaluated, in the experimental setting described in Section <ref>, against combinations of backcor <cit.> and SOOT/SPOQ <cit.>, for different noise levels and degrees of sparsity promotion (Section <ref>).
§ ASSUMPTIONS FOR SOLVING THE JOINT UNMIXING PROBLEM

Equation (<ref>) involves several unknowns, and attempting to solve it requires additional assumptions.

To couple the different tasks, we pair the traditional quadratic loss with a combined regularization that incorporates prior assumptions. Although the PENDANTSS framework could be more generic, we focus here on a class of signals observed in physicochemical analysis.

Let ι_A denote the indicator function of the nonempty convex set A, equal to zero if its argument belongs to A, and to +∞ otherwise. The positivity of the peaks and of the kernel, together with the unit normalization of the latter, first leads to the sets C_1 = [0,+∞[^N and C_2 = 𝒮 = {π = (π_ℓ)_1 ≤ℓ≤ L∈ [0,+∞[^L s.t. ∑_ℓ=1^L π_ℓ = 1}, which restrict (through their indicator functions) the search space for the sparse signal and the kernel.

We further assume that the trend varies slowly compared to the noise, so that a low-pass filter should provide a reasonable estimate of it. In other words, assuming an estimate of the peak signal that can be subtracted, the residual noise to be minimized through the quadratic data-fidelity term is expressed by means of a high-pass filter H:

(∀ s∈ℝ^N) (∀ π∈ℝ^L) ρ(s,π) = 1/2‖H(y - s ∗ π)‖^2.

This function is Lipschitz differentiable with respect to s (resp. π), with a constant denoted Λ_1(π) (resp. Λ_2(s)).

Sparsity of the signal s is promoted by the regularization Ψ defined (for β∈]0,+∞[) by the nonconvex function:

(∀ s∈ℝ^N) Ψ(s) = log((ℓ_p,α^p(s) + β^p)^1/p / ℓ_q,η(s)),

with the two parametric approximations of norms or quasi-norms (with parameters (α, η)∈]0,+∞[):

ℓ_p,α(s) = ( ∑_n=1^N((s_n^2 + α^2)^p/2-α^p) )^1/p,

and

ℓ_q,η(s) = ( η^q + ∑_n=1^N |s_n|^q)^1/q.
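As a numerical illustration (a minimal Python sketch, not the authors' Matlab implementation; the values of β and η below are placeholders, while α matches the constant used later in the experiments):

import numpy as np

def spoq_penalty(s, p=0.75, q=10.0, alpha=7e-7, beta=1e-3, eta=1e-1):
    # Psi(s) = log( (l_{p,alpha}^p(s) + beta^p)^(1/p) / l_{q,eta}(s) )
    lp_p = np.sum((s**2 + alpha**2)**(p/2) - alpha**p)   # l_{p,alpha}^p(s)
    lq = (eta**q + np.sum(np.abs(s)**q))**(1.0/q)        # l_{q,eta}(s)
    return np.log((lp_p + beta**p)**(1.0/p) / lq)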
If q>2, or q = 2 and η^2α^p-2 > β^p (which we assume from now on), then Ψ is Lipschitz differentiable on ℝ^N and 0_N (i.e., the identically zero vector of size N) is a local minimizer of it. The solution pair (ŝ,π̂) minimizes:

(∀ s∈ℝ^N) (∀ π∈ℝ^L) Ω(s,π) = f(s,π) + g(s,π),

where we define

(∀ s∈ℝ^N) (∀ π∈ℝ^L)
g(s,π) = ι_C_1(s) + ι_C_2(π),
f(s,π) = ρ(s,π) + λΨ(s).

The trend is finally estimated from:

t̂ = (𝐈𝐝_N - H)(y - ŝ ∗ π̂),

with 𝐈𝐝_N the identity matrix of ℝ^N.
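In practice (a sketch assuming an ideal FFT-based high-pass filter H with cutoff f_c; the actual filter design used in the paper may differ), the data-fidelity term and the final trend estimate can be computed as follows:

import numpy as np

def residual_and_trend(y, s, pi_kernel, fc):
    # r = y - s * pi (zero-padded convolution)
    r = y - np.convolve(s, pi_kernel, mode="same")
    R = np.fft.rfft(r)
    low = np.fft.rfftfreq(r.size) < fc
    R_high, R_low = R.copy(), R.copy()
    R_high[low] = 0.0                      # H r: high-pass part of the residual
    R_low[~low] = 0.0                      # (Id - H) r: low-pass part, i.e. the trend estimate
    rho = 0.5 * np.sum(np.fft.irfft(R_high, n=r.size)**2)
    t_hat = np.fft.irfft(R_low, n=r.size)
    return rho, t_hat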
§ THE PENDANTSS ALGORITHM

The structure of (<ref>) suggests a block-alternating method, sequentially updating the spike sequence s and the kernel π. PENDANTSS relies on the TR-BC-VMFB algorithm (Alg. <ref>), which generalizes the BC-VMFB algorithm <cit.> used in <cit.> for blind deconvolution.

Let (s_k,π_k) be the estimates at iteration k ∈ℕ. The update s_k+1 is obtained through a VMFB step <cit.>, accelerated by a trust-region scheme.

We first introduce the MM (majorization-minimization) metric:

A_1,ρ(s_k,π_k) = (Λ_1(π_k) + λχ_q,ρ) 𝐈𝐝_N + λ/(ℓ_p,α^p(s_k) + β^p) Diag((s_n,k^2+α^2)^p/2-1)_1≤ n≤ N,

with the constant χ_q,ρ = (q-1)/(η^q+ρ^q)^2/q. A majorization of (<ref>) with respect to the variable s is then built (see <cit.>):

(∀ s∈ℬ̅_q,ρ∩ C_1) Ω(s,π_k) ≤ f(s_k,π_k) + (s - s_k)^⊤∇_1 f(s_k,π_k) + 1/2‖s - s_k‖^2_A_1,ρ(s_k,π_k),

with, for any z∈ℝ^N, ‖z‖_A = (z^⊤Az)^1/2. The validity domain of (<ref>) is restricted to the complement of the ℓ_q ball,

ℬ̅_q,ρ = {s=(s_n)_1 ≤ n ≤ N∈ℝ^N | ∑_n=1^N |s_n|^q ≥ρ^q }.

We therefore introduce a trust-region (TR) scheme <cit.> to control the domain of the iterates.

Let ℐ > 0 be a maximum number of trust-region tests, and (ρ_k,i)_1 ≤ i ≤ℐ a list of tested radii:

ρ_k,i = ∑_n=1^N|s_n,k|^q if i = 1,
θρ_k,i-1 if 2≤ i≤ℐ-1,
0 if i = ℐ.

We then compute the associated MM matrix A_1,ρ_k,i(s_k,π_k) and define s_k,i as the minimizer of the right-hand side of (<ref>). The TR loop stops as soon as s_k,i∈ℬ̅_q,ρ_k,i, which then defines s_k+1. In general, the minimization of the majorant (<ref>) admits no closed-form solution. Nevertheless, thanks to our choice of C_1 and the diagonal structure of (<ref>), the solution is explicit, following <cit.> and <cit.>:

(∀ i ∈{1,…,ℐ})
s_k,i = Proj_C_1( s_k - γ_s,k A_1,ρ_k,i(s_k,π_k)^-1∇_1 f(s_k,π_k) ).
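The inner s-update can be sketched as follows (assumptions: grad_s holds ∇_1 f(s_k,π_k), Lip1 is Λ_1(π_k), and the initial radius is one natural reading of the radius list above; this is an illustrative pass, not the full Algorithm <ref>):

import numpy as np

def s_update(s_k, grad_s, Lip1, lam, p, q, alpha, beta, eta,
             gamma=1.9, theta=0.5, I_max=50):
    lp_p = np.sum((s_k**2 + alpha**2)**(p/2) - alpha**p)
    rho = np.sum(np.abs(s_k)**q)**(1.0/q)              # initial trust-region radius
    for i in range(I_max):
        chi = (q - 1.0) / (eta**q + rho**q)**(2.0/q)   # constant chi_{q,rho}
        # diagonal of the MM metric A_{1,rho}(s_k, pi_k)
        diag_A = (Lip1 + lam*chi) + lam/(lp_p + beta**p) * (s_k**2 + alpha**2)**(p/2 - 1.0)
        s_new = np.maximum(0.0, s_k - gamma * grad_s / diag_A)   # Proj_{C_1}
        if np.sum(np.abs(s_new)**q) >= rho**q:         # s_new outside the l_q ball: accept
            return s_new
        rho *= theta                                   # shrink the radius and retry
    return s_new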
The kernel update simply takes the form of a projected gradient step:

π_k+1 = Proj_𝒮( π_k - γ_π,k Λ_2(s_k+1)^-1∇_2 f(s_k+1,π_k) ),

with Proj_𝒮 the projection onto the unit simplex, for which fast computational methods exist <cit.>. The method (including the validity ranges of the step sizes (γ_s,k,γ_π,k)_k ∈ℕ) is summarized in Algorithm <ref>, and its convergence is guaranteed.
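For completeness, a standard sort-based projection onto the unit simplex (a sketch in the spirit of the fast methods cited above):

import numpy as np

def proj_simplex(v):
    # Euclidean projection of v onto {x : x >= 0, sum(x) = 1}
    u = np.sort(v)[::-1]                         # sort in decreasing order
    css = np.cumsum(u) - 1.0
    k = np.arange(1, v.size + 1)
    idx = np.nonzero(u - css / k > 0)[0][-1]     # largest k with u_k > (cumsum - 1)/k
    tau = css[idx] / (idx + 1.0)
    return np.maximum(v - tau, 0.0)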
Let (s_k,π_k)_k ∈ℕ be generated by Algorithm <ref>. If C_1 and C_2 are semi-algebraic sets (which holds in our case), then the sequence (s_k, π_k)_k ∈ℕ converges to a critical point (s, π) of (<ref>).
§ EXPERIMENTAL VALIDATION SETTING

We consider the datasets named C and D. The original sparse signals and the observations, of length N = 200, are shown in Figure <ref>. The observations are obtained from (<ref>), with a kernel defined as a normalized Gaussian function with standard deviation 0.15 and truncated support of size L = 21. The noise is white, Gaussian, and zero-mean. Its level σ is set to a varying percentage of the maximum amplitude x_max of x = s ∗ π, the convolution being implemented with zero-padding.
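Such observations can be regenerated along the following lines (a sketch; the spike positions and amplitudes below are placeholders, since the exact datasets C and D are not reproduced here, and the kernel grid is an assumption):

import numpy as np

rng = np.random.default_rng(0)
N, L = 200, 21
grid = np.linspace(-1.0, 1.0, L)
pi_kernel = np.exp(-grid**2 / (2 * 0.15**2))   # truncated Gaussian, std 0.15
pi_kernel /= pi_kernel.sum()                   # unit-sum normalization (C_2)
s = np.zeros(N)
s[[30, 80, 90, 140, 170]] = [1.0, 0.6, 0.9, 0.4, 0.7]
t = 0.3 + 0.2 * np.sin(2 * np.pi * np.arange(N) / N)
x = np.convolve(s, pi_kernel, mode="same")     # x = s * pi, zero-padded
sigma = 0.05 * x.max()                         # noise level: 5% of x_max (illustrative)
y = x + t + sigma * rng.standard_normal(N)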
For simplicity, the parameters of the PENDANTSS algorithm were kept constant: γ_s,k ≡ 1.9 and γ_π,k ≡ 1.9 satisfy the interval constraint. We chose θ = 0.5 as the trust-region radius update, together with a maximum of ℐ = 50 tests. The initialization of all methods follows that proposed in <cit.>: s_0 ∈ C_1 is a constant, positive signal, and π_0 ∈ C_2 is a Gaussian filter of width 1. The stopping criteria are: ε = 10^-6√(N) and K_max = 3000.

The hyperparameters, namely the regularization for backcor <cit.> and those of SPOQ/SOOT (λ, α, β, η), were tuned on a single reference realization, using a composite metric combining the three target components (spike train, kernel, trend): 2SNR_s + SNR_π + SNR_t. Following the same composite criterion, the cutoff frequency f_c of the high-pass filter in (<ref>) is selected as the best choice among the first ten values of the modulus of the frequency spectrum of the signal. Among the hyperparameters, α can easily be kept constant (typically α = 7×10^-7 for our data).

Owing to the classical position ambiguity in unmixing, we post-process the estimated kernel with an integer spatial shift to ensure that it is correctly centered. A grid search determines the number of loops maximizing the SNR_s of the sparse spike train.
§ NUMERICAL RESULTS

We compare the performance of PENDANTSS, in an ablative fashion, against (i) the baseline backcor <cit.> for trend removal, with a grid search to optimize the polynomial order and the threshold, followed by the blind deconvolution proposed in <cit.> to estimate the signal and the kernel, and (ii) the full PENDANTSS pipeline with SPOQ parameters (p,q) = (1,2) (i.e., SOOT) or (p,q) = (0.75,10).

The different estimates are evaluated in terms of signal-to-noise ratio (SNR): sparse signal (SNR_s), kernel (SNR_π), and trend (SNR_t). In particular, SNR_s = 20 log_10(‖s‖_2/‖s - ŝ‖_2). We additionally evaluate the TSNR, corresponding to the same quantity evaluated only on the (assumed known) support of the original sparse signal. This support is unknown in general; this measure is nonetheless useful to quantify the performance on the estimation of ancillary quantities computed on the peaks (amplitude, width, area). Such measures, important in quantitative physicochemical analysis, are sensitive to trend filtering and deconvolution.
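These metrics take a few lines to compute (a sketch; ref denotes the ground-truth component and est its estimate):

import numpy as np

def snr_db(ref, est):
    # SNR = 20 log10( ||ref||_2 / ||ref - est||_2 )
    return 20.0 * np.log10(np.linalg.norm(ref) / np.linalg.norm(ref - est))

def tsnr_db(ref, est):
    # same quantity, restricted to the support of the original sparse signal
    mask = ref != 0
    return snr_db(ref[mask], est[mask])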
The results are summarized in Table <ref>. We report the average quantities and their standard deviations (after the “±” sign) over 30 realizations.

The best and second-best results are marked with two (**) or one (*) stars, respectively. In most situations, PENDANTSS proves superior to the decoupled approaches. Caution remains warranted, however, in view of the sometimes large standard deviations, which is not surprising when evaluating quadratic metrics on signals of sparse nature.
§ CONCLUSION AND PERSPECTIVES

We propose the PENDANTSS algorithm to solve the difficult problem of trend separation combined with sparse blind deconvolution. The method accounts for a smooth, "low-frequency" drift assumption by incorporating it into a blind deconvolution problem, which in turn relies on the recent SOOT/SPOQ norm- or quasi-norm-ratio penalties. The proposed validation indicates a quantitative gain over reference methods for positive sparse signals, as encountered in analytical chemistry.

It remains to extend the validation to broader classes of sparse signals, and to propose more intuitive hyperparameter estimates depending on the nature of the sparse signals under analysis, in particular with respect to separability criteria for peak signals.

Matlab code is made available at <https://github.com/paulzhengfr/PENDANTSS>.

§ ACKNOWLEDGMENT

This work was supported by the ERC (European Research Council) Starting Grant MAJORIS ERC-2019-STG-850925.
S. Barthelme, F. Chatelain, C. Cascales, C. Herrier: Correction de dérive pour l'interférométrie de Mach-Zehnder. In Proc. GRETSI, 2022.
J.-Y. Baudais, M. Leclerc, C. Langrume: Estimation de ligne de base de capteurs d'humectation : intégration et minimums locaux à différentes échelles. In Proc. GRETSI, 2022.
H. H. Bauschke, P. L. Combettes: Convex analysis and monotone operator theory in Hilbert spaces. CMS Books in Mathematics, Springer, 2nd edition, 2011.
S. Becker, M. J. Fadili: A quasi-Newton proximal splitting method. In Proc. Ann. Conf. Neur. Inform. Proc. Syst., vol. 2, pp. 2618–2626, Dec. 3-6, 2012.
S. Boll: Suppression of acoustic noise in speech using spectral subtraction. IEEE Trans. Acoust. Speech Signal Process., 27(2):113–120, Apr. 1979.
J. Bolte, S. Sabach, M. Teboulle: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Programm., 146(1-2):459–494, Aug. 2014.
S. Chaudhuri, R. Velmurugan, R. Rameshan: Blind deconvolution methods: A review. In Blind Image Deconvolution. Methods and Convergence, pp. 37–60. Springer, 2014.
A. Cherni, E. Chouzenoux, L. Duval, J.-C. Pesquet: SPOQ ℓ_p-over-ℓ_q regularization for sparse signal recovery applied to mass spectrometry. IEEE Trans. Signal Process., 68:6070–6084, 2020.
A. Cherni, E. Chouzenoux, L. Duval, J.-C. Pesquet: Forme lissée de rapports de normes ℓ_p/ℓ_q (SPOQ) pour la reconstruction des signaux avec pénalisation parcimonieuse. In Proc. GRETSI, 2019.
E. Chouzenoux, J.-C. Pesquet, A. Repetti: Variable metric forward-backward algorithm for minimizing the sum of a differentiable function and a convex function. J. Optim. Theory Appl., 162(1):107–132, Jul. 2014.
E. Chouzenoux, J.-C. Pesquet, A. Repetti: A block coordinate variable metric forward-backward algorithm. J. Global Optim., 66(3):457–485, Feb. 2016.
L. Condat: Fast projection onto the simplex and the l_1 ball. Math. Programm., 158(1-2):575–585, 2016.
A. R. Conn, N. I. M. Gould, P. L. Toint: Trust-Region Methods. MOS-SIAM Series on Optimization, Society for Industrial Mathematics, 2000.
L. Duval, A. Pirayre, X. Ning, I. Selesnick: Suppression de ligne de base et débruitage de chromatogrammes par pénalisation asymétrique de positivité et dérivées parcimonieuses. In Proc. GRETSI, 2015.
J. Gauthier, L. Duval, J.-C. Pesquet: Optimization of synthesis oversampled complex filter banks. IEEE Trans. Signal Process., 57(10):3827–3843, Oct. 2009.
V. Mazet, C. Carteret, D. Brie, J. Idier, B. Humbert: Background removal from spectra by designing and minimising a non-quadratic cost function. Chemometr. Intell. Lab. Syst., 76(2):121–133, 2005.
X. Ning, I. W. Selesnick, L. Duval: Chromatogram baseline estimation and denoising using sparsity (BEADS). Chemometr. Intell. Lab. Syst., 139:156–167, Dec. 2014.
A. Repetti, M. Q. Pham, L. Duval, E. Chouzenoux, J.-C. Pesquet: Euclid in a taxicab: Sparse blind deconvolution with smoothed ℓ_1/ℓ_2 regularization. IEEE Signal Process. Lett., 22(5):539–543, May 2015.
Q. Sun, D. Donoho: Convex sparse blind deconvolution. Preprint, Jun. 2021. <https://arxiv.org/abs/2106.07053>.
P. Zheng, E. Chouzenoux, L. Duval: PENDANTSS: PEnalized Norm-ratios Disentangling Additive Noise, Trend and Sparse Spikes. IEEE Signal Process. Lett., 30:215–219, 2023.
entry_id: http://arxiv.org/abs/2307.01936v1
published: 20230704214713
title: A quadratically enriched count of rational curves
authors: Jesse Leo Kass, Marc Levine, Jake P. Solomon, Kirsten Wickelgren
primary_category: math.AG
categories: math.AG, math.AT, math.SG; MSC: Primary 14N35, 14F42, Secondary 53D45, 19G38
Current: J. L. Kass, Dept. of Mathematics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, United States of America
jelkass@ucsc.edu
https://www.math.ucsc.edu/people/faculty.php?uid=jelkass
Current: M. Levine, University of Duisburg-Essen, Germany
marc.levine@uni-due.de
https://www.esaga.uni-due.de/marc.levine/
Current: J. P. Solomon, Institute of Mathematics, Hebrew University, Givat Ram Jerusalem, 91904, Israel
jake@math.huji.ac.il
http://www.ma.huji.ac.il/~jake/
Current: K. Wickelgren, Department of Mathematics, Duke University, 120 Science Drive
Room 117 Physics, Box 90320, Durham, NC 27708-0320, USA
kirsten.wickelgren@duke.edu
https://services.math.duke.edu/~kgw/
2020 Mathematics Subject Classification: Primary 14N35, 14F42; Secondary 53D45, 19G38.
We define a quadratically enriched count of rational curves in a given divisor class passing through a collection of points on a del Pezzo surface S of degree ≥ 3 over a perfect field k of characteristic ≠ 2,3. When S is 𝔸^1-connected, the count takes values in the Grothendieck-Witt group GW(k) of quadratic forms over k and depends only on the divisor class and the fields of definition of the points. More generally, the count is a morphism from the sheaf of connected components of tuples of points on S with given fields of definition to the Grothendieck-Witt sheaf. We also treat del Pezzo surfaces of degree 2 under certain conditions. The curve count defined in the present work recovers Gromov-Witten invariants when k = ℂ and Welschinger invariants when k = ℝ.
To obtain an invariant curve count, we define a quadratically enriched degree for an algebraic map f of n-dimensional smooth schemes over a field k under appropriate hypotheses. For example, f can be proper, generically finite and oriented over the complement of a subscheme of codimension 2. This degree is compatible with F. Morel's GW(k)-valued degree of an 𝔸^1-homotopy class of maps between spheres. For k ⊆ℂ, this produces an enrichment of the topological degree of a map between manifolds of the same dimension.
A quadratically enriched count of rational curves
Jesse Leo Kass, Marc Levine, Jake P. Solomon, Kirsten Wickelgren
July 2023
§ INTRODUCTION
§.§ Background
A degree d rational plane curve over C is a map u: P_C^1 →P_C^2 given by u([s:t]) = [u_0(s,t) : u_1(s,t) : u_2(s,t)], where the u_i are homogeneous polynomials of degree d. A dimension count shows that one expects to have finitely many degree d rational plane curves passing through 3d-1 points. For d=1, such curves are the lines through two points. For d=2, they are the conics through five. Over C, the number of such curves, N_d, is independent of the generally chosen points, and the first values are given by
N_1 = 1, N_2 = 1, N_3 = 12, N_4 = 620, N_5 = 87,304, …
The number N_4 = 620 was first computed by Zeuthen <cit.> in 1873. In the early 1990's, building on ideas of Gromov <cit.> and Witten <cit.>, a general approach to curve counting problems was formulated <cit.>, which has come to be known as Gromov-Witten theory. An early success of Gromov-Witten theory was a simple recursive formula giving N_d for d ≥ 5. Another landmark was the virtual enumeration of rational curves on the quintic threefold in agreement with mirror symmetry <cit.>.
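The recursive formula in question is Kontsevich's recursion,

N_d = ∑_d_1+d_2=d N_d_1 N_d_2 d_1^2 d_2 (d_2 C(3d-4, 3d_1-2) - d_1 C(3d-4, 3d_1-1)),

where C(·,·) denotes a binomial coefficient. A few lines of Python (a sketch relying only on this formula) reproduce the values listed above:

from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def N(d):
    # number of degree d rational plane curves through 3d - 1 general points
    if d == 1:
        return 1
    return sum(
        N(d1) * N(d - d1) * d1**2 * (d - d1)
        * ((d - d1) * comb(3*d - 4, 3*d1 - 2) - d1 * comb(3*d - 4, 3*d1 - 1))
        for d1 in range(1, d)
    )

assert [N(d) for d in range(1, 6)] == [1, 1, 12, 620, 87304]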
The power of Gromov-Witten theory stems from the topological interpretation of curve counts as intersection numbers. So, even if general position cannot be achieved, one can still make sense of the curve counts. This can be done either through symplectic or algebraic geometry. Here, we focus on the algebraic approach. Let X be a projective algebraic variety over C of dimension r and let M̅_g,n(X,β) be the space of stable maps u : Σ→ X where Σ is a nodal curve of arithmetic genus g with n marked points p_1,…,p_n, and the degree is u_*[Σ] = β∈ H_2(X;Z). Let ev_i : M̅_g,n(X,β) → X be the evaluation map at the ith marked point, given by (u,Σ,p) ↦ u(p_i). In general M̅_g,n(X,β) is a singular Deligne-Mumford stack and the dimension of irreducible components can vary. However, it admits a virtual fundamental class [M̅_g,n(X,β)] of dimension (1-g)(r-3) + n + ∫_β c_1(TX). The Gromov-Witten invariant counting curves of genus g and degree β passing through cycles representing the Poincaré duals of A_i ∈ H^l_i(X) is defined by
GW_g,β(A_1,…,A_n) = ∫_[M̅_g,n(X,β)] ev_1^*A_1 ∪⋯∪ ev_n^*A_n.
So, in the special case that A_1,…,A_n = pt ∈ H^2r(X) are the Poincaré dual of the point class, the Gromov-Witten invariant GW_g,β(A_1,…,A_n) is the virtual degree of the total evaluation map
ev = ev_1 ×⋯× ev_n : M̅_g,n(X,β) → X^n.
If we take X = P_C^2 and ℓ∈ H_2(X;Z) the class of a line, then N_d = GW_0,dℓ(pt^⊗ n).
Over the real numbers R, it is no longer true that the number of real degree d rational plane curves passing through 3d-1 real points is independent of the general choice of points. For example, there can be 8, 10, or 12 real rational cubics passing through 8 real points <cit.>. However, Degtyarev-Kharlamov <cit.> showed that the number of such cubics with a node where two real branches intersect minus the number with a node where two complex conjugate branches intersect is always 8.
Welschinger showed the invariance of a signed count of rational plane curves over of degree d passing through 3d-1-2m real points and 2m pairs of complex conjugate points. The sign with which a curve contributes to the count is given by the parity of the number of nodes where two complex conjugate branches intersect. More generally, he showed <cit.> the invariance of analogous counts of real J-holomorphic spheres on real symplectic manifolds of dimensions 4 and 6. In algebraic geometry, this corresponds to counting real rational curves on real surfaces or threefolds.
A topological approach to Welschinger's invariants was developed in the context of open Gromov-Witten theory <cit.>. The terminology `open' comes from open string theory <cit.>. Let X be a symplectic manifold of dimension 2r, let L ⊂ X be a Lagrangian submanifold and let J be a tame almost complex structure on X. For example, take L to be a component of the real points of a projective algebraic variety P over R, take X to be the complex points of the base change to C and take J to be the standard complex structure on X. In this example, we have an anti-symplectic involution ϕ : X → X given by the action of Gal(C/R) such that L ⊂ Fix(ϕ). Let M̅_D,s,t(X/L,β) denote the space of J-holomorphic stable maps u:(Σ,∂Σ) → (X,L) where Σ is a nodal disk with s boundary marked points z_1,…,z_s and t interior marked points w_1,…,w_t of degree u_*[Σ,∂Σ] = β∈ H_2(X,L;Z). Let evb_i : M̅_D,s,t(X/L,β) → L denote the evaluation map at the ith boundary marked point given by (u,Σ,z,w) ↦ u(z_i). Let evi_j : M̅_D,s,t(X/L,β) → X denote the evaluation map at the jth interior marked point given by (u,Σ,z,w) ↦ u(w_j). In nice cases, the space M̅_D,s,t(X/L,β) is a manifold with corners of dimension μ(β)+r-3 + s + 2t where μ: H_2(X,L;Z) → Z is the Maslov index. In general, M̅_D,s,t(X/L,β) is singular but nonetheless admits a virtual fundamental class with dimension given by the same formula. Let
ev_D = evb_1 ×⋯× evb_s × evi_1 ×⋯× evi_t : M̅_D,s,t(X/L,β) → L^s × X^t
denote the total evaluation map. Recall that a relative orientation for a map of smooth manifolds f : M → N is an isomorphism det(TM) ∼→ f^*det(TN). It was shown in <cit.> that a Pin structure on L and an orientation if L is orientable determine a virtual relative orientation for the map ev_D when the dimensions of the domain and codomain coincide. Since M̅_D,s,t(X/L,β) is a manifold with corners, the degree of ev_D is not a priori defined. However, when dim X = 4 or 6, and there is an anti-symplectic involution of X that fixes L, it is possible to glue together certain boundary components of M̅_D,s,t(X/L,β) to obtain a new manifold with corners M_D,s,t(X/L,β) with the following two properties.
*
There is an induced evaluation map ev_D that is still relatively oriented.
*
The image of the boundary ev_D(∂M_D,s,t(X/L,β)) ⊂ L^s × X^t has codimension at least 2.
These two properties allow the degree of ev_D to be defined.
There is a natural doubling map ϖ : H_2(X,L;Z) → H_2(X;Z). The degree of ev_D coincides up to a factor of 2^1-t with the Welschinger invariant counting real J-holomorphic spheres in X representing the class ϖ(β) and passing through s real points and t complex conjugate pairs of points. Open Gromov-Witten theory leads to efficient recursive formulas for Welschinger invariants <cit.> and the enumeration of disks on the quintic threefold in agreement with mirror symmetry <cit.>. It also allows the definition of invariants in arbitrary dimension and for L not necessarily fixed by an anti-symplectic involution <cit.>.
Analogous results over the real and complex numbers may indicate the presence of a common generalization in the A^1-homotopy theory of F. Morel and V. Voevodsky <cit.> valid over more general fields or base rings. We show this to be the case here. A^1-homotopy theory adds homotopy colimits to smooth schemes, allowing one to glue or crush them. For example, one has spheres ℙ^n_k/ℙ^n-1_k, where k is a fixed base field. Morel's A^1-Brouwer degree theorem <cit.> identifies the 𝔸^1-stable homotopy classes of maps from the sphere ℙ^n_k/ℙ^n-1_k to itself with the Grothendieck–Witt group GW(k) of bilinear forms over k, recalled below in Section <ref>. More generally, the theorem computes the (0,0)-stable homotopy sheaf of the sphere spectrum in A^1-homotopy theory over k with a sheaf GW, described more in Section <ref>. Given polynomial equations for a map ℙ^n_k/ℙ^n-1_k →ℙ^n_k/ℙ^n-1_k, the degree is computed as a sum of local degrees in <cit.>. Morel's A^1-Brouwer degree for maps between spheres identifies the target for the 𝔸^1-degrees that we develop here and apply to the above evaluation maps. Away from a codimension 1 locus, the degree is the sum over points of the fiber of the same local degrees present in the degree of a map of spheres.
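For orientation, here is a sketch of the simplest instances, following standard computations in the 𝔸^1-degree literature: at a simple zero p of f with residue field k(p) separable over k and Jacobian J(p) ∈ k(p)^×, the local degree is the trace form Tr_k(p)/k⟨J(p)⟩, and summing these over the fiber of a regular value computes the degree. For example,

deg^𝔸^1(z ↦ z^2) = ⟨1⟩ + ⟨-1⟩,

the hyperbolic form: its rank 2 is the topological degree of the squaring map over ℂ, and its signature 0 the topological degree over ℝ.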
§.§ Statement of results
The present paper aims to develop certain Gromov–Witten invariants and rational curve counts over perfect fields k of characteristic not 2 or 3, by recasting the arguments of <cit.> in A^1-homotopy theory. A relative orientation of a morphism f: M → N of smooth k-schemes is an invertible sheaf L on M together with an isomorphism ρ : Hom(det TM, f^* det TN) → L^⊗ 2. Let S be a del Pezzo surface over k, in the sense that S is a geometrically connected, smooth, projective k-scheme of dimension 2 with ample anticanonical bundle -K_S. Let d_S = K_S · K_S denote the degree of S.
Let M̅_0,n(S,D) denote the space of genus zero stable maps with n marked points in the class D ∈ Pic(S) and consider the total evaluation map ev : M̅_0,n(S,D) → S^n. Let σ = (L_1,…,L_r) be an r-tuple of field extensions k ⊂ L_i ⊂k̅ such that ∑_i=1^r [L_i:k] = n. For an L-scheme X, let Res_L/k X denote the restriction of scalars to k. We construct a corresponding Galois twist (see Section <ref>)
ev_σ: M̅_0,n(S,D)_σ→ (S^n)_σ = ∏_i=1^r Res_L_i/k S.
For the rest of the introduction, we fix n = d - 1 and work under the following hypothesis.
Assume that D is not an m-fold multiple of a (-1)-curve for m>1. Moreover, assume that d_S ≥ 4, or d_S = 3 and d := -K_S · D ≠ 6, or d_S = 2 and d ≥ 7.
§.§.§ Characteristic zero
Assume first that k has characteristic zero.
In <cit.>, we identify a closed subset A ⊂ (S^n)_σ such that M̅_0,n(S,D)_σ^ : = M̅_0,n(S,D)_σ∖ ev^-1(A) has the following two properties analogous to properties (<ref>) and (<ref>) of M_D,s,t(X/L,β) above.
* The restriction of the total evaluation map ev_σ^: M̅_0,n(S,D)_σ^→ (S^n)_σ is relatively oriented.
* The codimension of A ⊂ (S^n)_σ is at least 2.
In the case k =, we can make the relation between M_D,s,t(X/L,β) and M̅_0,n(S,D)_σ^ precise as follows. Take L ⊂ X the Lagrangian submanifold corresponding to S, take s the number of i such that L_i = and t the number of i such that L_i = . There is a commutative diagram
M_D,s,t(X/L,β) ----ev_D----> L^s × X^t
      |                           | ≀
      v                           v
M̅_0,n(S,D)_σ^(k) --ev_σ^--> (S^n)_σ(k)
where the right vertical arrow is a bijection and the left vertical arrow is two-to-one onto a fundamental domain for an action of the group (/2)^t.
Properties <ref> and <ref> allow us to define the degree of ev_σ^.
However, the degree is no longer valued in the integers Z.
Rather, we build on F. Morel's 𝔸^1-degree <cit.> to define a degree in the Grothendieck–Witt ring GW(k). We recall the definition and basic properties of GW(k) in Section <ref> below. One of our main results is the following.
Let S,D,σ satisfy Hypothesis <ref> and assume that S is 𝔸^1-connected. Then there exists an invariant N_S,D,σ in the Grothendieck–Witt ring GW(k) given by the degree of ev_σ^.
§.§.§ Positive characteristic
We turn to the case when k has positive characteristic.
Let M^_0(S, D) ⊂M_0(S,D) be the open subscheme of maps u: P→ S from irreducible genus 0 curves such that P→ u(P) is birational. Such u is said to be unramified if u^* T^* S → T^* P is surjective.
In addition to Hypothesis <ref>, assume k is perfect of characteristic not 2 or 3. If d_S=2, assume additionally that for every effective D' ∈ Pic(S), there is a geometric point f in each irreducible component of M^_0(S, D') with f unramified.
Let Λ be a complete discrete valuation ring with residue field k and quotient field K of characteristic 0. In <cit.> we construct a smooth del Pezzo surface S̃ → Spec Λ equipped with an effective D̃∈ Pic(S̃) with special fibers S̃_k ≅ S and D̃_k ≅ D. We construct a Galois twist
ẽṽ_σ: M̅_0,n(S̃,D̃)_σ→ (S̃^n)_σ
that agrees with _σ on the special fiber.
Moreover, we identify a closed subset Ã⊂ (S̃^n)_σ such that M̅_0,n(S̃,D̃)_σ^ : = M̅_0,n(S̃,D̃)_σ∖ ev^-1(Ã) has the following two properties analogous to properties (<ref>) and (<ref>) of M_D,s,t(X/L,β) above.
* The restriction of the total evaluation map ẽṽ_σ^: M̅_0,n(S̃,D̃)_σ^→ (S̃^n)_σ is relatively oriented.
* The codimension of Ã⊂ (S̃^n)_σ is at least 2.
Properties <ref> and <ref> again allow us to define the degree of ẽṽ_σ^ in GW(k). See Section <ref> for the precise condition. Thus we obtain the following result.
Let S,D,σ satisfy Hypothesis <ref> and assume that S is 𝔸^1-connected. Then, there exists an invariant N_S,D,σ in the Grothendieck–Witt ring GW(k) given by the degree of ẽṽ_σ^. It is independent of the choice of S̃,D̃.
§.§.§ The Grothendieck–Witt ring
In order to explain the enumerative meaning of the invariants N_S,D,σ, we recall the definition and basic properties of the Grothendieck–Witt ring GW(k). The Grothendieck–Witt ring is defined as the group completion of the semi-ring of non-degenerate symmetric bilinear forms over k. Since symmetric bilinear forms over a field are stably diagonalizable, an arbitrary element of this group can be expressed as a sum of rank 1 bilinear forms. Let ⟨ a ⟩ denote the element of GW(k) corresponding to the rank 1 bilinear form k × k → k given by (x,y) ↦ a xy for a in k^*. Replacing the basis {1} of k by {b} for b in k^* gives the equality ⟨ a ⟩ = ⟨ a b^2⟩, and in particular for fields such that k^*/(k^*)^2 is trivial, GW(k) is isomorphic to Z by the homomorphism taking a bilinear form to its rank, i.e. the dimension of the underlying vector space. Applying the rank will result in the classical count of curves over the algebraic closure. For more general fields, GW(k) contains more information. For example,
GW(R) ≅ Z ⊕ Z,  GW(F_q) ≅ Z × F_q^*/(F_q^*)^2,  GW(C((z))) ≅ Z × C((z))^*/(C((z))^*)^2,
GW(Q_q) ≅ GW(F_q) ⊕ GW(F_q)/(⟨ 1 ⟩ + ⟨ -1 ⟩, -(⟨ 1 ⟩ + ⟨ -1 ⟩))Z for 2 ∤ q,
GW(Q) ≅ Z ⊕ Z ⊕ Z/2Z ⊕ ⊕_p prime, p ≠ 2 GW(F_p)/(⟨ 1 ⟩ + ⟨ -1 ⟩)Z.
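As a quick illustration of how such computations go (a direct check, using only the relation ⟨ a ⟩ = ⟨ ab^2 ⟩ together with the rank and signature homomorphisms), the first isomorphism can be verified by hand. For a in R^*,
⟨ a ⟩ = ⟨ 1 ⟩ if a > 0, since a = 1 · (√a)^2, and ⟨ a ⟩ = ⟨ -1 ⟩ if a < 0, since a = (-1) · (√-a)^2.
Hence every element of GW(R) may be written m⟨ 1 ⟩ + n⟨ -1 ⟩ with m,n in Z, and (rank, signature) = (m+n, m-n) identifies GW(R) with the subgroup of pairs of equal parity in Z ⊕ Z, which is abstractly Z ⊕ Z.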
For finite rank field extensions L ⊆ E, there is an additive transfer map
Tr_E/L: GW(E) → GW(L),
which has the following simple description when L ⊆ E is separable: for a symmetric, non-degenerate bilinear form β: V × V → E over E, we can view V as a vector space over L and consider the composition
V × V --β--> E --Tr_E/L--> L
where Tr_E/L is the sum of the Galois conjugates. Since L ⊆ E is separable, Tr_E/L ∘ β is a non-degenerate symmetric bilinear form over L. The value of the transfer map on the class [β] of the form β is given by Tr_E/L[β] = [Tr_E/L ∘ β].
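For instance (a standard computation, spelled out here for concreteness): take L = R, E = C and β = ⟨ 1 ⟩, i.e. β(x,y) = xy. In the R-basis {1, i} of C, the form Tr_C/R ∘ β has Gram entries
Tr(1·1) = 2,  Tr(1·i) = Tr(i) = 0,  Tr(i·i) = Tr(-1) = -2,
so Tr_C/R ⟨ 1 ⟩ = ⟨ 2 ⟩ + ⟨ -2 ⟩ = ⟨ 1 ⟩ + ⟨ -1 ⟩, the hyperbolic form.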
The Milnor conjecture, proven by Voevodsky and Orlov–Vishik–Voevodsky, defines a sequence of invariants beginning with the rank, discriminant, Hasse–Witt invariant, and Arason invariant, which for many fields (including finite fields, number fields, and complete discretely valued fields, say of residue characteristic not 2) give a terminating algorithm for determining whether two elements given by sums of rank 1 forms ⟨ a ⟩ are equal <cit.> <cit.> <cit.> <cit.>. There are many powerful tools for working with Grothendieck–Witt groups. See for example <cit.> <cit.> <cit.>.
§.§.§ Enumerative meaning
To see the enumerative meaning of the degree N_S,D,σ, we generalize the sign associated to a node with two complex conjugate branches over R. Suppose u: P_k(u) → S is a rational curve on S defined over the field extension k(u) of k. Let p be a node of u(P_k(u)). The two tangent directions at p define a degree 2 field extension k(p)[√(D(p))] of k(p), for a unique element D(p) in k(p)^*/(k(p)^*)^2. By <cit.>, the extension k(u) ⊆ k(p) is separable. Let N_k(p)/k(u): k(p)^* → k(u)^* denote the norm of the field extension k(u) ⊆ k(p), given by the product of the Galois conjugates.
The mass of p is defined by
mass(p) = ⟨ N_k(p)/k(u)(D(p)) ⟩ in GW(k(u)).
This makes sense because multiplying D(p) by a square in k(p) multiplies the norm by a square in k(u).
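To connect this with the classical signs over k = R (this unwinding is implicit in the discussion above and is a sketch, not a new statement): suppose k(u) = k(p) = R, so the norm is the identity. If the two branches of the node are real, the tangent directions are individually defined over R, D(p) is a square, and mass(p) = ⟨ 1 ⟩. If the branches are complex conjugate, the extension defined by the tangent directions is C, so D(p) < 0 and mass(p) = ⟨ -1 ⟩. The product of the masses over the nodes of u(P^1) thus recovers the Welschinger sign (-1) raised to the number of nodes with complex conjugate branches.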
The following is valid under the same hypotheses as Theorem <ref> for k of characteristic zero and under the same hypotheses as Theorem <ref> for k of positive characteristic.
If there exist p_1,p_2,…,p_r points of S with k(p_i) ≅ L_i in general position, we have the equality
N_S,D,σ = ∑_u Tr_k(u)/k ∏_p mass(p)
in GW(k), where the sum runs over the degree D rational curves u through the points p_1, …, p_r and the product runs over the nodes p of u(P^1).
So the weighted count of rational curves of degree D through the points p_1,p_2,…,p_r given on the right hand side is independent of the general choice of points. When k is an infinite field and S is rational over k, such a general choice of points exists.
Consequently, for k = C the rank of N_S,D,σ coincides with the corresponding Gromov–Witten invariant. For k=R, the signature of N_S,D,σ recovers the signed counts of real rational curves of Degtyarev-Kharlamov and Welschinger. For k = F_p, ℚ_p, ℚ etc., one obtains a new Gromov–Witten invariant. Andrés Jaramillo Puentes and Sabrina Pauli have work in progress giving an enriched count of rational curves of a fixed degree through rational points on a toric surface via a tropical correspondence theorem, building on their previous work <cit.>.
General position of the points p_1,p_2,…,p_r of S with k(p_i) ≅ L_i means the following. There is a dense open subset U of ∏_i=1^r _L_i/k S such that for any rational point of U, the theorem holds for the corresponding r-tuple of points p_1,p_2,…,p_r of S with k(p_i) ≅ L_i. The open subset U may not contain a rational point. Even for S=ℙ^2, this may happen over a finite field. Nonetheless, N_S,D,σ is a meaningful invariant. It is the 𝔸^1-degree of an evaluation map given in Section <ref> and an analogue of a Gromov–Witten invariant defined over perfect fields of characteristic not 2 or 3, including finite fields. Just as Gromov–Witten invariants make sense of curve counts when general position can not be achieved, these analogues give meaning to curve counts when rational points do not exist.
This degree also retains concrete enumerative significance: The open subset U will contain many points over finite extensions of k and our constructions behave well under base change. Pick a closed point of U with field of definition L. The list σ of field extensions corresponds to a permutation representation of the Galois group Gal(k^s/k) → S_n, where Gal(k^s/k) denotes the Galois group of field isomorphisms of the separable closure k^s of k fixing k and S_n denotes the symmetric group on n letters. This representation may be restricted to Gal(k^s/L), giving rise to σ'. We have N_S⊗L, D⊗L, σ' = N_S,D,σ ⊗ L and the equation of Theorem <ref> holds in GW(L) for the p_1', …, p_r' corresponding to the chosen closed point of U. While base change to L may result in a loss of information, it frequently results in meaningful equalities. For example, ⊗L: GW(k) → GW(L) is injective for k a finite field and [L:k] odd, resulting in infinitely many concrete enumerative equalities in the case of a finite field and S = P^2.
§.§.§ Examples
A^1-connected del Pezzo surfaces include P^2, P^1 × P^1, and Bl_B P^2, where B is a set of closed points {p_1, …, p_r} considered as a subscheme defined over k satisfying |B| = ∑_i=1^r [k(p_i): k] ≤ 7. In this case, d_S = 9 - |B|. In particular, let k be a perfect field of characteristic not 2 or 3. Then, Theorems <ref> and <ref> give invariants N_P_k^2, D, σ and N_P_k^1 × P_k^1, D, σ in GW(k) for all Picard classes D. Similarly, for |B| ≤ 6 and S = Bl_B P_k^2, we have N_S, D, σ for all D ∈ Pic(S) that are not m-fold multiples of a (-1)-curve.
Smooth, proper, k-rational surfaces are also A^1-connected <cit.>. So, Theorems <ref> and <ref> apply to rational del Pezzo surfaces. A smooth cubic surface over k containing two skew lines over k, or two skew lines over a quadratic extension of k which are conjugate, is k-rational <cit.>. Cubic surfaces are del Pezzo surfaces with d_S = 3, so Theorems <ref> and <ref> give invariants N_S, D, σ in GW(k) for any D with d = -K_S · D ≠ 6 and D not an m-fold multiple of a (-1)-curve. For example, let S_0 ⊂ P^3 be the smooth cubic surface given by the zero locus of x^2 y + y^2 z + z^2 w + w^2 x. Then S_0 is rational <cit.>, giving invariants N_S_0, D, σ in GW(k).
We compute N_S,-K_S,σ = ⟨ -1 ⟩ χ^𝔸^1(S) + ⟨ 1 ⟩ + Tr_k(σ)/k ⟨ 1 ⟩, where χ^𝔸^1(S) denotes the 𝔸^1-Euler characteristic. See Example <ref>. For S_0 as in Example <ref>, χ^𝔸^1(S_0) = ⟨ -5 ⟩ + 4(⟨ 1 ⟩ + ⟨ -1 ⟩) <cit.>, which gives N_S_0,-K_S_0,σ = ⟨ 5 ⟩ + ⟨ 1 ⟩ + 4(⟨ 1 ⟩ + ⟨ -1 ⟩) + Tr_k(σ)/k ⟨ 1 ⟩.
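As a consistency check on this formula (a sketch, using the standard value χ^𝔸^1(P^2) = 2⟨ 1 ⟩ + ⟨ -1 ⟩ and reading Tr_k(σ)/k ⟨ 1 ⟩ as ∑_i Tr_L_i/k ⟨ 1 ⟩), take S = P^2_k and D = -K_S, so d = 9 and n = 8. Then
N_P^2,-K,σ = ⟨ -1 ⟩(2⟨ 1 ⟩ + ⟨ -1 ⟩) + ⟨ 1 ⟩ + Tr_k(σ)/k ⟨ 1 ⟩ = 2⟨ 1 ⟩ + 2⟨ -1 ⟩ + Tr_k(σ)/k ⟨ 1 ⟩.
If all L_i = k, then Tr_k(σ)/k ⟨ 1 ⟩ = 8⟨ 1 ⟩ and N = 10⟨ 1 ⟩ + 2⟨ -1 ⟩, which has rank 12, the classical count of rational cubics through 8 general points, and signature 8, Welschinger's count of real rational cubics through 8 general real points.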
§.§.§ Without the connectedness hypothesis
Theorems <ref>, <ref> and <ref> above are special cases of more general results that do not require that S be 𝔸^1-connected.
The Grothendieck–Witt groups GW(E) discussed above, for E a finite type field extension of k, together with certain boundary maps, determine a sheaf of abelian groups on smooth k-schemes
GW : Sm_k^op → Ab,
which is unramified in the sense of e.g. <cit.>. For X a smooth k-scheme, GW(X) ⊂ GW(k(X)) is the subset of the Grothendieck–Witt group of its field of rational functions which is in the kernel of the boundary maps indexed by the codimension 1 points of X. See <cit.>. For a presheaf 𝒳: Sm_k^op → Set, define GW(𝒳) := Hom(𝒳, GW), the maps being taken in presheaves on smooth k-schemes. This will be discussed further in Section <ref>.
To formulate our general result, we use the sheaf of A^1-connected components, π_0^A^1. This sheaf arises naturally when considering the degree of a morphism to a scheme that is not A^1-connected, reflecting the classical phenomenon that a map to a disconnected manifold may have a different degree over different connected components.
Unlike in classical topology, it is not possible to decompose a smooth scheme into A^1-connected pieces; the sheaf of connected components is often a very complicated object. For a smooth scheme X, define π_0^A^1(X) to be the Nisnevich sheaf associated to the presheaf taking a smooth k-scheme U to [U,X]_A^1, where [U,X]_A^1 denotes the (unstable) A^1-homotopy classes of maps from U to X. A smooth scheme X is said to be A^1-connected when π_0^A^1(X) is trivial. This is discussed further in Section <ref>. As in topology, there is a natural map X → π_0^A^1(X). For a k-point x of X and a section N in GW(π_0^A^1(X)), let N(x) ∈ GW(k) denote the pullback of N to x along the composition of x: Spec k → X with X → π^A^1_0(X).
For k of characteristic zero, we have the following result.
Let S,D,σ satisfy Hypothesis <ref>. Then there exists an invariant N_S,D,σ in GW(π_0^A^1(∏_i=1^r Res_L_i/k S)) given by the degree of ev_σ^.
For k of positive characteristic, let S̃,D̃ and ẽṽ_σ^ be as in Section <ref>.
Let S,D,σ satisfy Hypothesis <ref>. Then, there exists an invariant N_S,D,σ in GW(π_0^A^1(∏_i=1^r Res_L_i/k S)) given by the degree of ẽṽ_σ^. It is independent of the choice of S̃,D̃.
The following result is valid under the same hypotheses as Theorem <ref> for k of characteristic zero and under the same hypotheses as Theorem <ref> for k of positive characteristic.
If there exist p_1,p_2,…,p_r points of S with k(p_i) ≅ L_i in general position, we have the equality in GW(k),
N_S,D,σ(p_*) = ∑_u Tr_k(u)/k ∏_p mass(p),
where the sum runs over the rational curves u in class D through the points p_1, …, p_r, the product runs over the nodes p of u(P^1), and p_* is the k-point of ∏_i=1^r Res_L_i/k S given by p_* = (p_1, …, p_r).
When k is an infinite field and S is rational over k, such a general choice of points exists.
We may alternatively package the invariants N_S,D,σ into a single invariant as follows. Let Sym^n_0 S ⊂ Sym^n S be the complement of the union of the pairwise diagonals in the n-fold symmetric product. Let Δ_σ ⊂ (S^n)_σ = ∏_i=1^r Res_L_i/k S denote the union of the pairwise diagonals. The following result is valid under the same hypotheses as Theorem <ref> for k of characteristic zero and under the same hypotheses as Theorem <ref> for k of positive characteristic.
There exists an invariant N_S,D^ in GW(π_0^A^1(Sym^n_0 S)) that pulls back to the restriction of N_S,D,σ for each σ under the natural map ∏_i=1^r Res_L_i/k S ∖ Δ_σ → Sym^n_0 S.
Building on Example <ref>, let S be a twist of Bl_B P_k^2 and let k be a perfect field of characteristic not 2 or 3. Then, Theorems <ref> and <ref> give us invariants N_S, D, σ in GW(π_0^A^1(∏_i=1^r Res_L_i/k S)) for all D ∈ Pic(S) that are not m-fold multiples of a (-1)-curve.
§.§ Acknowledgements
We thank Jean Fasel, Dan Freed, Fabien Morel, and Rahul Pandharipande for useful discussions. ML is supported by the ERC Grant QUADAG: this paper is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 832833).
JS was partially supported by ERC Starting Grant 337560 as well as ISF Grants 569/18 and 1127/22. KW was partially supported by National Science Foundation Awards DMS-1552730, DMS-2001890, and DMS-2103838. She also thanks the Newton Institute and the organizers and participants of the special program Homotopy harnessing higher structures for hospitality while working on this paper, and the A Room of One's Own initiative for focused research time.
§ DEGREE
§.§ Orientations
Define a local complete intersection morphism f: X → Y as in <cit.>. For example, let i be a closed immersion locally determined by a regular sequence and let π be a smooth map. The composition f = π ∘ i is then a local complete intersection morphism. A finite type map between regular schemes is also a local complete intersection morphism <cit.>. For f: X → Y a local complete intersection morphism, the cotangent complex L_f is perfect <cit.> and we may form its determinant, which is a line bundle on X. (We could view a shift of this line bundle by some integer as the determinant viewed as an element of the derived category, but we don't do this.) Define ω_f by
ω_f := det L_f.
For f= π∘ i, there is a canonical isomorphism
ω_f ≅ i^* ω_π⊗ω_i
<cit.>. When we additionally assume that i is a closed immersion determined by a regular sequence and π is smooth as above, we have canonical isomorphisms
ω_i ≅ det(ℐ/ℐ^2)^*,
ω_π ≅ det Ω_π,
where ℐ denotes the ideal sheaf associated to the closed immersion i, and Ω_π denotes the sheaf of Kähler differentials <cit.> <cit.>.[The references treat affine schemes, but the isomorphisms globalize.] A map between smooth k-schemes X and Y is a local complete intersection morphism <cit.> and ω_f = Hom(det TX, f^* det TY).
An orientation for a local complete intersection (lci) morphism f is the choice of an invertible sheaf L on X and an isomorphism ρ: ω_f → L^⊗ 2.
§.§ Global degree of a finite, flat map
Suppose f:X → Y is a finite, flat, local complete intersection morphism with relative orientation ρ:ω_f≅→ L^⊗ 2. We construct the degree of f.
If X and Y are smooth n-dimensional schemes over k and f: X → Y is a finite map with relative orientation ρ: ω_f ≅→ L^⊗ 2, then f is flat by <cit.> and lci <cit.>, and we will be able to construct the degree of f.
Grothendieck–Serre duality produces a canonical isomorphism
ω_f ≅ Hom_𝒪_Y(f_* 𝒪_X, 𝒪_Y),
given by identifying ω_f with f^! 𝒪_Y (see for example <cit.> or <cit.>) and f^! 𝒪_Y with Hom_𝒪_Y(f_* 𝒪_X, 𝒪_Y) <cit.>. The associated trace map Tr_f: f_* ω_f ≅ Hom_𝒪_Y(f_* 𝒪_X, 𝒪_Y) → 𝒪_Y is evaluation at 1, cf. <cit.>. Since f is flat, it follows that f_* L is locally free.
Suppose f: X → Y is a finite, flat, local complete intersection morphism with relative orientation ρ: ω_f ≅→ L^⊗ 2. The degree deg f of f is the bilinear form f_* L ⊗ f_* L → 𝒪_Y given by the composition
f_* L ⊗ f_* L → f_* L^⊗ 2 --f_*(ρ^-1)--> f_* ω_f --Tr_f--> 𝒪_Y.
When we wish to make the orientation explicit, we also write deg(f,ρ).
deg f is symmetric and non-degenerate.
The swap map L^⊗ 2 → L^⊗ 2 defined by taking ℓ⊗ℓ' to ℓ'⊗ℓ is equal to the identity map, and it follows that deg f is symmetric.
We now prove deg f is non-degenerate. Since ρ is an isomorphism, so is the induced map L → Hom(L, ω_f) ≅ Hom(L, f^! 𝒪_Y). Therefore the pushforward
f_* L → f_* Hom(L, f^! 𝒪_Y)
is an isomorphism. Since f is proper, coherent duality as in <cit.> gives a canonical isomorphism f_* Hom(L, f^! 𝒪_Y) ≅ Hom(f_* L, 𝒪_Y), and the composite f_* L → Hom(f_* L, 𝒪_Y) is the desired isomorphism.
A finite étale map f: X → Y admits a canonical relative orientation ω_f ≅ 𝒪_X^⊗ 2, and the resulting A^1-degree is simply the classical trace form.
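To make the preceding example concrete in the smallest nontrivial case (a sketch; we take a ∈ k^* with char k ≠ 2, so that the map below is finite étale): let f: Spec k[x]/(x^2 - a) → Spec k with its canonical relative orientation. In the basis {1, x} of k[x]/(x^2 - a), the trace form is computed by
Tr(1·1) = 2,  Tr(1·x) = Tr(x) = 0,  Tr(x·x) = Tr(a) = 2a,
so deg f = ⟨ 2 ⟩ + ⟨ 2a ⟩ in GW(k). When a is not a square, this is the trace form of the quadratic field extension k(√a)/k.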
This degree commutes with base change. Let
X' --g'--> X
 | f'       | f
 v          v
Y' --g---> Y
be a pullback diagram with f a finite, flat, local complete intersection morphism oriented by ρ. If g is flat, then f' is automatically a local complete intersection morphism by <cit.>. However, in our discussion of base change, we will not assume g to be flat, and instead assume that f' is a local complete intersection morphism. Since f is flat, the square (<ref>) and <cit.> define a canonical isomorphism
ω_f'≅ (g')^*ω_f.
Therefore (g')^*ρ determines an orientation of f'.
Let (<ref>) be a pullback square such that f is a finite, flat, local complete intersection morphism oriented by ρ. Suppose that f' is a local complete intersection morphism. Then we have the equality in GW(Y')
deg(f', (g')^*ρ) = g^* deg(f, ρ).
Let L denote the line bundle on X associated to the orientation ρ, i.e., the orientation of f is the isomorphism ρ: L^⊗ 2→ω_f. The natural map from cohomology and base change and the isomorphism (<ref>) determine the commutative diagram
g^* f_* L^⊗ 2 --≅--> f'_* (g')^* L^⊗ 2 --≅--> f'_* ((g')^* L)^⊗ 2
     | g^* f_* ρ          | f'_* (g')^* ρ          | f'_* ((g')^* ρ)
     v                    v                        v
g^* f_* ω_f ---≅--> f'_* (g')^* ω_f ----≅--> f'_* ω_f',
where the horizontal morphisms are isomorphisms. The claim follows by the commutativity <cit.> of
g^* f_* ω_f -----> f'_* ω_f'
     | g^* Tr_f        | Tr_f'
     v                 v
g^* 𝒪_Y ----≅----> 𝒪_Y'
§.§ Global degree of an oriented map between smooth, proper n-dimensional schemes
We use the degree construction in Section <ref> to construct a degree for a map f: X → Y over a field k satisfying either Assumption <ref> or <ref>.
A map f: X → Y between integral schemes is said to be generically finite if f is dominant (meaning its set-theoretic image is dense) and the associated extension of function fields is finite. For example, if the differential df of a map f between connected, smooth n-dimensional k-schemes is injective at one point, then f is generically finite. If X and Y have more than one connected component, all of which are integral, we say that f is generically finite if f is dominant, only finitely many components of X map to each component of Y, and for each component of X, its function field is a finite extension of the function field of the component of Y containing its image. We include in the definition of f being generically finite the requirement that the connected components of X and Y are integral.
Let X and Y be proper schemes over a field k of dimension n, and suppose that Y/k is smooth. Let U ⊆ Y be an open subset such that Y-U has codimension greater than or equal to 2. Suppose f: X → Y is a generically finite map such that its restriction f|_f^-1(U): f^-1(U) → U is a local complete intersection morphism equipped with a relative orientation ρ: L^⊗ 2≅→ω_f |_f^-1(U).
Let f: X → Y be a generically finite map between proper, smooth n-dimensional k-schemes, and let U be an open subset of Y such that Y-U has codimension at least two. f is proper because X and Y are, whence the base change f|_f^-1(U) is as well; in particular, it is of finite type. The map f|_f^-1(U): f^-1(U) → U is a local complete intersection morphism because it is a finite type map between regular schemes <cit.>, so it makes sense to speak of a relative orientation of f|_f^-1(U). Indeed, such a relative orientation is the data of a line bundle L on f^-1(U) and an isomorphism L^⊗ 2 ≅ Hom(det TX, f^* det TY). A relative orientation of f|_f^-1(U) equips f with the data to satisfy Assumption <ref>.
Moreover, a similar discussion holds if X is only assumed to be a regular, proper k-scheme of dimension n. Simply replace Hom(det TX, f^* det TY) by ω_f|_f^-1(U).
Suppose that Y is smooth of dimension n and proper over k, and that X is a geometrically normal, proper scheme over k of dimension n. Let f: X → Y be a generically finite map. The assumption that X is geometrically normal implies that X/k is smooth at codimension 1 points <cit.>. Since the locus of points where X/k is smooth is open <cit.>, contains the points of codimension 1, and X is regular at any point where X/k is smooth, there is necessarily an open subset U ⊂ Y such that Y-U has codimension greater than or equal to 2 and f^-1(U) is regular. For f to satisfy Assumption <ref>, we then need an orientation on some such restriction.
Generically finite maps are finite over a large open set under the following hypotheses. This will be useful to apply Section <ref> to define the degree of a map satisfying Assumption <ref>.
Let Y be a smooth k-scheme, and let f: X → Y be a proper, generically finite map. Then there exists a codimension 2 closed subset Z of Y such that f is finite and flat over the complement of Z.
For any point x of X, the map 𝒪_Y,f(x) → 𝒪_X,x is injective because f is dominant. Let x be such that y = f(x) is of codimension 1 in Y. Since y is of codimension 1 and Y is smooth, the ring 𝒪_Y,y is a discrete valuation ring. Since the components of X are integral (this is part of our definition of f being generically finite), 𝒪_X,x is torsion free, and it follows that f is flat at x because 𝒪_Y,y is a principal ideal domain.
Let U be the subset of points y of Y such that f is flat at all the points x in f^-1(y). We claim that U is open. Suppose y_0 specializes to y_1 in Y and that y_1 is in U. Let x_0 be a point of f^-1(y_0). Let x̄_0 denote the closure of x_0. Since f is proper, f(x̄_0) is closed. f(x̄_0) contains y_0 and therefore y_1 by construction. Thus we can choose x_1 such that x_0 specializes to x_1 and f(x_1) = y_1. We thus have a flat extension 𝒪_Y,y_1 ⊆ 𝒪_X,x_1 and ideals p_x_0 and p_y_0 in 𝒪_X,x_1 and 𝒪_Y,y_1 respectively such that p_x_0 ∩ 𝒪_Y,y_1 = p_y_0. Since localization is flat, it follows that 𝒪_Y,y_0 ⊆ 𝒪_X,x_0 is flat and U is open as claimed.
By the above, U contains all the points of codimension 1, whence its complement Z is closed of codimension at least 2. Let f^0 denote the restriction of f to f^-1(U). Since proper maps are stable under base change, f^0 is proper. f^0 is flat by construction. Thus the fibers are equidimensional <cit.> and <cit.>. Since f is generically finite, the fibers of f^0 are of dimension 0 and therefore finite. Thus f^0 is a proper map with finite fibers and therefore finite <cit.>.
The degree in this section will be valued in the sections GW(Y) of the Grothendieck–Witt sheaf at Y, so we introduce the definition of GW here. The Grothendieck–Witt sheaf is a sheaf on smooth k-schemes with the Nisnevich topology which can be defined as the sheafification of the functor sending Y to the group completion of the semi-ring of isomorphism classes of locally free sheaves V on Y equipped with a non-degenerate symmetric bilinear form V × V → 𝒪_Y. It has a construction given in <cit.> and is an unramified sheaf by a result of Panin and Ojanguren <cit.>. In particular, suppose U ⊆ Y is an open subset of Y with complement of codimension at least 2. Then a locally free sheaf on U equipped with a symmetric, non-degenerate bilinear form determines an element of GW(Y). (A complete definition of an unramified sheaf is in <cit.>.)
Let f: X → Y and ρ be as in Assumption <ref>, so in particular f: X → Y is a generically finite map equipped with a subset U of Y with complement of codimension at least 2 and a relative orientation ρ of f|_f^-1(U): f^-1(U) → U. By Proposition <ref>, there is an open subset U' of Y with complement of codimension at least 2 such that f|_f^-1(U') is finite and flat. Then f|_f^-1(U ∩ U') is a finite, flat, oriented, local complete intersection morphism, and Definition <ref> constructs a bilinear form deg f|_f^-1(U ∩ U') on a locally free sheaf on U ∩ U'. The associated section of GW(Y) is defined to be deg f, and is independent of the choice of U' because GW is unramified.
Let f: X → Y and ρ be as in Assumption <ref>. deg f in GW(Y) is the section determined by the bilinear form deg f|_f^-1(U ∩ U'). If we wish to make the choice of relative orientation explicit, we write deg(f,ρ) for deg f.
We now relax the hypothesis that X and Y are proper schemes.
Let Y be a smooth k-scheme. Let U ⊆ Y be an open subset such that Y-U has codimension greater than or equal to 2 and f^-1(U) has integral connected components. Suppose f: X → Y is a generically étale map such that its restriction f|_f^-1(U): f^-1(U) → U is a proper, local complete intersection morphism equipped with a relative orientation ρ: L^⊗ 2≅→ω_f |_f^-1(U).
Since f is generically étale, the restriction f|_f^-1(U): f^-1(U) → U is a generically finite map. By Proposition <ref>, we can find a codimension 2 subset Z of U such that f|_f^-1(U-Z): f^-1(U-Z) → (U-Z) is finite and flat.
Let f: X → Y and ρ be as in Assumption <ref>. deg f in GW(Y) is the section determined by the isomorphism GW(Y) ≅→ GW(U-Z) and the bilinear form deg f|_f^-1(U-Z) of Definition <ref>. If we wish to make the choice of relative orientation explicit, we write deg(f,ρ) for deg f.
The degree deg(f,ρ)(y) of f at a point y of Y is defined to be the pullback of deg f along y: Spec k(y) → Y, so deg(f,ρ)(y) lies in GW(k(y)). When the relative orientation is clear from context, we also write deg f(y).
§.§ Global degree of a map oriented away from codimension 1 equipped with lifting data
Let k be a field of characteristic p>0, p ≠ 2, and let f: X → Y be a map to a smooth k-scheme Y. Let U ⊆ Y be a dense open subset. Suppose that the restriction f|_f^-1(U): f^-1(U) → U is a proper, generically finite, local complete intersection morphism equipped with a relative orientation ρ: L^⊗ 2 ≅→ ω_f|_f^-1(U). Then deg f|_f^-1(U) determines a section of GW(U) by Proposition <ref>, Definition <ref>, and the ability to extend sections of GW over the complements of codimension 2 closed subschemes of smooth schemes. Since GW is unramified, the restriction map GW(Y) → GW(U) is injective. We give a condition on f ensuring that deg f|_f^-1(U) extends to a (necessarily unique) section of GW(Y), which we will define to be the 𝔸^1-degree.
Let Y be a smooth k-scheme. Let U ⊆ Y be a dense open subset. Suppose f: X → Y is a generically étale map such that its restriction f|_f^-1(U): f^-1(U) → U is a proper, local complete intersection morphism. Let ρ: L^⊗ 2 ≅→ ω_f|_f^-1(U) be an orientation of f|_f^-1(U): f^-1(U) → U. Suppose that there exists the following data: a discrete valuation ring Λ and a lifting of f: X → Y to a generically étale map f: 𝒳 → 𝒴 of Λ-schemes with 𝒴 → Spec Λ smooth of finite type. Suppose there is an open subset 𝒰 ⊂ 𝒴 such that 𝒰 ∩ Y is dense in U, the intersection of the complement 𝒴 - 𝒰 with the generic fiber is codimension ≥ 2, and the intersection of f^-1(𝒰) with the generic fiber is integral. Suppose that f|_f^-1(𝒰): f^-1(𝒰) → 𝒰 is a proper, local complete intersection morphism of schemes and there is a lift of L to a line bundle ℒ on f^-1(𝒰) ⊂ 𝒳 and a lift of ρ to an isomorphism ℒ^⊗ 2 ≅ ω_f|_f^-1(𝒰).
For A a commutative ring, we let W(A) = GW(A)/M denote the Witt group, defined to be the group completion of the monoid of isomorphism classes of non-degenerate symmetric bilinear forms over A, modulo the ideal M of metabolic forms. See for example <cit.>. In place of constructing an unramified sheaf over a more general base, we use the following results on the Witt and Grothendieck–Witt groups to extend the section of GW(U) mentioned above.
Let R be a regular local ring with quotient field K. Let R^(1) be the set of height one prime ideals of R. For P ∈ R^(1), let R_P ⊂ K denote the localization of R at P. For A ⊂ K a subring, let W̄(A) be the image of W(A) in W(K). We say that purity holds for W(R) if
W̄(R) = ∩_P ∈ R^(1) W̄(R_P)
[Colliot-Thélène and Sansuc <cit.>] Let R be a regular local ring of dimension ≤ 2 containing 1/2. Then purity holds for W(R).
[Knebusch <cit.>] Let R be a Dedekind domain with 1/2 ∈ R. Let K be the quotient field of R. Then purity holds for W(R). Moreover the map W(R) → W(K) is injective.
For a ring containing 1/2, the metabolic forms are the same as the ideal generated by the hyperbolic form ⟨ 1 ⟩ + ⟨ -1 ⟩. Since the ideal in GW generated by the hyperbolic form is the same as the subgroup generated by the hyperbolic form, the two purity results, Theorems <ref> and <ref>, extend in the evident manner to purity statements about GW.
Let Λ be a discrete valuation ring with residue field k of characteristic ≠ 2. Let π: 𝒴 → Spec Λ be a smooth morphism of finite type, and let 𝒰 ⊂ 𝒴 be an open subscheme satisfying the properties that
* the intersection 𝒰_k with the closed fiber 𝒴_k is dense in 𝒴_k,
* and the intersection of the complement 𝒴 - 𝒰 with the general fiber is codimension ≥ 2.
Let q: 𝒱 × 𝒱 → 𝒪_𝒰 be a symmetric nondegenerate bilinear form on a locally free sheaf 𝒱 over 𝒰. Then the restriction of q to 𝒰_k extends uniquely to a section in GW(𝒴_k).
Let x be a codimension one point of 𝒴_k, which we consider as a codimension two point of 𝒴, and let R = 𝒪_𝒴,x. Let η̄ be a generic point of 𝒰_k in the connected component containing x. Let L be the field of rational functions on 𝒴, and let K be the ring of rational functions on 𝒴_k. Since 𝒰_k is dense in 𝒴_k, K is also the ring of rational functions on 𝒰_k. Since R is a regular ring of dimension two, it follows from Theorem <ref> and Remark <ref> that q is in the image of GW(R) in GW(L). Moreover, by Theorem <ref>, the map GW(𝒪_𝒴,η̄) → GW(L) is injective, so the restriction of q to 𝒪_𝒴,η̄ is in the image of GW(R) → GW(𝒪_𝒴,η̄). Restricting to the fiber over k, this implies that the image q̄ of q in GW(K) is in the image of GW(𝒪_𝒴_k,x). Since x was an arbitrary codimension one point of 𝒴_k, it follows that q̄ extends uniquely to a section of GW over 𝒴_k because GW is unramified. (Here GW denotes the unramified sheaf on smooth k-schemes.)
Suppose f: X → Y is as in Assumption <ref>. In particular, f|_f^-1(𝒰): f^-1(𝒰) → 𝒰 is generically étale and proper. We may thus find an open subset 𝒳' ⊂ f^-1(𝒰) on which f is étale. As f|_f^-1(𝒰) is proper, the image of f^-1(𝒰) ∖ 𝒳' under f is closed in 𝒰, whence has open complement 𝒰' in 𝒰. Thus 𝒰' is also open in 𝒴. By construction, f|_f^-1(𝒰'): f^-1(𝒰') → 𝒰' is étale and proper, whence finite and flat <cit.>. We may thus apply Definition <ref> and obtain deg(f|_f^-1(𝒰'): f^-1(𝒰') → 𝒰') in GW(𝒰').
Moreover, the data and hypotheses given in Assumption <ref> imply that the restriction of f|_f^-1(𝒰) to the generic fiber satisfies Assumption <ref>. Let η denote the generic point of Spec Λ and let f_η denote this restriction. We may thus apply Definition <ref> to f_η and obtain deg(f_η) in GW(𝒰_η).
deg(f_η) and deg(f|_f^-1(𝒰'): f^-1(𝒰') → 𝒰') have the same restriction to GW(𝒰' ∩ 𝒴_η) by construction and thus determine a section of GW(𝒰' ∪ 𝒴_η) whose restriction to GW(U) is deg(f,ρ). By Proposition <ref>, it follows that deg(f,ρ) in GW(U) extends to a unique section of GW(Y).
For f: X → Y as in Assumption <ref>, the 𝔸^1-degree deg(f,ρ) in GW(U) extends to a unique section of GW(Y) as above, which we define to be the 𝔸^1-degree.
§.§ GW(π^𝔸^1_0(Y))-valued global degree and GW(k)-valued global degree
The degree of a map M → N between smooth, oriented, compact n-dimensional manifolds is an integer when N is connected. Without assuming that N is connected, the degree can be viewed as an integer valued function on the connected components π_0(N) of N. We show the analogous results in our algebraic setting. For example, the GW(Y)-valued degree of Definition <ref> (or that of Definition <ref> or <ref>) is pulled back from a unique element of the Grothendieck–Witt group GW(k) of k when Y is appropriately connected in an algebraic sense, for example, when Y is 𝔸^1-connected. More generally, this degree is a section of GW(π^𝔸^1_0(Y)), where π^𝔸^1_0(Y) denotes the sheaf of A^1-connected components. We recall the needed definitions and notations.
For smooth k-schemes, or more generally, for simplicial presheaves on smooth k-schemes, X and Y, let [X,Y]_A^1 denote the set of A^1-homotopy classes of maps from X to Y <cit.>. Let π_0^𝔸^1(X) denote the Nisnevich-sheafification of the presheaf taking a smooth k-scheme U to [U,X]_A^1. There is a canonical map
ϕ_X: X →π_0^𝔸^1(X).
For example, ϕ_Spec k : Spec k → π_0^𝔸^1(Spec k) is an isomorphism <cit.>.
X is A^1-connected if the canonical map π_0^𝔸^1(X) → π_0^𝔸^1(Spec k) ≅ Spec k is an isomorphism.
<cit.> If X is a smooth k-variety that is covered by finitely many affine spaces A^n_k, whose pairwise intersections all contain a k-point, then X is A^1-connected.
<cit.> If X is a smooth A^1-connected k-scheme, then X has a rational point. For example, Spec L is not an A^1-connected k-scheme for k ⊂ L a nontrivial finite separable extension.
A Nisnevich sheaf of sets ℱ is said to be 𝔸^1-homotopy invariant if the projection U × A^1 → U induces a bijection ℱ(U) → ℱ(U × A^1) for all smooth schemes U. For a smooth k-scheme X, the map ϕ_X induces the map
ϕ_X^*: Hom(π_0^𝔸^1(X), ℱ) → Hom(X, ℱ) ≅ ℱ(X),
where, in this expression, Hom denotes the set of maps of presheaves on smooth k-schemes. The following proposition is known to experts, but we include a proof for completeness.
Let ℱ be an 𝔸^1-homotopy invariant Nisnevich sheaf of sets and let X be a smooth k-scheme. Then
* ϕ_X^* is a bijection.
* If in addition X is 𝔸^1-connected, then the canonical map ℱ(k) → ℱ(X) is an isomorphism.
The canonical commutative triangle formed by ϕ_X: X → π_0^𝔸^1(X) together with the structure maps from X and π_0^𝔸^1(X) to Spec k gives rise to the commutative triangle formed by ϕ_X^*: Hom(π_0^𝔸^1(X), ℱ) → ℱ(X) together with the canonical maps from ℱ(Spec k) to each. Since X is 𝔸^1-connected, π_0^𝔸^1(X) → Spec k is an isomorphism (of sheaves of sets). Thus we have that (<ref>) implies (<ref>).
For (<ref>), it follows from <cit.> that the map ϕ_X is an epimorphism of Nisnevich sheaves of sets. (To use <cit.>, let I = 𝔸^1, let 𝒳 = X, and let 𝒳' be the 𝔸^1-localization of X.) Thus ϕ_X^* is injective.
For α: X → ℱ, the natural transformations ϕ and π_0^𝔸^1 give the commutative diagram
X -----α-----> ℱ
 | ϕ_X          | ϕ_ℱ
 v              v
π_0^𝔸^1(X) --π_0^𝔸^1(α)--> π_0^𝔸^1(ℱ)
Since ℱ is an 𝔸^1-homotopy invariant sheaf of sets, ϕ_ℱ: ℱ → π_0^𝔸^1(ℱ) is an isomorphism of Nisnevich sheaves of sets (Lemma <ref>). Then α = ϕ_X^*(ϕ_ℱ^-1 ∘ π_0^𝔸^1(α)). Thus ϕ_X^* is surjective.
Let ℱ be an 𝔸^1-homotopy invariant Nisnevich sheaf of sets. Then ϕ_ℱ: ℱ → π_0^𝔸^1(ℱ) is an isomorphism of Nisnevich sheaves of sets.
The category of simplicial presheaves on smooth k-schemes can be given the structure of a simplicial model category with the global injective model structure, the injective local model structure with the Nisnevich topology, and the A^1-model structure. Let L_Nis and L_𝔸^1 denote fibrant replacement functors for the injective local model structure with the Nisnevich topology and the A^1-model structure, respectively. (See for example, <cit.> <cit.>.)
All sheaves, thought of as discrete simplicial sheaves, are globally fibrant <cit.>. Thus ℱ is globally fibrant. Since a local weak equivalence of globally fibrant sheaves is a global weak equivalence, we have that the map
ℱ(U) → L_Nis ℱ(U)
is a weak equivalence of simplicial sets for all smooth k-schemes U <cit.>. By <cit.>, it follows that ℱ is 𝔸^1-local (and thus fibrant in the 𝔸^1-model structure). By <cit.>, the map ℱ → L_𝔸^1 ℱ factors as ℱ → L_Nis ℱ → L_𝔸^1 ℱ. The map L_Nis ℱ → L_𝔸^1 ℱ is an 𝔸^1-weak equivalence by 2-out-of-3, and L_Nis ℱ and L_𝔸^1 ℱ are both fibrant in the injective local model structure. Thus the map L_Nis ℱ → L_𝔸^1 ℱ is a local, whence global, weak equivalence. The sheaf π_0^𝔸^1(ℱ) is the Nisnevich sheaf associated to the presheaf U ↦ π_0 |L_𝔸^1 ℱ(U)|. Since L_𝔸^1 ℱ(U) ≃ L_Nis ℱ(U) ≃ ℱ(U) and ℱ(U) is a set (i.e., a discrete topological space), we have that the natural map ℱ(U) → π_0 |L_𝔸^1 ℱ(U)| is a bijection. Since ℱ is a sheaf, it follows that the natural map ϕ_ℱ: ℱ → π_0^𝔸^1(ℱ) is an isomorphism.
GW is 𝔸^1-homotopy invariant by <cit.>. Applying Proposition <ref>, elements of GW(Y), such as our 𝔸^1-degrees, are pulled back from a unique element of GW(π_0^𝔸^1(Y)) for Y a smooth k-scheme. Thus we can refine our definition of degree to lie in GW(π_0^𝔸^1(Y)).
Let f: X → Y and ρ be as in Assumption <ref> (respectively Assumption <ref>, Assumption <ref>). Define deg(f,ρ) in GW(π_0^𝔸^1(Y)) to be the unique preimage of the degree of Definition <ref> (respectively Definition <ref>, Definition <ref>) under the canonical bijection GW(π_0^𝔸^1(Y)) → GW(Y) of Proposition <ref>. When there is no danger of confusion, we will simply write deg f.
In particular, when Y is A^1-connected, the degree lies in GW(k).
Suppose X is an A^1-connected smooth scheme over k. Then the canonical map GW(k) → GW(X) is an isomorphism.
This follows from Proposition <ref> and the A^1-homotopy invariance of GW <cit.>.
Let f: X → Y and ρ be as in Assumption <ref> and suppose that Y is A^1-connected. Define deg(f,ρ) in GW(k) to be the unique preimage of the degree of Definition <ref> under the canonical bijection GW(k) → GW(Y).
There are alternate connectivity conditions in A^1-homotopy theory which also give rise to a GW(k)-valued degree.
The notion of A^1-chain connected varieties was defined in <cit.> as follows. Let L be a finitely generated separable extension of k, which is defined to mean that there exists a subextension k ⊆ E ⊆ L such that E is purely transcendental over k and L is separable and algebraic over E. Let y and y' be L-points of Y.
<cit.>
An elementary A^1-equivalence between y and y' is a map f: A^1_L → Y_L such that f(t) = y and f(t')=y' for some t,t' in A^1(L).
Elementary A^1-equivalence generates an equivalence relation ∼ on Y(L). Denote the quotient Y(L)/∼ by π_0^A^1, ch(Y)(L).
Y is A^1-chain connected if for every finitely generated separable field extension L/k, the set of equivalence classes π_0^A^1, ch(Y)(L) = Y(L)/∼ consists of exactly 1 element.
For Y a smooth, proper variety over a field k, it is a theorem of A. Asok and F. Morel that A^1-chain connectedness and A^1-connectedness are equivalent <cit.>. More generally, there is an evident map ψ_Y,L:π_0^A^1, ch(Y)(L)→π_0^A^1(Y)(L) sending the class of y∈ Y(L) in π_0^A^1, ch(Y)(L) to the class [y]∈π_0^A^1(Y)(L). Asok and Morel show that if Y is finite type and proper over k, then ψ_Y,L is an isomorphism for all finitely generated separable extensions L of k <cit.>.
Since the pullback along field extensions of finite odd degree induces an injection on Grothendieck–Witt groups and we are interested in a GW(k)-valued degree, we weaken the notion of A^1-chain connectedness as follows.
Y is A^1-odd extended chain connected if for every finitely generated separable field extension L/k, and every pair y,y' in Y(L), there exists a finite extension L ⊆ L' of odd degree such that y ∼ y' in Y(L').
We remark that an n-dimensional smooth k-scheme Y has many closed points with separable residue field: for each y in Y there is an open neighborhood U and an étale map ϕ: U →A^n_k <cit.>. The points of A^n_k with separable residue field are dense. The image ϕ(U) is open <cit.>, and therefore contains points with separable residue field. For all u of U such that ϕ(u) has separable residue field, k ⊆ k(u) is separable.
Let Y be a smooth proper k-scheme which is A^1-odd extended chain connected and assume that Y(k) ≠ ∅. Then for any section β of GW(Y), there is a unique b in GW(k) such that for every point y: Spec k(y) → Y with k ⊆ k(y) separable, writing p: Spec k(y) → Spec k for the structure map (so that y, p, and the structure map of Y form a commutative triangle), we have
y^* β = p^* b.
Choose y_0 in Y(k). We must let b = y_0^* β, showing uniqueness. Let y: Spec k(y) → Y be a closed point. By Corollary <ref> and Example <ref>, if i: k(y) ⊆ L' is an extension of fields, y' is in Y(L') and y' ∼ (y ∘ i), then
i^* y^* β = (y')^* β.
We may view y_0 ∘ p as an element of Y(k(y)). Since Y is A^1-odd extended chain connected, there is an extension i: k(y) ⊆ L' of odd degree such that y ∘ i ∼ y_0 ∘ p ∘ i in Y(L'). Thus
i^* y^* β = i^* p^* b
in GW(L'). An odd degree field extension induces an injection on GW <cit.>[The reference shows the claim on Witt groups, from which the injection on GW follows from the isomorphism GW ≅ W ×_Z/2Z Z.], therefore y^* β = p^* b as claimed.
Let f: X → Y and ρ be as in Assumption <ref>. If Y is additionally A^1-odd extended chain connected and Y(k) ≠ ∅, then define the degree of f, denoted deg(f,ρ) or deg f, to be the b associated by Lemma <ref> to the section of GW(Y) given by the degree of f defined in Definition <ref>.
§.§ Connectivity and restriction of scalars
We will have use of the degrees of maps whose targets are restrictions of scalars. Since the A^1-degree lands in GW(π_0^A^1(-)) applied to the target, we give a result on the connectivity of a restriction of scalars in this section.
We extend the notation of A^1-chain connected components π_0^A^1, ch(Y)(L) of a finite type k-scheme Y over a field L (see Definition <ref>) to products of fields by defining
π_0^A^1, ch(Y)(∏_i=1^rL_i):=∏_i=1^r π_0^A^1, ch(Y)(L_i).
Define ψ_Y,∏_i L_i: π_0^A^1, ch(Y)(∏_i L_i) → π_0^A^1(Y)(∏_i L_i) by sending the class of an r-tuple of points ∏_i=1^r y_i ∈ Y(∏_i=1^r L_i) to the class [∏_i=1^r y_i] in π_0^A^1(Y)(∏_i L_i) as above. Clearly (Y,L) ↦ π_0^A^1, ch(Y)(L) is (covariantly) functorial in Y and L, and ψ_Y,L is natural in (Y,L).
For a field F, let Sch_F denote the category of finite type separated F-schemes.
Let k ⊂ L be a finite separable field extension. We have the Weil restriction functor Res_L/k: Sch_L → Sch_k, which is right adjoint to the extension of scalars functor X ↦ X_L := X ×_Spec k Spec L; passing to the limit over suitable open subschemes, this induces the natural isomorphism X(L⊗_k F) ≅ Res_L/k(X)(F) for all finitely generated extensions F of k. The isomorphisms X(L⊗_k F) ≅ Res_L/k(X)(F) and X(A^1_L⊗_k F) ≅ Res_L/k(X)(A^1_F) are natural with respect to the 0- and 1-sections i_0, i_1: (-) → A^1_(-), and thus induce an isomorphism, natural in F and X
ρ_L,X,F: π_0^A^1, ch(X)(L⊗_k F) ≅→ π_0^A^1, ch(Res_L/k(X))(F)
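As a basic example of how restriction of scalars behaves (standard, and recorded here only for orientation): for X = A^1_L, the adjunction gives Res_L/k(A^1_L)(F) = A^1_L(L ⊗_k F) = L ⊗_k F, which is functorially an affine space of dimension [L:k] over F, so
Res_L/k A^1_L ≅ A^[L:k]_k.
In particular Res_C/R A^1_C ≅ A^2_R.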
Let k⊂ L be a finite separable field extension.
For X a finite type, proper L-scheme and F a finitely generated separable extension of k, the isomorphism (<ref>) induces an isomorphism
π^A^1_0(ψ_X)(L⊗_k F): π_0^A^1(X)(L⊗_k F) ≅→ π_0^A^1(Res_L/k(X))(F),
natural in F and X.
If F is a finitely generated separable extension of k, then L⊗_k F is a finite product of finitely generated separable extensions of k; if X is a proper L-scheme, then Res_L/k(X) is a proper k-scheme. We apply <cit.>, which together with the isomorphism (<ref>) gives us the sequence of isomorphisms
π_0^A^1(X)(L⊗_k F) ≅ π_0^A^1, ch(X)(L⊗_k F) ≅ π_0^A^1, ch(Res_L/k(X))(F) ≅ π_0^A^1(Res_L/k(X))(F),
natural in F and X.
Let k⊂ L be a finite separable field extension. Let X be a smooth proper L-scheme. If X is A^1-connected, then so is Res_L/k(X).
By Lemma <ref> and (<ref>), Res_L/k(X) is A^1-chain connected. (See Definition <ref>.) Since X is a smooth, proper L-scheme, Res_L/k(X) is a smooth proper k-scheme. By <cit.>, it follows that Res_L/k(X) is A^1-connected.
We record the following well-known fact.
Let X and Y be smooth k-schemes which are A^1-connected. Then X ×_k Y is A^1-connected.
π_0^A^1(X ×_k Y) is the sheaf associated to the presheaf sending a smooth k-scheme U to π_0(L_𝔸^1(X ×_k Y)(U)). The functor L_𝔸^1 commutes with finite products (see e.g. <cit.>). It follows that
π_0(L_𝔸^1(X ×_k Y)(U)) ≅ π_0((L_𝔸^1(X) × L_𝔸^1(Y))(U)) ≅
π_0(L_𝔸^1(X)(U) × L_𝔸^1(Y)(U)) ≅ π_0 L_𝔸^1 X(U) × π_0 L_𝔸^1 Y(U).
Since sheafification preserves finite limits, it follows that π_0^A^1(X ×_k Y) ≅ π_0^A^1(X) × π_0^A^1(Y).
§ LOCAL DEGREE
Now suppose that f: X → Y is a map of smooth n-dimensional schemes over k or a map satisfying one of Assumptions <ref>, <ref> or <ref>. Let x be a point of X with image y = f(x). Suppose there are Zariski open neighborhoods W and U of x and y, respectively, such that f(W) ⊂ U and the restriction f|_W : W → U is finite and oriented by ρ: L^⊗ 2 ≅→ ω_f|_W. By Proposition <ref> it is possible to find many x and y which admit such U, W and ρ. We will define the local degree deg_x(f,ρ) in GW(k(y)) under such circumstances in this section. We also use the notation deg_x f for deg_x(f,ρ) when there is no danger of confusion. The degree defined in Section <ref> will be shown to be a sum of local degrees in Proposition <ref>, and we will give a formula to compute deg_x f with the Jacobian in Proposition <ref>.
§.§ Definition and properties
To simplify notation, we let g denote the restriction g = f|_W : W → U, where f, W, and U are as in the beginning of the section. Let g × y: W ×_U Spec k(y) → Spec k(y) denote the pullback of g along y: Spec k(y) → U as in the pullback diagram
W ×_U Spec k(y) -----> W
      | g × y            | g = f|_W
      v                  v
 Spec k(y) ----y----> U.
The fiber g^-1(y) ≅ W ×_U Spec k(y) is the coproduct
W ×_U Spec k(y) ≅ ∐_z ∈ W: f(z) = y Spec 𝒪_g^-1(y),z.
For every z in W mapping to y, let g_z denote the composition of the inclusion Spec 𝒪_f^-1(y),z → g^-1(y) with g × y.
Spec 𝒪_f^-1(y),z -----> W
      | g_z               | g
      v                   v
 Spec k(y) ----y----> U.
Denote the fiber of the sheaf g_* L at y by g_* L(y). Since g is finite, the canonical map g_* L(y) → (g × y)_* L is an isomorphism. By a slight abuse of notation, we also let L denote its pullback to Spec 𝒪_f^-1(y),z. We have a canonical isomorphism
g_* L(y) ≅ ⊕_z ∈ W: f(z) = y (g_z)_* L.
Recall that the pullback of the pairing deg(g,ρ) of Definition <ref> to k(y) is denoted deg(g,ρ)(y), as in Definition <ref>. The direct summands of the isomorphism (<ref>) are pairwise perpendicular under deg(g,ρ)(y).
Let deg_x(f,ρ) in GW(k(y)) be the restriction of deg(g,ρ)(y) to (g_x)_* L. When the relative orientation is clear from context, we also write deg_x f.
For f satisfying Assumption <ref>, there is an open subset U ⊂ Y such that f|_f^-1(U) is finite and oriented by Proposition <ref>. For any x in f^-1(U), we may take W=f^-1(U) and we have just shown that the global degree is the sum of local degrees.
For all y in U, there is an equality deg f(y) = ∑_x ∈ f^-1(y) deg_x f in GW(k(y)).
Given a field extension k(y) ⊆ E, let f_E|_W_E: W_E → U_E denote f|_W ⊗_k E. We have the commutative diagram
W <---π'--- W_E
 | f|_W       | f_E|_W_E
 v            v
U <---π---- U_E
There is a canonical point x̃ of W_E mapping to x under π', and f_E(x̃) = ỹ, where ỹ is the canonical point of U_E mapping to y under π. The pullback of ρ determines an orientation (π')^* ρ of f_E|_W_E because there is a canonical isomorphism (π')^* ω_f|_W ≅ ω_f_E|_W_E <cit.>. Let deg_x(f, ρ) ⊗ E denote the image of deg_x(f, ρ) under the map GW(k(y)) → GW(E).
deg_x̃(f_E, (π')^* ρ) = deg_x(f, ρ) ⊗ E in GW(E).
Let L denote the line bundle on W of the orientation ρ. By the proof of Proposition <ref>, there is an isomorphism (f_E|_W_E)_* (π')^* L ≅ π^*(f|_W)_* L identifying the forms defining π^* deg(f,ρ) and deg(f_E, (π')^* ρ). Under this isomorphism the subspaces ((f_E)_x̃)_* L and π^*(f_x)_* L are identified, proving the claim.
By Proposition <ref>, the computation of deg_x(f,ρ) reduces to the case where y = f(x) is a rational point, because it may be computed after base change to k(y), and in particular we may assume that y is a closed point. We now give such a method of computation for deg_x f for x a closed point with k ⊆ k(x) separable.
As above, let W and U be smooth k-schemes of dimension n, let x be a point of W, and let f|_W : W → U be finite and oriented by ρ. Suppose that y = f(x) is a closed point with k ⊆ k(x) separable. Then f|_W^-1(y) ↪ W is a closed immersion. Consider the pullback diagram
f|_W^-1(y) ---y'---> W
      | f'|_W          | f|_W
      v                v
 Spec k(y) ---y---> U
The map f'|_W is a finite, flat, local complete intersection morphism.
f'|_W is finite and flat because these properties are stable under pullback. Since Spec k(y) and U are regular schemes, the finite type map y: Spec k(y) → U is a local complete intersection morphism <cit.>. Since f|_W is finite and W and U are smooth and of dimension n, f|_W is flat <cit.>. Thus the pullback y' is a local complete intersection morphism <cit.>. Since W and U are smooth k-schemes, f|_W is a local complete intersection morphism. Thus the composition f|_W ∘ y' = y ∘ f'|_W is lci as well. Since U → Spec k is smooth, it follows that the structure map for f|_W^-1(y) over k is lci. Since Spec k(y) → Spec k is smooth, it follows that f'|_W is lci as claimed <cit.>.
Since f'|_W is flat, the square (<ref>) and <cit.> define a canonical isomorphism
ω_f'|_W≅ (y')^*ω_f|_W.
Since f|_W is finite, x determines a closed and open subscheme Spec 𝒪_f^-1(y),x ↪ f|_W^-1(y) of f|_W^-1(y). By <cit.>, the inclusion of a closed and open component in a locally Noetherian scheme is a local complete intersection morphism, because Koszul-regular immersions are lci. Define f_x: Spec 𝒪_f^-1(y),x → Spec k(y) to be the composition of Spec 𝒪_f^-1(y),x ↪ f|_W^-1(y) and f'|_W, so in particular f_x is finite, flat, lci and fits into the commutative diagram
Spec 𝒪_f^-1(y),x ---i_f,x---> W
      | f_x                     | f|_W
      v                         v
 Spec k(y) ------y------> U.
The isomorphism (<ref>) defines an isomorphism ω_f_x≅ i_f,x^* ω_f|_W, whence i_f,x^*ρ defines a relative orientation of f_x.
Let W and U be smooth k-schemes of dimension n, let x be a point of W, and let f : W → U be finite and oriented by ρ. Suppose that y = f(x) is a closed point with k(y) ⊆ k(x) separable. Then
deg_x(f, ρ) = deg(f_x, i_f,x^* ρ).
f is flat by <cit.>. Both deg_x(f, ρ) and deg(f_x, i_f,x^* ρ) are bilinear forms on the k(y)-vector space (f_x)_* i_f,x^* L. deg(f_x, i_f,x^* ρ) is obtained by composing
((f_x)_* i_f,x^* L)^⊗ 2 → (f_x)_* i_f,x^*(L^⊗ 2) --i_f,x^*ρ--> (f_x)_* ω_f_x → k(y),
and deg_x(f, ρ) is the composition
((f_x)_* i_f,x^* L)^⊗ 2 → ((f|_W)_* L(y))^⊗ 2 → (f|_W)_*(L^⊗ 2)(y) --ρ--> (f|_W)_* ω_f|_W(y) → k(y).
By the commutative diagram
((f |_W)_* L)^⊗ 2[r] y^*(f |_W)_* (L^⊗ 2) [r]^ρ y^*(f |_W)_* ω_f |_W
((f_x)_* i_f,x^* L)^⊗ 2[r] [u] (f_x)_* i_f,x^* (L^⊗ 2) [u] [r]^i_f,x^*ρ (f_x)_* ω_f_x[u]
it suffices to check that the trace maps (f_x)_* ω_f_x → k(y) and (f|_W)_* ω_f|_W(y) → k(y) are compatible in the sense that the outermost rectangle in the diagram
(f|_W)_* ω_f|_W(y) -----> k(y)
      ^                     | id
(f_y)_* ω_f_y ----------> k(y)
      ^                     | id
(f_x)_* ω_f_x ----------> k(y)
is commutative. To see this, let f_y: f^-1(y) = Spec k(y) ×_U W → Spec k(y) be the pullback of f|_W along y. The associated trace map (f_y)_* ω_f_y → k(y) fits in the commutative diagram above, so we may check the commutativity of the upper and lower squares. The commutativity of the lower square follows from <cit.> applied to the composition Spec 𝒪_f^-1(y),x → Spec k(y) ×_U W → Spec k(y). For the upper square, the trace map for finite flat maps commutes with base change, as can be seen from the description “evaluate at 1" of <cit.>. In more detail, let A ⊂ B be a ring map corresponding to a finite flat map f: Spec B → Spec A and let y be a point of Spec A. View Hom_A(B,A) as a coherent sheaf on Spec A. Then there is a canonical isomorphism f_* ω_f ≅ Hom_A(B,A) and the trace map Tr_B/A: Hom_A(B,A) → A is evaluation at 1. The pullback of Tr_B/A by Spec k(y) → Spec A is the evaluation at 1 map on Hom_k(y)(B_y, k(y)), as claimed.
§.§ Computation with the Jacobian
Let f: X → Y be an oriented map between smooth, connected schemes of dimension n. So we have a line bundle L on X and an isomorphism
ρ: Hom(∧^n TX, f^* ∧^n TY) → L^⊗ 2.
Then f induces a map Tf on tangent bundles and thus a global section
det Tf ∈ Hom(∧^n TX, f^* ∧^n TY).
Taking the image under ρ gives a global section ρ(det Tf) of L^⊗ 2.
A section of the square of a line bundle determines a canonical section of 𝒪_X/(𝒪_X^*)^2. Namely, suppose σ is a section of L^⊗ 2. Around any point x, choose a local trivialization of L, identifying σ with an element of 𝒪_X,x. Any two choices of local trivialization will change this element by the square of a unit in 𝒪_X,x.
The Jacobian Jf of f is the section of 𝒪_X/(𝒪_X^*)^2 corresponding to ρ(det Tf) by Construction <ref>:
Jf = ρ(det Tf) ∈ 𝒪_X/(𝒪_X^*)^2.
Let Jf_x (respectively Jf(x)) be the image of Jf in 𝒪_X,x/(𝒪_X,x^*)^2 (respectively k(x)/(k(x)^*)^2).
If f is étale at x, then deg_x f = Tr_k(x)/k(y) ⟨ Jf(x) ⟩.
Since f is étale at x, the canonical map 𝒪_f^-1(y),x → k(x) is an isomorphism and k(y) ⊆ k(x) is a separable extension. The form deg(f, ρ)(y) on f_* L_f^-1(y) is defined by
f_* L_f^-1(y) × f_* L_f^-1(y) → f_* L^⊗ 2_f^-1(y) --ρ--> f_*(ω_f)_f^-1(y) → 𝒪_Y(y) ≅ k(y)
where the first map is the canonical map associated to the tensor product. The local degree deg_x f is then the restriction of deg(f, ρ)(y) to f_* L_f^-1(y),x. Choosing any local trivialization of L around x, we obtain an isomorphism L_f^-1(y),x ≅ 𝒪_f^-1(y),x ≅ k(x). The canonical map associated to the tensor product becomes multiplication on k(x). Since f is finite when restricted to a neighborhood of x, there is a canonical isomorphism
(ω_f)_x ≅ Hom_𝒪_Y,y(𝒪_X,x, 𝒪_Y,y)
from the adjunction f_* ⊣ f^!. Under this identification the trace Tr: (f_x)_*(ω_f)_x → 𝒪_Y,y is evaluation at 1.
Since f is étale at x, the isomorphism (<ref>) sends the section det Tf of (ω_f)_x to Tr: 𝒪_X,x → 𝒪_Y,y <cit.>, where here Tr: 𝒪_X,x → 𝒪_Y,y denotes the trace associated to the finite étale algebra 𝒪_Y,y ⊆ 𝒪_X,x. Thus for any a in k(x) ≅ 𝒪_f^-1(y),x, the composition
(f_x)_* L^⊗ 2_f^-1(y),x --ρ--> (f_x)_*(ω_f)_f^-1(y),x → 𝒪_Y(y) ≅ k(y)
sends a ρ(det Tf(x)) to Tr_k(x)/k(y) a, where Tr_k(x)/k(y): k(x) → k(y) denotes the trace of the finite étale algebra k(y) ⊆ k(x). Remembering the chosen local trivialization of L around x, the bilinear form
k(x) × k(x) ≅ (f_x)_* L_f^-1(y),x × (f_x)_* L_f^-1(y),x → (f_x)_* L^⊗ 2_f^-1(y),x --ρ--> (f_x)_*(ω_f)_f^-1(y),x → 𝒪_Y(y) ≅ k(y)
represents the local degree deg_x f. Since we have fixed a local trivialization of L around x, we have ρ(det Tf(x)) ∈ k(x). From the above, it follows that for any a, b in k(x), the form (<ref>) sends (a,b) in k(x) × k(x) to
Tr_k(x)/k(y)(ab/ρ(det Tf(x))).
Thus deg_x f = Tr_k(x)/k(y) ⟨ 1/Jf(x) ⟩ = Tr_k(x)/k(y) ⟨ Jf(x) ⟩ as claimed.
Taking x to be the generic point of X, this shows the Jacobian can be used to compute deg f.
Let f: X → Y be a separable map between smooth, proper, connected k-schemes of dimension n. Suppose there exists a closed subset Z of Y of codimension at least 2 such that the restriction of f to f^-1(Y-Z) is oriented. Let η denote the generic point of X. Then Tr_k(X)/k(Y) ⟨ Jf(η) ⟩ is in the image of GW(Y) ⊆ GW(k(Y)) and the degree of f is given by
deg f = Tr_k(X)/k(Y) ⟨ Jf(η) ⟩.
Moreover, if Y is either A^1-connected or Y has a k-point and is A^1-odd extended chain connected, then Tr_k(X)/k(Y) ⟨ Jf(η) ⟩ is in the image of the pull-back GW(k) → GW(k(Y)).
f is étale at η because f is a separable morphism, so we may apply Proposition <ref> to x = η. The first statement then follows from Proposition <ref> and the fact that GW is an unramified sheaf. The second assertion follows from Corollary <ref> in the first case and Lemma <ref> in the second case.
Combining Proposition <ref> with Proposition <ref> implies:
Let y be a regular value of f. Then deg f = ∑_x ∈ f^-1(y) Tr_k(x)/k(y) ⟨ Jf(x) ⟩ in GW(k(y)).
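To illustrate the corollary in the most basic case (a worked example under the assumption char k ≠ 2): consider f: P^1_k → P^1_k given by f(x) = x^2, which is relatively orientable since Hom(det TP^1, f^* det TP^1) has even degree, and whose Jacobian is Jf = 2x. Let a ∈ k^* be a regular value.
* If a = b^2 is a square, the fiber consists of the rational points ±b, and deg f = ⟨ 2b ⟩ + ⟨ -2b ⟩ = ⟨ 1 ⟩ + ⟨ -1 ⟩, since this rank 2 form is isotropic, hence hyperbolic.
* If a is not a square, the fiber is the single closed point with residue field k(√a), and deg f = Tr_k(√a)/k ⟨ 2√a ⟩. In the basis {1, √a}, this form has Gram entries Tr(2√a) = 0, Tr(2a) = 4a, Tr(2a√a) = 0, so it is again the hyperbolic form ⟨ 1 ⟩ + ⟨ -1 ⟩.
In both cases deg f = ⟨ 1 ⟩ + ⟨ -1 ⟩, illustrating that the sum of local degrees is independent of the chosen regular value.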
§ COUNTS OF RATIONAL CURVES
In the remaining sections, we give quadratically enriched counts of rational curves on del Pezzo surfaces passing through the appropriate number of points by taking the A^1-degree of maps from moduli spaces of such curves.
§.§ Kontsevich moduli space of rational curves on del Pezzo surfaces
We set up notation for the needed moduli spaces and maps, consistent with that of <cit.>, to which we refer the reader for further information and references. Let k be a perfect field.
A del Pezzo surface over k is a smooth and projective k-scheme S such that the anti-canonical sheaf -K_S is ample. The degree d_S of a del Pezzo surface S is the self-intersection K_S^(2).
Let S be a del Pezzo surface over k of degree d_S. Fix an effective Cartier divisor D on S and let d:=(-D· K_S)>0 denote its degree with respect to -K_S.
Let M_0,n(S, D) denote the moduli stack of n-pointed stable maps u:P^1→ S in curve class D equipped with n points of P^1 (meaning sections from the base). Let M̅_0,n(S, D) denote the compactified moduli stack of n-pointed, stable maps of a genus zero curve to S, in the curve class D. By definition this means that for an algebraically closed extension F of k, an F-point of M̅_0,n(S,D) corresponds to the data (u: P_F → S_F, p_1, …, p_n) where P_F is a semistable genus 0 curve over F, u is a stable map, p_i: F →P_F are disjoint sections landing in the smooth locus and u_* [P_F] ∈ D. See <cit.> and <cit.> as well as <cit.> for more information.
Our counts of rational curves will be the A^1-degrees of appropriate modifications of the following evaluation map.
Define the evaluation map :M̅_0,n(S,D)→ S^n by
(u:P→ S, (p_1,…, p_n)) ↦ (u(p_1),…, u(p_n))
For a point q_*=(q_1, …, q_n) of S^n, note that the fiber ^-1(q_*) of over q_* consists of stable maps u: P→ S with u(p_i) = q_i for i=1,…,n. Thus ^-1(q_*) consists of the rational curves passing through (q_1,…,q_n) together with a chosen point p_i of u^-1(q_i) (which usually is no choice at all, as u^-1(q_i) will be a single point for generally chosen q_i). Since the A^1-degree of a map is a sum over the fiber ^-1(q_*) of a local degree (Proposition <ref>), our quadratically enriched counts of rational curves will be -degrees of appropriate modifications of the map .
Consider a geometric point (u: P^1_F → S_F, p_1, …, p_n) of M_0,n(S, D). Except in a few special cases (d=-1, or d=1,2 and d_S≥ 3 etc.), the image curve u(P^1) is not smooth. In <cit.>, we study the geometry of M̅_0,n(S, D) using the singularities of the image curve. An ordinary double point of u(P^1) is a point q of u(P^1) such that there exist distinct points p_1 ≠ p_2 of P^1 such that u(p_1) = q = u(p_2) and T_q S is spanned by the images of T_p_1P^1 and T_p_2P^1. There is an open subscheme of the moduli stack M̅_0,n(S, D) whose geometric points (u:P^1→ u(P^1), (p_1,…, p_n)) are unramified in the sense that the induced map on cotangent spaces du: u^* T^*S → T^* P^1 is surjective and such that any singularities of u(P^1) are ordinary double points. It is well-known that M^_0,n(S, D) is either empty or a smooth scheme of dimension n+d-1. See for example <cit.>. Assume that char F ≠ 2,3. We say that u has an ordinary cusp at p ∈P^1 if T_pu: T_p(P) → T_u(p)S is the zero map, u^-1(u(p))={p}, and we may choose parameters x, y for _S,q^∧ (where q = u(p)) and t for ^∧_P^1, p so that u^*(x)= t^2, u^*(y)=v t^3 where v is a unit in ^∧_P^1, p. We say u has an ordinary tacnode at distinct smooth points p_1,p_2 ∈P if u(p_1) = u(p_2) and T_p_1u( T_p_1P) = T_p_2u(T_p_2P), and we may choose parameters x, y for _S,q^∧ and t_i for ^∧_P^1, p_i with i=1,2 so that t_i:=u_p_i^*(x), u_p_1^*(y)=0 and u_p_2^*(y)=t_2^2. We say that u has an ordinary triple point at distinct smooth points p_1,p_2, p_3 ∈P if u(p_1) = u(p_2) = u(p_3) = q and the images of T_p_i u: T_p_i P→ T_q S are pairwise linearly independent subspaces.
(<cit.>) Let k be a perfect field, S a del Pezzo surface over k and D an effective Cartier divisor on S. Let n=d-1. Then : M^_0,n(S, D) → S^n is étale.
We introduce a list of assumptions, which will be convenient for future reference, but which are not running assumptions throughout the remainder of the paper.
* char k = 0.
* D is not an m-fold multiple of a (-1)-curve for m>1.
* One of the following holds.
* d_S≥ 4
* d_S=3 and d≠ 6
* d_S = 2 and d≥ 7
We remind the reader that M^_0(S,D) ⊆ M_0(S,D) represents the locus of stable maps with irreducible domain curve which are birational onto their images.
(<cit.>.) Suppose Basic Assumptions <ref>(<ref>)(<ref>)(<ref>) hold for k, S, D. Then there is a closed subset A⊂ S^n with codim A≥ 2 such that the inverse image M̅_0,n(S,D)^:=M̅_0,n(S,D) ∖^-1(A) satisfies the following.
*
M̅_0,n(S,D)^=∅ if and only if M^_0(S,D)=∅. If M^_0(S,D)≠∅, then the moduli space M̅_0,n(S,D)^ is a geometrically irreducible smooth finite-type k-scheme, and the restriction of to :M̅_0,n(S,D)^→ S^n∖ A is a finite, flat, dominant morphism.
*
The evaluation map is étale in a neighborhood of each f ∈M̅_0,n(S,D)^ with f unramified.
*
M̅_0,n(S,D)^ contains a dense open subset of M_0,n^(S;D).
* Geometric points f of M̅_0,n(S,D)^ correspond to birational maps.
*
Let f be a geometric point of M̅_0,n(S,D)^∖ M_0,n^(S;D), which we consider as a morphism f:P→ S for some genus 0 semi-stable curve P. Then f satisfies:
* If P=P^1 is irreducible, then the image curve C:=f(P^1) has one singular point q that is not an ordinary double point, and C has either an ordinary cusp, an ordinary tacnode or an ordinary triple point at q. Moreover, the marked points do not map to q and f is free.
* If P is not irreducible, then P=P_1∪P_2, with P_i≅P^1. The image curve C:=f(P) has only ordinary double points as singularities. Moreover, if n_i of the n marked points of P are in P_i, and C_i:=f(P_i) has degree d_i:=-K_S· C_i, then d_i-1≤ n_i≤ d_i for i=1,2.
Define D_⊂M̅_0,n(S,D)^∖ M_0,n^(S;D) (respectively D_) to be the closure of the locus of those u in M̅_0,n(S,D)^ such that C has an ordinary cusp (respectively tacnode). D_ and D_ are divisors by <cit.>.
In <cit.>, we define the double point locus π: ^→ M_0,n^(S,D) based on <cit.>. For k, S, D satisfying Basic Assumptions <ref>(<ref>)(<ref>)(<ref>), we furthermore define the double point locus π: ^→M̅_0,n(S,D)^. See <cit.>. The loci ^ and ^ are smooth k-schemes, constructed as a closure of a locus of geometric points (u: P→ S, (p_1, …, p_n), (p_n+1, p_n+2)) where (u: P→ S, (p_1, …, p_n)) is a geometric point of M_0,n^(S,D) or M̅_0,n(S,D)^ respectively, and p_n+1≠ p_n+2 are smooth points of P such that u(p_n+1) = u(p_n+2). These double point loci are certain closed subschemes in the fiber product of universal curves. They contain an open subscheme of stable maps u: P^1 → S together with a pair of distinct points of P^1 mapping to the same point q of S, and n additional marked points of P^1. Note that q is a double point of u(P^1), and the double point loci are introduced to have a useful moduli space of such double points.
Let S be a smooth del Pezzo surface over a perfect field k equipped with an effective Cartier divisor D. Then π: ^→_0,n(S,D)^ is finite étale of degree δ = 1/2D·(K_S + D) + 1.
π is proper by construction and is shown to be étale in <cit.>. The arithmetic genus of u(P^1) is δ = 1/2D·(K_S + D) + 1. See for example <cit.>. Thus points of M^_0,n(S, D) correspond to maps where u(P^1) has δ ordinary double points.
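For orientation, the arithmetic genus formula specializes to a familiar count (a standard check, not taken from the text above):

```latex
% For S = \mathbb{P}^2 and D = eH the class of degree-e plane curves,
% K_S = -3H, so
\[
\delta = \tfrac{1}{2}\, D\cdot(K_S + D) + 1
       = \tfrac{1}{2}\, e(e-3) + 1
       = \tfrac{(e-1)(e-2)}{2},
\]
% the classical number of nodes of a rational plane curve of degree e
% (e.g. \delta = 1 for rational nodal cubics, e = 3).
```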
Let f:Y→ Z be a finite, flat morphism of smooth k-schemes that is étale over each generic point of Z. Then f_*_Y is a locally free _Z-module. The multiplication map on _Y gives the morphism of _Z-modules m:f_*_Y⊗__Zf_*_Y→ f_*_Y. Since f_*_Y is a finite locally free _Z-module, we have the trace map _f:f_*_Y→_Z defined by sending s∈ f_*_Y(U) to the trace of the multiplication map × s: f_*_Y(U)→ f_*_Y(U). The trace form
f_* _Y ⊗ f_* _Y →_Z
(b, b') ↦(bb')
defines a map τ: f_* _Y → f_* _Y^-1. The determinant
τ: f_* _Y → f_* _Y^-1
determines a section (f) of f_* _Y^-2 and a canonical isomorphism
(D((f))) ≅ ( f_* _Y)^⊗ 2.
Since _f is a surjection if f is étale, we see that the divisor of (f) is supported on the branch locus of f. The associated element of _Z/(_Z^*)^2 (see Construction <ref>) has the property that at every closed point z of Z,
(f) = ((b_i b_j)_i,j)
where b_i runs over a basis of f_* _Y as an _Z-module.
The map π: ^→M̅_0,n(S,D)^ is finite, flat and étale over the generic points of M̅_0,n(S,D)^ by <cit.>. Thus π_* _^ is locally free and we have a discriminant map
π: _M̅_0,n(S,D)^→ (π_* _^)^⊗ -2.
(<cit.>.) Suppose Basic Assumptions <ref>(<ref>)(<ref>)(<ref>) hold for k, S, D. Let n=d-1. Let be the invertible sheaf on M̅^_0,n(S,D) given by
=(π_*_(-D_))^-1
Then the composition d∘_π^-1:^⊗ 2→ω_ is an isomorphism on M̅^_0,n(S,D).
Suppose Basic Assumptions <ref>(<ref>)(<ref>)(<ref>) hold for k, S, D. Then the isomorphism of Theorem <ref> is an orientation on :M̅^_0,n(S,D) → S^n. It follows from Theorem <ref> that this map satisfies Assumption <ref>, where U ⊆ Y (in the notation of Assumption <ref>) is S^n ∖ A ⊆ S^n. We therefore have
(:M̅^_0,n(S,D) → S^n) ∈(S^n)
as in Definition <ref>.
§.§ Positive characteristic
Let k be a field of characteristic p>3 and let S be a smooth del Pezzo surface over k with an effective Cartier divisor D.
For every effective Cartier divisor D' on S, there is a geometric point f in each irreducible component of M^_0(S, D') with f unramified.
Assumption <ref> is automatically satisfied in characteristic 0 <cit.> and in characteristic p>3 when d_S ≥ 3 <cit.>.
Suppose (k,S,D) satisfies Basic Assumptions <ref>(<ref>)(<ref>) and Assumption <ref>. Let d= -K_S · D and n= d-1. If M_0,n(S,D)^ is empty, we consider the degree of _k: M_0,n(S,D)^→ S^n to be zero. Suppose M_0,n(S,D)^ is not empty. In this section, we construct the data of Assumption <ref> for _k: M_0,n(S,D)^→ S^n.
The ordinary double point locus M_0,n(S,D)^ is an open subscheme of M̅_0,n(S,D) (see e.g. <cit.>). By <cit.>, M_0,n(S,D)^ is empty if and only if there are no u:P_F^1 → S_F in curve class D with P_F^1 → u(P_F^1) birational (where F is some algebraically closed extension of k). For S a twisted form of a smooth toric del Pezzo, it is possible to give a complete list of (S,D) for which M_0,n(S,D)^ is empty, although we do not include this result here.
The evaluation map M̅_0,n(S,D) → S^n being proper, _k(M̅_0,n(S,D)∖ M_0,n(S,D)^) is closed in S^n. By <cit.>, this closed set has positive codimension in S^n. Letting U ⊂ S^n denote its complement, we have _k: _k^-1(U) → U a proper map between smooth k-schemes, which is furthermore étale by Lemma <ref>. By Remark <ref>, _k: _k^-1(U) → U has a canonical relative orientation. This constructs the data of Assumption <ref> over the special fiber.
It remains to construct the data of Assumption <ref> over a lift to characteristic 0. Let Λ be a complete discrete valuation ring with residue field k and quotient field K of characteristic 0. In <cit.> we construct S̃→Λ a smooth del Pezzo surface equipped with an effective Cartier divisor D̃ with special fiber S̃_k ≅ S and D̃_k ≅ D. The general fiber of (S̃, D̃, Λ) gives rise to (S̃_K, D̃_K, K) satisfying Basic Assumptions <ref>(<ref>)(<ref>)(<ref>). See <cit.>. Set = S̃^n in the notation of Assumption <ref>.
There is a compactified moduli stack M̅_0,n(S̃, D̃) of n-pointed, stable maps of a genus zero curve to S̃, in the curve class D̃ by <cit.>. There is more discussion in <cit.>. As before, the moduli M̅_0,n(S̃, D̃) admits the evaluation map
: M̅_0,n(S̃, D̃) →S̃^n
In <cit.>, we construct a closed subset Ã⊂S̃^n such that
* the special fiber Ã_k contains _k(M̅_0,n(S,D) ∖M̅_0,n(S,D)^) and is codimension ≥ 1 in S^n.
* Ã is codimension 2 in S̃^n.
* _K^-1(S̃_K^n ∖Ã_K) can be taken to be M̅_0,n(S̃_K, D̃_K)^ in Theorem <ref>
* M̅_0,n(S̃, D̃)^:=^-1(S̃^n∖Ã) is a smooth Λ-scheme (see in particular <cit.>)
In the notation of Assumption <ref>, set = S̃^n∖Ã and let =M̅_0,n(S̃, D̃)^. In <cit.> we construct an orientation on the proper map M̅_0,n(S̃, D̃)^→ restricting to the constructed orientation on _k: _k^-1(U) → U. We therefore have the data of Assumption <ref> on the map _k: M_0,n(S,D)^→ S^n and may define
(_k: M_0,n(S,D)^→ S^n) ∈(S^n)
to be the degree of Definition <ref>.
§ TWISTS OF
Let S be a smooth del Pezzo surface over a field k equipped with a relative Cartier divisor D. Let d= -K_S · D. Let k ⊆ k^s denote a separable closure of k. Let
σ= (L_1, …, L_r)
be an r-tuple of subfields L_i ⊂ k^s containing k for i=1,…, r subject to the requirement that ∑_i=1^r [L_i : k] = n. We think of σ as the fields of definition of a list of points of S that our curves will be required to pass through.
The list σ is used to define twists _σ of the evaluation map in the following manner. The Galois group (k^s/k) acts on the k^s-points of k-schemes. Thus σ gives rise to a canonical homomorphism (σ) :(k^s/k) →_(σ), where (σ) denotes the k^s-points of ∐_i=1^r L_i and _(σ)≅_n denotes the symmetric group. For convenience, we fix an identification (σ) = {1,2,…, n} and thus a canonical isomorphism _(σ) = _n.
Permuting the factors of S defines an inclusion of _n into (S^n). We include _n into (_0,n(S,D)) by permutation of the marked points, and acting trivially on the underlying curve and the morphism to S: for τ in _n, set
τ(u: C → S, p_1, …, p_n) = (u: C → S, p_τ^-1(1), …, p_τ^-1(n)).
The 1-cocycle
g ↦(σ)(g) × g
(k^s/k) → (X_k^s)
for X=S^n, X=_0,n(S,D), X=_0,n^(S,D), or X = ^ determines twists X_σ. Since _ and π_ are Galois equivariant for the twisted action, they descend to k-maps
_σ: _0,n(S,D)_σ→ (S^n)_σ
π_σ:^_σ→_0,n^(S,D)_σ
There is a natural isomorphism
ϕ_σ:(S^n)_σ≅∏_i=1^r _L_i/k S
We may assume that each L_i is a subextension of k in k̅. Fix a Galois subextension M of k in k̅ containing each of the L_i. Let G denote the Galois group of M over k.
Using the universal property of the restriction of scalars, it suffices to define for each finitely generated k-algebra A, an isomorphism
ϕ_A:(S^n)_σ(A)→∏_i=1^rS(L_i⊗_kA),
natural in A.
For this, it follows from the theory of descent that (S^n)_σ(A) is naturally in bijection with the G-invariant subset of S^n(M⊗_kA)=S(M⊗_kA)^(σ), where g∈ G acts on M⊗_kA through its action on M and on S(M⊗_kA)^(σ) by
g·(ι↦ x_ι∈ S(M⊗_kA)) := (ι↦ x^g_g^-1·ι∈ S(M⊗_kA)).
Let x_*:=(x_ι∈ S(M⊗_kA))_ι∈(σ) be a G-invariant element. Let ι_i:L_i→k̅ be the inclusion as a subextension of k, and take g∈ G with g·ι_i=ι_i. Note that (M/L_i) is exactly the isotropy group of ι_i under the G-action on (σ) and that
(M⊗_kA)^(M/L_i)=L_i⊗_kA
since A is flat over k. Looking at the ι_i component of x_*, we see that x_ι_i is invariant under (M/L_i), so x_ι_i is in L_i⊗_kA. Thus we have a well-defined map
ϕ_A:(S(M⊗_kA))^G→∏_i=1^rS(L_i⊗_kA)
sending x_* to (x_ι_1,…, x_ι_r). To map in the other direction, start with
(x_1,…, x_r)∈∏_i=1^rS(L_i⊗_kA) and take an arbitrary ι∈(σ), corresponding to an embedding ι:L_i↪ M⊂k̅ over k for a unique i. Then there is an element g∈ G, unique modulo (M/L_i), with ι=g·ι_i. We then take x_ι∈ S(M⊗_kA) to be x_i^g. It follows directly that x_*:=(x_ι∈ S(M⊗_kA))_ι is G-invariant and that ϕ_A(x_*)=(x_1,…, x_r).
Thus ϕ_A is a split surjection. The injectivity of ϕ_A follows from the identity x_ι=x_ι_i^g if ι=g·ι_i and x_*=(x_ι∈ S(M⊗_kA))_ι is G-invariant. The naturality of ϕ_A in A is clear, which completes the proof.
We may thus view _σ as a map with codomain ∏_i=1^r _L_i/k S. Similarly to the fibers of , the fibers of _σ consist of rational curves passing through chosen points. For simplicity of notation, consider a k-point q_* of (S^n)_σ. By (<ref>), q_* is given by an r-tuple (q_1,…, q_r) where q_i is an L_i point of S. In particular, q_* gives rise to a canonical geometric point of S^n (use the above identification of (σ) with {1,…,n}) which we will also denote by q_*. The geometric points of the fibers of _σ and over q_* are canonically identified.
§.§ Characteristic 0
Let n=d-1. Let k be a perfect field, S a del Pezzo surface over k and D an effective Cartier divisor on S satisfying Assumptions <ref>(<ref>)(<ref>)(<ref>). We then have :M̅_0,n(S,D)^→ S^n and π: ^→M̅_0,n(S,D)^ by Theorem <ref> and <cit.> (recalled in Section <ref>) respectively. The cocycle (<ref>) defines twists
_σ:M̅_0,n(S,D)_σ^→ S^n_σ
π: ^_σ→M̅_0,n(S,D)_σ^
In <cit.>, it is shown that _σ is a map between smooth k-schemes and setting _σ:=[(π_σ)_*__σ(-D_)]^-1, we have the isomorphism
d_σ∘_π_σ^-1:(_σ)^⊗ 2→ω__σ.
Note that _σ is generically étale because its base change to k^s is, by Theorem <ref>. This gives the data of Assumption <ref>. Let
_σ∈(S_σ^n)
denote the degree of Definition <ref>.
§.§ Positive characteristic
Place ourselves in the situation of Section <ref>, which is to say k is a field of characteristic p>3. S is a smooth del Pezzo surface over k with an effective Cartier divisor D. Suppose (k,S,D) satisfies Basic Assumptions <ref>(<ref>) and Assumption <ref> and that M_0,n(S,D)^≠∅. (Note Remark <ref> on the condition M_0,n(S,D)^≠∅. As the condition M_0,n(S,D)^≠∅ is equivalent to M_0,n(S,D)^≠∅, the condition Basic Assumptions <ref>(<ref>) is also satisfied.) In this section, we construct the data of Assumption <ref> for _σ: M_0,n(S,D)^_σ→ S_σ^n, thereby defining
_σ∈(S^n).
(If M_0,n(S,D)^ = ∅, we consider the degree of _σ to be 0 as above.)
Let U ⊂ S^n_σ be the complement of _σ(M̅_0,n(S,D)_σ∖ M_0,n(S,D)^_σ). As in Section <ref>, U is a dense open subset of S^n_σ; _σ^-1(U) → U is a proper, étale map between smooth k-schemes, and therefore has a canonical relative orientation.
Take (Λ, S̃, D̃) as in Section <ref>. Enlarging the closed subset Ã⊂S̃^n to be _n invariant, we construct in <cit.> a map between smooth Λ-schemes _σ:M̅_0,n(S̃, D̃)^_σ→S̃^n_σ, a line bundle _σ:=[(π̃_σ)_*__σ(-D_)]^-1, and an isomorphism
d_σ∘_π_σ^-1:(_σ)^⊗ 2→ω__σ.
on M̅_0,n(S̃, D̃)^_σ restricting to the canonical relative orientation on the special fiber. This constructs the data of Assumption <ref> for _σ: M_0,n(S,D)^_σ→ S_σ^n and the degree _σ in (S_σ^n).
§ THE SYMMETRIZED MODULI SPACE
Let S be a smooth del Pezzo surface over a field k equipped with an effective Cartier divisor D with -K_S · D ≥ 1. The symmetric group _n acts on S^n by permuting the factors, and acts freely on M̅_0,n(S,D) by permuting the marked points and acting trivially on the underlying curve and morphism to S. We obtain a _n-equivariant diagram
M̅_0,n+1(S,D) →M̅_0,n(S,D) → S^n
projecting from the universal curve to the moduli stack, followed by the evaluation map. Passing to quotients gives rise to a symmetrized evaluation map
^: M̅_0,n(S,D)^→^n S.
Because symmetric powers of surfaces are not necessarily smooth, it will be useful to let ^nS^0⊂^nS be the open subscheme formed as the quotient of S^n∖{diagonals} by _n. Rational points of ^nS^0 correspond to {p_1,…,p_r} where p_i is a closed point of S and ∑_i=1^r [L_i:k] = n, where L_i=k(p_i). Fibers of ^ correspond to rational curves passing through the p_i. So by symmetrizing, we are allowing for curve counts through non-rational points. The choice of the field extensions L_i is not fixed. This will assemble the degrees of Section <ref> into a section of (^n S^0).
§.§ Characteristic 0
Suppose (k,S,D) satisfies Assumptions <ref>(<ref>)(<ref>)(<ref>). Let d= -K_S · D ≥ 1 and n=d-1. We then have :M̅_0,n(S,D)^→ S^n and π: ^→M̅_0,n(S,D)^ by Theorem <ref> and <cit.> (recalled in Section <ref>) respectively. By enlarging A ⊂ S^n of Theorem <ref>, we may assume that M̅_0,n(S,D)^ and ^ inherit the action of _n from M̅_0,n(S,D) and M̅_0,n+1(S,D) ×_M̅_0,n(S,D)M̅_0,n+1(S,D), respectively. By Theorem <ref>(<ref>), there are no contracted components in the stable maps corresponding to geometric points of M̅_0,n(S,D)^, whence S^n ∖ A ⊂ S^n∖{diagonals}. Furthermore, M̅_0,n(S,D)^, ^, S^n∖{diagonals} and S^n∖ A are all quasi-projective because S is projective over k and : M̅_0,n(S,D)^→ S^n ∖ A is finite (Theorem <ref>(<ref>)), and π: ^→M̅_0,n(S,D)^ is finite by <cit.>. We may thus take their quotients in the category of quasi-projective k-schemes by the action of _n. Since the actions are free, the quotients, denoted M̅_0,n(S,D)^__n, ^__n, ^n_0 S, U= (S^n∖ A)/_n respectively, are smooth quasi-projective k-schemes. We have maps
^__nπ__n⟶M̅_0,n(S,D)^__n^__n⟶ U ⊆^n_0 S
The composition of ^__n with the inclusion is denoted
__n:M̅_0,n(S,D)^__n→^n_0 S
We will construct the data of Assumption <ref> on __n. By <cit.>, we have a line bundle __n:=[π__n*_^__n(-D_)]^-1 on M̅_0,n(S,D)^__n together with an isomorphism
d^__n∘_π__n^-1:(__n)^⊗ 2→ω_^__n.
This constructs the data of Assumption <ref>, defining
__n∈(^n_0 S)
by Definition <ref>.
^n S is not smooth by purity of the branch locus <cit.> <cit.> applied to the quotient map S^n →^n S. So even though the complement of ^nS^0⊂^nS is codimension 2, purity results on Witt groups do not give that the associated restriction map of is an isomorphism. In fact, the sections ^ of (^nS^0) do not extend in general: for d=3 and S= P^2, the value at, for example, 8 real points has a different weighted count than the value at 6 real points and one complex conjugate pair.
§.§ Positive characteristic
As in Sections <ref>, <ref>, let S be a smooth del Pezzo surface over a field k of characteristic p>3. Let D be an effective Cartier divisor on S. Suppose (k,S,D) satisfies Basic Assumptions <ref>(<ref>)(<ref>) and Assumption <ref>. Noting Remark <ref>, we will assume M_0,n(S,D)^≠∅. (This also implies Basic Assumptions <ref>(<ref>).)
Let d= -K_S · D and n=d-1. Note that M_0,n(S,D)^ is stable under the free action of _n. As in Section <ref>, we will take the quotient in quasi-projective schemes by _n of an evaluation map. It is convenient to pass to a dense open subset of M_0,n(S,D)^ first. By <cit.>, the closed set A_k :=(M̅_0,n(S,D) ∖ M_0,n(S,D)^) has positive codimension. (A_k is closed because the evaluation map is proper and M_0,n(S,D)^ is open.) Let M_0,n(S,D)^, := ^-1(S^n ∖ A_k). By <cit.>, :M_0,n(S,D)^→ S^n is étale, whence locally quasi-finite. Since M_0,n(S,D)^, is proper over S^n∖ A, it follows that M_0,n(S,D)^, is quasi-projective. Let M_0,n(S,D)^,__n := M_0,n(S,D)^,/_n denote the quotient in quasi-projective schemes, which is smooth because M_0,n(S,D)^, is and the action is free. We thus have a map between smooth quasi-projective k-schemes
^,__n: M_0,n(S,D)^,__n→^n_0 S
where as before ^n_0 S denotes the quotient of S^n minus the diagonals by _n. In this section, we construct the data of Assumption <ref> for ^,__n.
Since ^,__n is a proper, local complete intersection morphism, which is étale, ^,__n has a canonical orientation (Remark <ref>), so it remains to construct the data of Assumption <ref> over a discrete valuation ring Λ.
Take (Λ, S̃, D̃) as in Section <ref>. We enlarge Ã⊂S̃^n to be _n invariant and obtain the smooth Λ-scheme M̅_0,n(S̃, D̃)^:=^-1(S̃^n∖Ã) as in Section <ref>. Taking quotients by _n, we construct in <cit.> a map
__n: M̅_0,n(S̃, D̃)^__n→^n_0 S̃
between smooth Λ-schemes, a line bundle __n:=[π̃__n, *___n(-D_)]^-1, and an isomorphism
d __n∘_p̃ĩ__n^-1:(__n)^⊗ 2→ω___n.
This constructs the data of Assumption <ref> for ^,__n, defining
^,__n∈(^n_0 S)
by Definition <ref>.
§ LOCAL DEGREE OF
In Sections <ref>, <ref>, <ref> and <ref>, respectively, we have constructed A^1-degrees
(_σ:M̅_0,n(S,D)_σ^→ S^n_σ ) ∈(S^n_σ)
(_σ: M_0,n(S,D)^_σ→ S_σ^n) ∈(S^n_σ)
(__n:M̅_0,n(S,D)^__n→^n_0 S) ∈(^n_0 S)
(^,__n: M_0,n(S,D)^,__n→^n_0 S) ∈(^n_0 S)
under appropriate hypotheses. In this section, we compute the local degrees of these maps at a point of the locus of parametrized curves with only ordinary double points, e.g. M_0,n(S,D)^ in the untwisted, unsymmetrized version.
We first compare the twisted and symmetrized degrees and local degrees. There are pullback diagrams <cit.>
M̅_0,n(S,D)^_σ ⊂ M̅_0,n(S,D)^_σ ──→ M̅_0,n(S,D)^__n
        _σ ↓                              ↓ __n
        S^n_σ ────_S───→ ^n S

M̅_0,n(S,D)^_σ ──→ M̅_0,n(S,D)^,__n
        _σ ↓                  ↓ __n
        S^n_σ ────_S───→ ^n S,
where the latter is the characteristic p>0 version of the former. Let S^n_σ,0 denote the inverse image under _S of the open subset ^n_0 S of ^n S, and let
i_S^n,0: S^n_σ,0→ S^n
denote the inclusion. Pulling back (<ref>) by ^n_0 S →^n S produces the diagrams
M̅_0,n(S,D)^_σ ⊂ M̅_0,n(S,D)^_σ ──→ M̅_0,n(S,D)^__n
        _σ ↓                              ↓ __n
        S^n_σ,0 ────_S,0───→ ^n_0 S

M̅_0,n(S,D)^_σ ──→ M̅_0,n(S,D)^,__n
        _σ ↓                  ↓ __n
        S^n_σ,0 ────_S,0───→ ^n_0 S,
We have equalities of (global) degrees
_S,0^* (__n:M̅_0,n(S,D)^__n→^n_0 S)= i_S^n,0^* (_σ:M̅_0,n(S,D)_σ^→ S^n_σ)
_S,0^* (^,__n: M_0,n(S,D)^,__n→^n_0 S) = i_S^n,0^* (_σ: M_0,n(S,D)^_σ→ S_σ^n)
in (S^n_σ,0).
Follows from (<ref>) and Proposition <ref>.
Consider a point (u: P→ S_k(u), p_1,…, p_n) in the twisted locus of parametrized curves with only ordinary double points M̅_0,n(S,D)^_σ. Let (u) denote the image of u in M̅_0,n(S,D)^__n or M̅_0,n(S,D)^,__n depending on if the characteristic of k is 0 or positive respectively. Let k((u)) ⊆ k(u) be the associated field extension. By Propositions <ref> and <ref>, the local degree _(u)__n of __n and the local degree of _σ at u are related by
_(u)__n⊗_k((u)) k(u) = _u_σ.
Note that this holds both in characteristic p and in characteristic 0.
For any closed point u' of the symmetrized locus of parametrized curves with only ordinary double points, there is an associated k^s point of M̅_0,n(S,D)^__n,k^s such that (k^s/k(u')) acts by permuting the marked points. This defines a map (k^s/k(u')) →_n which determines a list σ = (L_1, …, L_r) of intermediate fields k(u')⊆ L_i ⊆ k^s by the Galois correspondence. (So, L_1 is the fixed field of the stabilizer of 1, L_2 is the fixed field of the stabilizer of the smallest integer not in the orbit of 1, etc.) We obtain
: M̅_0,n(S,D)^_σ→M̅_0,n(S,D)^,__n, k(u')
The k^s point corresponding to u' determines a point u of M̅_0,n(S,D)^_σ such that (u) = u' and the induced extension of residue fields k(u') ⊆ k(u) is an isomorphism. Base change to k(u') does not affect _u'__n. We have therefore reduced the calculation of the local degree of __n at a closed point to the calculation of the local degree of _σ.
It follows similarly that for any closed point y' of ^n_0 S, we can compute the fiber of
(__n:M̅_0,n(S,D)^__n→^n_0 S)
in characteristic 0 (or ^,__n (M_0,n(S,D)^,__n→^n_0 S) in characteristic p) at y' by choosing an appropriate σ and y in S^n_σ and computing the fiber of
(_σ:M̅_0,n(S,D)_σ^→ S^n_σ )
at y (respectively the fiber of (_σ: M_0,n(S,D)^_σ→ S_σ^n) at y).
As above, let π_σ: ^→ M_0,n(S,D)^ denote the finite étale map from the twisted double point locus. (See Proposition <ref>.) Let ⟨π_σ⟩ in (k(u)) denote the corresponding discriminant. See Remark <ref>.
Let u be in M_0,n(S,D)^. Then the local degree of _σ at u is computed as _u _σ = _k(u)/k((u))⟨π_σ⟩.
On M_0,n^(S,D)_σ, the composition
ω__σ≅→(D_) →(D_ + 2 D_) = ( (π_σ)) ≅→ (π_σ,*_^)^⊗ 2
constructed in Sections <ref> and <ref> defines the orientation of _σ. Let D_σ denote the section of (_σ^* T^* S^n, T^* M^_0,n(S,D)) given by the differential of _σ, and let D_σ be its determinant, so D_σ is a section of ω__σ. By construction, the first isomorphism of (<ref>) takes D_σ to the section 1 of (D_), and the last isomorphism takes 1 to (π_σ). Since the map (D_) →(D_ + 2 D_) takes the function 1 to the function 1, we have that D_σ↦(π_σ). Therefore _u _σ = _k(u)/k((u))⟨π_σ⟩ by Proposition <ref>.
The discriminant ⟨π_σ⟩ of Proposition <ref> has a concrete geometric description. The point u corresponds to a map u: P→ S_k(u) from a smooth genus 0 curve over k(u) together with points p_1,…, p_n of P_k^s permuted appropriately under the Galois action. Over k^s, the image curve u_k^s(P_k^s) has δ = 1/2D·(K_S + D) + 1 nodes permuted under the action of (k^s/k(u)). Over k(u) these nodes p consist of points of S with various residue fields k(p). At each node, there are exactly two tangent directions of u(P) defining a degree 2 field extension k(p) ⊆ k(p)[√(m(p))] where m(p) is called the mass of the node. See Definition <ref>.
Suppose u is a point in M^_0,n(S,D)_σ, and let u: P→ S_k(u) be the corresponding map. Let denote the set of nodes of u(P). For p ∈, let m(p) denote the mass of the node p. Then
_u _σ = _k(u)/k(_σ(u))∏_p ∈ m(p)
We show π_σ(u) = ∏_p∈ m(p) in k(u)^*/(k(u)^*)^2, which is sufficient by Proposition <ref>.
The double point locus ^↪ X ×_M^_0, n(S,D) X, where X = M^_0, n+1(S,D) is the universal curve, inherits an action of ℤ/2 from the involution on X ×__0, n X. Let denote the quotient, which will be called the universal node. We obtain a factorization of π_σ
^_σ→→_0,n(S,D)_σ.
Pulling back over k(u) → M_0,n^(S,D), the universal node splits as the disjoint union of the nodes p of u(P) since u is in the ordinary double point locus, giving ⊗ k(u) ≅∏_p nodes k(p). The pullback of the double point locus ^ over k(p) is a degree 2 extension _k(p)→ k(p), whence of the form k(p)[√(D(p))] → k(p) or k(p) ∐ k(p) → k(p) (a split node) because the characteristic of k is not 2. Since the discriminant is multiplicative over products of rings, it follows that
π_σ(u) = ∏_p nodes (_k(p)→ k(u))
By Lemma <ref>,
(_k(p)→ k(u)) = (k(p)/k(u))^2 N_k(p)/k(u) (_k(p)→ k(p))
= N_k(p)/k(u) D(p),
where we set D(p) = 1 in the case of a split node.
We include the following well-known lemma for completeness.
Let K ⊂ L ⊂ M be a tower of finite degree field extensions. Then,
(M/K) = (L/K)^[M:L] N_L/K((M/L)).
Let {x_i}_i ∈ S be a basis for L over K and let {y_j}_j ∈ T be a basis for M over L. Define matrices A,B, by
A_i^j := _L/K(x_ix_j), B_i^j: =_M/L(y_iy_j).
Observe that {x_iy_j}_i ∈ S, j ∈ T is a basis for M over K. Define a matrix C by
C_ij^kl := _M/K(x_iy_j x_k y_l), i,k ∈ S, j,l ∈ T.
So, we have
(M/K) = [(C)] ∈ K^×/(K^×)^2 , (M/L) = [(B)] ∈ L^×/(L^×)^2,
(L/K) = [(A)] ∈ K^×/(K^×)^2.
Calculate
C_ij^kl = _L/K(x_ix_k _M/L(y_jy_l)) = _L/K(x_ix_k B_j^l).
Write B_j^l x_i = ∑_m ∈ S D_ij^mlx_m for D_ij^ml∈ K. Then
_L/K(x_ix_k B_j^l) = ∑_m ∈ S D_ij^ml_L/K(x_mx_k) = ∑_m ∈ S D_ij^ml A_m^k.
Writing the result of the preceding calculation in terms of matrix multiplication, we have
C = D ∘ (𝕀_[M:L]⊗ A).
Taking determinants, we obtain
(C) = (A)^[M:L](D) = (A)^[M:L] N_L/K((B)).
A reference for the last equality is <cit.>. The lemma follows.
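A concrete instance of the lemma (our own numerical check, using the standard diagonal trace forms of quadratic extensions): take K = Q, L = Q(√2), M = Q(√2,√3).

```latex
% In the bases \{1,\sqrt{2}\}, \{1,\sqrt{3}\}, \{1,\sqrt{2},\sqrt{3},\sqrt{6}\}
% the trace forms are diagonal, and
\[
\operatorname{disc}(L/K) = \det\begin{pmatrix}2 & 0\\ 0 & 4\end{pmatrix} = 8,
\qquad
\operatorname{disc}(M/L) = \det\begin{pmatrix}2 & 0\\ 0 & 6\end{pmatrix} = 12,
\]
% so disc(L/K)^{[M:L]} N_{L/K}(disc(M/L)) = 8^2 \cdot 12^2 = 9216 = 96^2,
% matching the direct computation disc(M/K) = 4 \cdot 8 \cdot 12 \cdot 24 = 9216;
% both sides are squares in \mathbb{Q}^{\times}, as the lemma predicts.
```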
§ ENUMERATIVE THEOREMS
Let k be a perfect field of characteristic not 2 or 3. Let S be a del Pezzo surface with an effective Cartier divisor D, satisfying Hypothesis <ref>. If k is of positive characteristic, assume additionally that Hypothesis <ref> is satisfied. Fix a list σ = (L_1, L_2, …, L_r) of field extensions k ⊆ L_i ⊆ k^s such that ∑_i=1^r [L_i: k] = n:=(-D· K_S) -1. Then, there is N_S,D, σ in (π_0^A^1(∏_i=1^r _L_i/k S)) such that for any generally chosen points p_i of S, i=1,…, r , with k(p_i) ≅ L_i, we have the equality in (k)
N_S,D, σ(p_*) = ∑_{u rational curve on S in class D through the points p_1, …, p_r} _k(u)/k∏_{p node of u(P^1)} mass(p)
where p_* is the k-point of ∏_i=1^r _L_i/k S given by p_* = (p_1, …, p_r).
In Sections <ref> and <ref> we constructed
(_σ:M̅_0,n(S,D)_σ^→ S^n_σ ) ∈(S^n_σ)
(_σ: M_0,n(S,D)^_σ→ S_σ^n) ∈(S^n_σ)
in the case of characteristic 0 and positive characteristic, respectively. By (<ref>), S^n_σ≅∏_i=1^r _L_i/k S. In particular, the k-points p_* of ∏_i=1^r _L_i/k S correspond to points p_i of P^2, i=1,…, r , with k(p_i) ≅ L_i. We may therefore define N_S,D, σ in (π_0^A^1(∏_i=1^r _L_i/k S)) to be ^^1_σ as in Definition <ref>. By Proposition <ref>, for any generally chosen k-point p_* of ∏_i=1^r _L_i/k S, we have N_S,D, σ(p_*) = ∑_u ∈_σ^-1 (p_*)_u _σ. By construction, _σ^-1 (p_*) is the set of rational curves on S in class D passing through the points p_*=(p_1,…,p_r). By <cit.>, for a generally chosen p_*, the rational curves u on S in class D passing through(p_1,…,p_r) determine points in M_0,n(S,D)^_σ. By Proposition <ref>, the local degree _u _σ is given by _u _σ = ∏_p node of u(P^1)mass(p), completing the proof.
In the notation of Theorem <ref>, suppose that S is additionally ^1-connected. There is N_S,D, σ in (k) such that for any generally chosen points p_i of S, i=1,…, r , with k(p_i) ≅ L_i, we have the equality in (k)
N_S,D, σ= ∑_{u rational curve on S in class D through the points p_1, …, p_r} _k(u)/k∏_{p node of u(P^1)} mass(p)
When S is ^1-connected, the twisted product ∏_i=1^r _L_i/k S is as well, by Proposition <ref>. By Corollary <ref>, the section N_S,D, σ of Theorem <ref> is pulled back from a unique element N_S,D, σ in (k), which has the claimed property by Theorem <ref>.
By construction, the invariants (_σ) in (k) only depend on the list of field extensions {L_1,…,L_r}. Thus the multi-set {k(p_i): i = 1,…, r} of the fields of definition of the p_i, counted with multiplicity, determines the count: the count of degree D rational curves through points with the same multi-set of field extensions is independent of the chosen points. This strengthens <cit.>, where this statement is proven for [L_i:k] ≤ 2.
§ EXAMPLES
Let S be an ^1-connected del Pezzo surface over a field k and D a Cartier divisor on S. Let σ = (L_1, L_2, …, L_r) be a list of separable field extensions k ⊆ L_i ⊆ k^s such that ∑_i=1^r [L_i: k] = n:=(-D· K_S) -1. Let
k(σ) : = ∏_i=1^r L_i.
So _k(σ)/k⟨ 1 ⟩ = ∑_i=1^r _L_i/k⟨ 1 ⟩ is the sum of the trace forms of the field extensions k ⊆ L_i. For example, for k of characteristic not dividing 2d, d ∈ k, and σ = (k,k,…,k,k(√(d))), we have
_k(σ)/k⟨ 1 ⟩ = (n-2)⟨ 1 ⟩ + ⟨ 2 ⟩ + ⟨ 2d ⟩.
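The last identity is immediate from the trace form of a quadratic extension (our verification):

```latex
% For L = k(\sqrt{d}) with basis \{1, \sqrt{d}\} and char k not dividing 2d:
\[
\operatorname{Tr}_{L/k}(1\cdot 1) = 2,\qquad
\operatorname{Tr}_{L/k}(1\cdot \sqrt{d}) = 0,\qquad
\operatorname{Tr}_{L/k}(\sqrt{d}\cdot \sqrt{d}) = 2d,
\]
% so Tr_{L/k}<1> = <2> + <2d>, while each of the n-2 copies of k in \sigma
% contributes <1>, giving (n-2)<1> + <2> + <2d>.
```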
Table <ref> computes some values of N_S,D,σ. Justifications follow below. (Note in particular that for appropriate σ and S, many of the N_S,D,σ in Table <ref> are not only sums of ⟨± 1⟩'s; a lot more is happening here than over R and C.) For S a hypersurface, for example an ^1-connected cubic surface, the ^1-Euler characteristic can be computed explicitly using <cit.>.
§.§ ^1-connected del Pezzo surfaces
The following is a theorem of Asok and Morel <cit.>.
A smooth proper surface over k which is rational over k is ^1-connected.
<cit.> A smooth cubic surface is rational over k if it contains two skew lines over k or two conjugate skew lines defined over k(√(a)) for some degree 2 extension k ⊂ k(√(a)). It then follows <cit.> that x^2y+y^2z+z^2w + w^2 x = 0 and x^3+y^3+z^3+w^3=0 determine ^1-connected smooth cubic surfaces over fields of characteristic not 2 or 3.
§.§ N_S,-K_S,σ for d_S ≥ 3
For D = -K_S, we have n=d_S -1. Since d_S ≥ 3, a choice of basis for H^0(S, -K_S) determines an embedding S ↪P_k^d_S. For a general choice of points (p_1,…,p_r) of S such that k(p_i) ≅ L_i, the ∑_i=1^r [k(p_i):k] = n linear conditions on H^0(P_k^d_S, (1)) corresponding to vanishing on p_i for i=1,…,r are independent. (As before, the meaning of the phrase “a general choice of points (p_1,…,p_r) of S such that k(p_i) ≅ L_i" is that there is a nonempty open set U of ∏_i=1^r _L_i/kS such that the claim holds for rational points of U. Moreover, this U is stable under base change, so there will be rational points after some finite extension of k, giving rise to a potentially different Galois representation and list σ = (L_1, …, L_r') for which the result holds.) Thus
{ f ∈ H^0(P_k^d_S, (1)): f(p_i) = 0 for i=1,… r}
is a 2-dimensional vector space over k. Choose a basis {f,g} and let
X = { [s,t] × x : tf(x) + sg(x) = 0 }⊂P^1 × S
be the corresponding pencil. The base locus B={ f = g = 0}↪ S of the pencil has degree d_S by Bézout's theorem. By construction, the points p_i lie in B, whence B = { p_1,…,p_r}∪{p_0} where p_0 is a k-rational point of S.
Let π: X →P^1_k denote the projection. By construction, the fibers of π are precisely the curves in class -K_S passing through { p_1,…,p_r}. (These all then also pass through p_0.)
Let C ↪ S be a general fiber of π. By adjunction, C has canonical class K_C = K_S ⊗(C). Since C is in class -K_S, we have (C) ≅ -K_S, whence K_C is trivial and C has arithmetic genus 1. It follows that the fibers of π are either smooth or rational with a single node. Thus
N_S,D,σ = ∑_{u rational curve on S in class -K_S through the points p_1, …, p_r} _k(u)/k mass(p(u)),
where p(u) denotes the node of u(P^1). We will compute the right hand side directly using the ^1-Euler characteristic.
The projection X → S realizes X as the blow-up
X ≅_B S.
It follows from <cit.> that the ^1-Euler characteristic χ^^1(X) is computed
χ^^1(X) = χ^^1(S) + (χ^^1(P^1) - ⟨ 1 ⟩ ) χ^^1 (B )
= χ^^1(S) + ⟨ -1 ⟩χ^^1 ({ p_0, p_1,…,p_r} )
= χ^^1(S) + ⟨ -1 ⟩ + ⟨ -1 ⟩χ^^1 ({ p_1,…,p_r} )
= χ^^1(S) + ⟨ -1 ⟩ + ∑_i=1^r⟨ -1 ⟩_k(p_i)/k⟨ 1 ⟩,
whence
χ^^1(X) = χ^^1(S) + ⟨ -1 ⟩ + ⟨ -1 ⟩_k(σ)/k⟨ 1 ⟩.
We have a second calculation of χ^^1(X) using π and the work of the second named author M. Levine <cit.>. Comparing the two will compute N_S,-K_S,σ. Here is the second calculation. An isomorphism T P^1 ≅(1)^⊗ 2 defines a relative orientation of (π^* T^* P^1, T^* X), where T^*X denotes the cotangent bundle, or Kähler differentials. We may thus let n((π^* T^* P^1, T^* X)) be the Euler number. The morphism π determines a section d π of the bundle (π^* T^* P^1, T^* X) and the Euler number can be computed as a sum
n((π^* T^* P^1, T^* X)) = ∑_x: d π(x) = 0_x d π.
See <cit.> or <cit.> and <cit.> for compatibility checks. In the Witt group W(k):= (k)/Z(⟨ 1 ⟩ + ⟨ -1 ⟩), we have equalities
χ^^1(X) = n( T^* X ) = n((π^* T^* P^1, T^* X)) = ∑_x: d π(x) = 0_x d π,
where the first equality is <cit.> and the second is <cit.>. Comparison with the classical computation (where χ is multiplicative and the general fiber has Euler characteristic 0) shows that (<ref>) is also valid in (k).
For general (p_1,…,p_r), the zeros of d π are the nodes in the fibers of π, and for a node p, the local index _p d π is computed as _p d π= _k(p)/k⟨ -1 ⟩ m(p).
Let U⊂ X denote the open subset of the pencil given by U = {s ≠ 0}≅A^1 × S and let t be the coordinate on A^1. Choose (t,p) in U, and local analytic coordinates (x,y) on S for the completion of _S at p. In these coordinates, U is given by
{ t × (x,y) : t F(x,y) + G(x,y) = 0}.
The point (t,p) is a zero of d π if and only if π^* dt (t,p)= 0. Since t F(x,y) + G(x,y) = 0, we have that
dt F + t ∂_x F dx + t ∂_y F dy + ∂_x G dx + ∂_y G dy = 0
at p. Since we assume the pencil is smooth, we cannot have F = t ∂_x F + ∂_x G = t ∂_y F + ∂_y G = 0. It follows that π^* dt (t,p)= 0 if and only if
(t ∂_x F + ∂_x G) dx + ( t ∂_y F + ∂_y G ) dy = 0,
which occurs if and only if t ∂_x F + ∂_x G = t ∂_y F + ∂_y G = 0. This latter condition occurs if and only if p is a node of tF + G = 0. Thus the zeros of d π are the nodes in the fibers of π as claimed. Note that we have also shown that if p is a node in the fiber at t, then F(p) ≠ 0.
d π is a section of the vector bundle (π^* T^* P^1, T^* X ). Without loss of generality, we may assume that a zero of d π is in U, i.e. the zero is of the form (t,p). Consider the local trivialization of (π^* T^* P^1, T^* X ) corresponding to the basis {dt ↦ dx, dt ↦ dy }. This local trivialization is compatible with the local coordinates and the canonical relative orientation of (π^* T^* P^1, T^* X ) (coming from the orientability of P^1). Using these local coordinates and local trivialization, the section dπ corresponds to
( t ∂_x F + ∂_x G/F , t ∂_y F + ∂_y G /F)
because dπ (dt) = (t ∂_x F + ∂_x G)/F dx + ( t ∂_y F + ∂_y G )/F dy by (<ref>). Since t = -G/F, dπ likewise corresponds to the function
( -G ∂_x F + F ∂_x G/F^2 , -G ∂_y F +F ∂_y G /F^2) = (∂_x G/F, ∂_y G/F).
The Hessian (G/F) as a function of x and y equals the Hessian (t + G/F) because t is a fixed scalar. Moreover, since t+ G/F and its partials vanish at p, there is an equality of (t F + G) and (t + G/F) evaluated at p by the chain rule. By genericity, the fibers {(x,y): t F + G = 0 } of π have only nodes. The A^1-Milnor number of {t F + G = 0} at p is ⟨(t F + G) (p)⟩ and (t F + G) (p) ≠ 0. Thus (G/F)(p)≠ 0. Since p is a node, k ⊆ k(p) is a separable extension <cit.>, and we have that _p d π = _k(p)/k⟨(G/F)(p) ⟩ by <cit.>, whence _p d π = _k(p)/k⟨(t F + G) (p)⟩, proving the claim.
Combining (<ref>), (<ref>) and Lemma <ref>, we have:
N_S,-K_S,σ = ⟨ -1 ⟩χ^^1(S) + ⟨ 1 ⟩ + _k(σ)/k⟨ 1 ⟩
Note that it is not necessary to assume that S is ^1-connected for this computation to be valid.
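As an illustration of this formula (a standard consistency check, our own), take S = P^2, D = -K_S, so n = 8, and σ = (k,…,k):

```latex
% With \chi^{\mathbb{A}^1}(\mathbb{P}^2) = 2\langle 1\rangle + \langle -1\rangle
% and \operatorname{Tr}_{k(\sigma)/k}\langle 1\rangle = 8\langle 1\rangle:
\[
N_{S,-K_S,\sigma}
  = \langle -1\rangle\bigl(2\langle 1\rangle + \langle -1\rangle\bigr)
    + \langle 1\rangle + 8\langle 1\rangle
  = 10\langle 1\rangle + 2\langle -1\rangle,
\]
% of rank 12 (the classical count of rational cubics through 8 general
% points) and signature 8 (the corresponding real Welschinger count).
```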
http://arxiv.org/abs/2307.00847v1 | 20230703084119 | An analysis on stochastic Lanczos quadrature with asymmetric quadrature nodes | ["Wenhao Li", "Zongyuan Han", "Yixuan Huang", "Shengxin Zhu"] | math.NA | ["math.NA", "cs.NA", "65C05, 65D32, 65F15, 65F60, 65G99, 65Y20, 68Q10, 68Q87"]
An analysis on stochastic Lanczos quadrature with asymmetric quadrature nodes
[
August 1, 2023
==============================================================================
The stochastic Lanczos quadrature method has garnered significant attention recently. Upon examination of the error analyses given by Ubaru, Chen and Saad [https://epubs.siam.org/doi/abs/10.1137/16M1104974SIAM J. Matrix Anal. Appl. (SIAMX), 38:1075–1099 (2017)] and Cortinovis and Kressner [https://link.springer.com/article/10.1007/s10208-021-09525-9Found. Comput. Math (FoCM), 22:875-903 (2021)], certain notable inconsistencies arise. It turns out that the former's results are valid for cases with symmetric quadrature nodes and may not be adequate for many practical cases, such as estimating the log determinant of a matrix. This paper analyzes the probabilistic error bound of the stochastic Lanczos quadrature method for cases with asymmetric quadrature nodes. In addition, an optimized error allocation technique is employed to minimize the overall number of matrix-vector multiplications required by the stochastic Lanczos quadrature method.
log determinant approximation, trace estimation, stochastic Lanczos quadrature, asymmetric quadrature nodes
65C05, 65D32, 65F15, 65F60, 65G99, 65Y20, 68Q10, 68Q87
§ BACKGROUND
The computation of log determinants arises in many proliferating applications, such as Gaussian process kernel learning <cit.>, Bayesian interpolation <cit.>, Kullback-Leibler divergence <cit.> and linear mixed models <cit.>. Since data sets continue to grow in the current era, a fast and scalable solver to calculate or approximate the log determinants of large matrices is in demand. The stochastic Lanczos quadrature (SLQ) method is a promising approach to approximate the log determinants of large-scale positive definite matrices. It was perhaps first proposed by Bai, Fahey and Golub <cit.>, while the first rigorous theoretical analysis was developed by Ubaru, Chen and Saad <cit.>. Recently, certain inconsistencies have arisen between their results and the work by Cortinovis and Kressner <cit.>. Since the SLQ method is receiving increasing attention, a consistent and cogent result is necessary for further developing such a method <cit.>. This short note details these inconsistencies and develops a new error analysis for practical cases.
After a close examination of the SLQ method, it turns out that Ubaru, Chen and Saad's result holds for cases in which the quadrature nodes are symmetric <cit.>, whereas Cortinovis and Kressner pointed out that the quadrature nodes generated in the SLQ process may be asymmetric <cit.>, and the asymmetric cases perhaps arise more often in practice. For instance, <ref> illustrates that the quadrature nodes generated by SLQ for the matrix from <cit.> are asymmetric. Another point worth noticing is that the SLQ method is based on a quadrature rule corresponding to a discontinuous measure defined by the spectrum of the underlying matrix (see (<ref>) below). Unlike the continuous measure case, under such a discontinuous measure the quadrature error remains unchanged when an affine transformation from the reference domain [-1, 1] to the physical domain [λ_min, λ_max] is applied (this will be detailed in <ref>).
With these two points in mind, this paper updates the estimates of the query number and the Lanczos steps based on the asymmetric quadrature rule, and further develops a probabilistic relative error bound for log determinant estimation. The main results are presented in <ref>, <ref> and <ref>.
§ SLQ BASICS
For a real symmetric positive definite matrix A ∈ℝ^n × n with eigenvalues {λ_j}_j=1^n, the log determinant computation problem can be equivalently converted to a trace computation <cit.>
log A = log∏_j = 1^n λ_j = ∑_j = 1^nlogλ_j = (log (A)).
There are various approaches <cit.> to estimate the trace term in equation (<ref>), but this note focuses on the stochastic Lanczos quadrature (SLQ) method <cit.>. For readers who are interested in other methods, please refer to papers regarding inducing points methods <cit.>, randomized low-rank approximation methods <cit.>, Chebyshev polynomial-based methods <cit.>, and methods that utilize variance reduction in trace estimation, such as Hutch++ and improved algorithms based on it <cit.>.
The SLQ method utilizes the Girard-Hutchinson's trace estimator <cit.>, where the trace of logarithm matrix in equation (<ref>) can be approximated by an N-query Girard-Hutchinson's trace estimator
(log (A)) ≈_N(log(A)) = 1/N∑_i=1^N z^(i)^T log (A) z^(i) = n/N∑_i=1^Nv^(i)^Tlog (A) v^(i),
where z^(i) is the i^th query vector following the Rademacher distribution (i.e., every entry of the vector is either +1 or -1, each with probability 50%) and v^(i) = z^(i)/√(n) is the corresponding unit vector. Based on the fact that A is diagonalizable, its logarithm can be calculated by first taking the eigen-decomposition A=Q Λ Q^T and then log(A)=Q log(Λ)Q^T. Note that log(Λ) is a diagonal matrix with entries log(λ_j), j = 1,…, n. Let μ^(i) = Q^Tv^(i); then equation (<ref>) further reads
(log (A)) ≈n/N∑_i=1^Nv^(i)^Tlog (A) v^(i) = n/N∑_i=1^Nv^(i)^T Qlog (Λ) Q^Tv^(i)
=n/N∑_i=1^N μ^(i)^T log (Λ) μ^(i).
Based on <cit.>, the last term of equation (<ref>) can be reformulated as a summation of Riemann Stieltjes integrals I^(i) = ∫_a^blog(t) dμ^(i)(t), i = 1, …, N,
n/N∑_i=1^N μ^(i)^T log(Λ) μ^(i) = n/N∑_i = 1^N∑_j=1^n log(λ_j^(i))[μ_j^(i)]^2
= n/N∑_i=1^N ∫_a^b log(t) dμ^(i)(t) = n/N∑_i = 1^N I^(i),
where μ_j^(i) is the j^ th element of μ^(i) and μ^(i)(t) is the corresponding piecewise measure function of the i^th integral
μ^(i)(t)={ 0 , if t < λ_1=a,
∑_j=1^k-1[μ_j^(i)]^2 , if λ_k-1≤ t < λ_k, k=2,...,n,
∑_j=1^n[μ_j^(i)]^2 , if t ≥λ_n=b.
.
According to the Gauss quadrature rule <cit.>, a Riemann Stieltjes integral I^(i) can be approximated by an (m+1)-point Lanczos quadrature rule I_m^(i) so the last term in equation (<ref>) reads
n/N∑_i=1^N I^(i)≈n/N∑_i=1^N I_m^(i) = n/N∑_i=1^N∑_k=1^m+1τ_k^(i)log(θ_k^(i)),
where m is the Lanczos step, and {τ_k^(i)}_k=1^m+1, {θ_k^(i)}_k=1^m+1 are the quadrature weights and nodes. According to <cit.>, the weights {τ_k^(i)}_k=1^m+1 are computed as the squares of the first elements of the normalized eigenvectors {y_k^(i)}_k=1^m+1 of T_m+1^(i), and the nodes {θ_k^(i)}_k=1^m+1 are the corresponding eigenvalues of T_m+1^(i). <ref> <cit.> outlines how to compute the m+1 Lanczos quadrature nodes and weights given a real symmetric matrix and a randomly generated Rademacher vector.
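The referenced procedure is easy to sketch. The following is a minimal NumPy implementation of the (m+1)-point Lanczos quadrature described above; it is our own illustrative sketch (with full reorthogonalization for numerical stability), not the authors' reference code:

```python
import numpy as np

def lanczos_quadrature(A, v, m):
    """Run m+1 Lanczos steps from unit vector v and return the quadrature
    nodes (eigenvalues of T_{m+1}) and weights (squared first components
    of the normalized eigenvectors of T_{m+1})."""
    n = v.size
    Q = np.zeros((n, m + 1))
    alpha = np.zeros(m + 1)
    beta = np.zeros(m)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(m + 1):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)  # full reorthogonalization
        if j < m:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-14:          # invariant subspace found; stop early
                alpha, beta = alpha[: j + 1], beta[:j]
                break
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, Y = np.linalg.eigh(T)         # quadrature nodes theta_k
    tau = Y[0, :] ** 2                   # quadrature weights tau_k
    return theta, tau
```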
<ref> <cit.> summarizes how the SLQ method approximates the log determinant for any SPD matrix.
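Continuing the sketch above, the full SLQ approximation of the log determinant (the procedure the referenced algorithm summarizes) can be written as follows; function and argument names are our own illustrative choices:

```python
def slq_logdet(A, m, N, rng=None):
    """Approximate log det(A) by SLQ: average N Rademacher queries, each
    resolved by an (m+1)-point Lanczos quadrature (lanczos_quadrature above)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = A.shape[0]
    total = 0.0
    for _ in range(N):
        z = rng.choice([-1.0, 1.0], size=n)        # Rademacher query z^(i)
        theta, tau = lanczos_quadrature(A, z / np.sqrt(n), m)
        total += np.sum(tau * np.log(theta))       # quadrature I_m^(i)
    return n * total / N                           # (n/N) * sum_i I_m^(i)
```

On a small SPD test matrix, the output can be checked against numpy.linalg.slogdet.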
§ ERROR ANALYSIS
According to <cit.>, the computational cost of stochastic Lanczos quadrature is 𝒪((#nz(A)+nm^2)N), where #nz(A) denotes the number of non-zeros in symmetric matrix A and 𝒪(nm^2) represents the orthogonalization cost spent in the Lanczos iteration. Normally, the steps of Lanczos iteration m and the number of Rademacher distributed vectors N are expected to be much smaller than the matrix size n.
In the following of this paper, we denote the logarithm function by f for simplicity.
§.§ Error for symmetric quadrature nodes
Ubaru, Chen and Saad <cit.> study the absolute error between a Riemann Stieltjes integral and an (m+1)-point Lanczos quadrature.
<cit.>
Let g be analytic in [-1, 1] and analytically continuable in the open Bernstein ellipse E_ρ with foci ±1 and elliptical radius ρ > 1. Let M_ρ be the maximum of |g(t)| on E_ρ. Then the (m+1)-point Lanczos quadrature approximation satisfies
|I-I_m| ≤ (4M_ρ/(1 - ρ^-2)) ρ^-2m-2.
See <cit.>. It should be pointed out that the result of <ref> assumes that the quadrature nodes are symmetric.
As a direct consequence, they give a probabilistic absolute error bound for SLQ's log determinant approximation based on symmetric quadrature rules.
<cit.>
Let A∈ℝ^n× n be a real symmetric positive definite matrix with eigenvalues in [λ_min,λ_max], and κ = λ_max/λ_min be the condition number. Given ϵ, η∈ (0,1), if the Lanczos iteration step m and the query number N satisfy
m≥ (√(3κ)/4) ·log (K/ϵ),
and
N≥ (24/ϵ^2) ·(log (1+κ))^2 ·log(2/η),
where
K = 5κlog (2(κ + 1))/√(2κ +1),
then
ℙ{| log A - [log A ]_L | ≤ϵ n }≥ 1-η.
See <cit.>.
§.§ Error for asymmetric quadrature nodes
One should be careful that the Lanczos quadrature rule may not always be symmetric under certain measures, as pointed out by Cortinovis and Kressner <cit.>. In particular, the measure (<ref>) does not guarantee symmetry. As shown in <ref>, <ref> demonstrates that Lanczos quadrature nodes might be asymmetric. Therefore, <ref> derives an error estimation for the Lanczos quadrature with asymmetric quadrature nodes.
<cit.>
Let g be analytic in [-1, 1] and analytically continuable in the open Bernstein ellipse E_ρ with foci ±1 and elliptical radius ρ > 1. Let M_ρ be the maximum of |g(t)| on E_ρ. Then the (m+1)-point Lanczos quadrature approximation with asymmetric quadrature nodes satisfies
|I - I_m| ≤ (4M_ρ/(1-ρ^-1)) ρ^-2m-2.
The proofs of <ref> and <ref> differ by a single step, namely the calculation of an infinite geometric progression
|I - I_m| ≤∑_j = 2m+2^∞4M_ρ/ρ^j.
As the Lanczos quadrature rule is not always symmetric for the measure (<ref>) <cit.>, all terms of the progression must be retained, and the progression reads
∑_j=2m+2^∞ 4M_ρ/ρ^j = (4M_ρ/ρ^2m+2)·(1/(1-ρ^-1)) = (4M_ρ/(1 - ρ^-1)) ρ^-2m-2.
<ref> is in fact <cit.> but with a redefinition of the parameter m. In <cit.>, m corresponds to m quadrature nodes. Whereas, in <ref>, m corresponds to (m+1) quadrature nodes, as does <cit.>.
§.§ Error under affine transform
<ref> assumes that the function g is analytic in the reference domain [-1, 1] and then gives an error estimation in the physical domain [λ_min, λ_max]. To carry the analysis back to the physical interval, a proper affine transform should be included. The SLQ method is based on a quadrature rule corresponding to a discontinuous measure (<ref>). Unlike the continuous measure case, under such a discontinuous measure the quadrature error remains unchanged when using an affine transformation from the reference domain [-1, 1] to the physical domain [λ_min, λ_max] <cit.>.
Precisely, consider the integrals
I^(i)=∫_λ_min^λ_maxf(t)dμ^(i)(t)
and
J^(i)=∫_-1^1g(t)dμ^(i)(h(t)),
where μ^(i) are defined in (<ref>), h is an affine transform that shifts the interval [-1,1] to [λ_min,λ_max], g=f∘ h, and g satisfies the assumptions in <ref>. These integrals are in fact the finite sums of the form
J^(i)=∫_-1^1f(h(t))dμ^(i)(h(t)) =∑_t_j:h(t_j)=λ_jf(h(t_j))[μ^(i)(h(t_j))-μ^(i)(h(t_j)-)]
=∑_j=1^nf(λ_j)[μ^(i)(λ_j)-μ^(i)(λ_j-)]
=∫_λ_min^λ_maxf(t)dμ^(i)(t)=I^(i),
where μ^(i)(t-) represents the left limit of the function μ^(i) on t. One may proceed another computation with the delta function
J^(i)=∫_-1^1f(h(t))dμ^(i)(h(t)) =∫_-1^1f(h(t))[μ^(i)]'(h(t))h'(t)dt
=∫_-1^1f(h(t))∑_j=1^n[μ_j^(i)]^2δ(h(t)-λ_j)((λ_max-λ_min)/2)dt
=((λ_max-λ_min)/2)∑_t_j:h(t_j)=λ_j[μ_j^(i)]^2f(h(t_j))
=((λ_max-λ_min)/2)∑_j=1^n[μ_j^(i)]^2f(λ_j)
=((λ_max-λ_min)/2)I^(i),
which gives rise to an extra scalar (λ_max-λ_min)/2. Such a derivation assumes the validity of the chain rule of measures
dμ^(i)(h(t)) =dμ^(i)/dt(h(t))dh/dt(t)dt
=((λ_max-λ_min)/2)∑_j=1^n[μ_j^(i)]^2δ(t-h^-1(λ_j))dt
=((λ_max-λ_min)/2)dμ^(i)(h(t)),
where dμ^(i)(h(t)) is the measure with distribution function μ^(i)(h(t)) and dt is the Lebesgue measure. However, this will lead to a contradiction if λ_max-λ_min≠ 2. In fact, the chain rule of measures
dν/dλ=dν/dμdμ/dλ
is guaranteed by some assumptions on ν,μ and λ. One such assumption is given in <cit.>.
<cit.>
Suppose ν,μ,λ are σ-finite measures on (X,ℳ) such that
μ(E)=0 ⟹ν(E)=0, λ(E)=0 ⟹μ(E)=0, E∈ℳ,
then the Radon-Nikodym derivatives satisfy the chain rule
dν/dλ=dν/dμdμ/dλ,λ-a.e.
The measure in (<ref>) does not satisfy assumption (<ref>) since dμ^(i)(h(t)) assigns the positive measure [μ_j^(i)]^2 to the single point h^-1(λ_j), while the Lebesgue measure dt assigns zero to any single point. This may be the reason that (<ref>) fails.
§.§ Analyses for SLQ with asymmetric quadrature nodes
In light of the framework of <ref>, we derive an analysis for the SLQ method in <ref>.
Let A∈ℝ^n× n be a real symmetric positive definite matrix with eigenvalues in [λ_min,λ_max], and κ = λ_max/λ_min be the condition number. Given ϵ, η∈ (0,1), if the Lanczos iteration step m and the query number N satisfy
m ≥ (1/(2log(ρ_1))) ·log(K_ρ_1/ϵ),
and
N≥ (24/ϵ^2) ·(log (1+κ))^2 ·log(2/η),
where
ρ_1 = (√(2κ + 1) + 1)/(√(2κ + 1) - 1), K_ρ_1 = 8M_ρ_1/(ρ_1^2 - ρ_1), M_ρ_1 = 5 log (2(κ + 1)),
then
ℙ{| log A - [log A ]_L | ≤ϵ n }≥ 1-η.
Note that the parameter bound for the number of Rademacher distributed vectors N is the same as in the proof <cit.>. Thus, the focus is on the lower bound of parameters related to the error between the (m+1)-point Gauss quadrature I_m and the Riemann Stieltjes integral I. In <cit.>, the authors commenced with the function log(t/(λ_max+λ_min)) that is analytic on [λ_min, λ_max]. Here, one may simply focus on the analytic function f(t) = log(t) on [λ_min/(λ_max+λ_min), λ_max/(λ_max +λ_min)]. One can further find a linear transformation
h(t) = ((κ-1)/(2(κ + 1)))t + 1/2
to define
g(t) = f(h(t) ),
such that g is analytic on [-1, 1]. The semimajor axis was chosen as α_1 = (κ + 1)/κ so the elliptical radius is
ρ_1 = α_1 + √(α_1^2 - 1) = (√(2κ + 1) + 1)/(√(2κ + 1) - 1),
and g(t) is analytically continuable on E_ρ_1 with foci ± 1. Note that the maximum of g on the Bernstein ellipse E_ρ_1 equals the one of f on the scaled Bernstein ellipse E_ρ_1^*. Further, the maximum value of f, i.e., M_ρ_1, is calculated on E_ρ_1^* with foci 1/(κ + 1) and κ/(κ + 1)
max_t∈ E_ρ_1^*|f(t)|
≤max_t∈∂_E_ρ_1^*√((log |t|)^2 + π^2)
=√((log(1/(2κ)))^2+π^2)
≤ 5 log (2(κ + 1)) = M_ρ_1,
with the maximum obtained at h(-α_1) = 1/(2κ) according to the maximum modulus theorem <cit.>. Based on <ref>, the error between I_m and I has an upper bound
|I - I_m| ≤4M_ρ_1/(ρ_1^2 - ρ_1)ρ_1^-2m.
Then the absolute error between SLQ's approximation and the N-query Girard-Hutchinson estimator satisfies
|_N(f(A)) - [log A ]_L | ≤n/N∑_i=1^N |I^(i) - I_m^(i)| ≤ 4nM_ρ_1/((ρ_1^2-ρ_1)ρ_1^2m) ≤ϵ n /2.
Therefore, one should take
m ≥ (1/(2log(ρ_1))) log(K_ρ_1/ϵ),
K_ρ_1 = 8M_ρ_1/(ρ_1^2 - ρ_1), M_ρ_1 = 5 log (2(κ + 1)),
to guarantee the probabilistic error bound stated in <ref>. Other parameters' bounds are kept unchanged compared with <ref>.
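For concreteness, the bounds of <ref> can be evaluated numerically as follows (our own sketch; the ceiling operations and function name are illustrative choices):

```python
import numpy as np

def slq_parameters_absolute(kappa, eps, eta):
    """Smallest integers m and N satisfying the bounds of the theorem above,
    for condition number kappa, tolerance eps and failure probability eta."""
    rho1 = (np.sqrt(2 * kappa + 1) + 1) / (np.sqrt(2 * kappa + 1) - 1)
    M_rho1 = 5 * np.log(2 * (kappa + 1))
    K_rho1 = 8 * M_rho1 / (rho1 ** 2 - rho1)
    m = int(np.ceil(np.log(K_rho1 / eps) / (2 * np.log(rho1))))
    N = int(np.ceil(24 / eps ** 2 * np.log(1 + kappa) ** 2 * np.log(2 / eta)))
    return m, N
```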
Next, a probabilistic relative error bound for the log determinant estimation problem will be derived. We follow the framework of <cit.> and propose <ref> under the assumption λ_max < 1 to give
direct theoretical guidance on choosing SLQ parameters that control the relative error with a prescribed probability. This assumption is mild since any SPD matrix with λ_max > 1 can simply be scaled to have eigenvalues λ∈ (0,1).
Let A∈ℝ^n× n be a real symmetric positive definite matrix with eigenvalues in [λ_min,λ_max], λ_max<1, and κ = λ_max/λ_min be the condition number. Given ϵ, η∈ (0,1), if the Lanczos iteration step m and the query number N satisfy
m≥ (1/(2log(ρ_2))) ·log(K_ρ_2/(ϵlog(√(κ)/λ_max))),
and
N≥ (24/ϵ^2) ·log(2/η),
where
ρ_2 = (λ_max+√(2λ_minλ_max-λ_min^2))/(λ_max-λ_min),
and
K_ρ_2 = 8M_ρ_2/(ρ_2^2 - ρ_2), M_ρ_2 = √(log(λ_min/2)^2 + π^2),
then
ℙ{| log A - [log A ]_L | ≤ϵ |log A |}≥ 1-η.
Let
h(t) = ((λ_max-λ_min)/2)t+(λ_max+λ_min)/2
be a linear transformation and g(t)=f(h(t)); then g(t) is analytic on [-1, 1]. For the function f(t)=log(t), which is analytic on [λ_min,λ_max], note that g(t) has a singularity at t = -(λ_max+λ_min)/(λ_max-λ_min). Let α_2=λ_max/(λ_max-λ_min), which is smaller than the modulus of this singularity, be the semimajor axis length of E_ρ_2, with the elliptical radius
ρ_2 = (λ_max+√(2λ_minλ_max-λ_min^2))/(λ_max-λ_min),
then g(t) is again analytically continuable in the open Bernstein ellipse E_ρ_2 with foci ± 1.
In light of <ref>, one can derive
|_N(f(A))-[log A ]_L|≤ 4nM_ρ_2/((ρ_2^2-ρ_2)ρ_2^2m),
where |g(t)| ≤ M_ρ_2 inside E_ρ_2.
Due to the maximum-modulus theorem <cit.>, M_ρ_2 is computed as
max_t∈ E_ρ_2|g(t)| =max_t∈∂_E_ρ_2|log[((λ_max-λ_min)/2)t+((λ_max+λ_min)/2)]|
≤max_t∈∂_E_ρ_2√((log|((λ_max-λ_min)/2)t+((λ_max+λ_min)/2)|)^2+π^2)
=√((log(λ_min/2))^2+π^2),
because the maximum is obtained at t = -α_2 = -λ_max/(λ_max-λ_min). In order to guarantee
|_N(f(A))-[log A ]_L|≤ϵ/2|(f(A))|,
let the m-related term in equation (<ref>) be smaller than ϵlog( κ/λ_max^n ) /2, where
log( κ/ λ_max^n ) = |(n-1)log (λ_max) + log (λ_min)| ≤ |(f(A))|.
In this setting, let K_ρ_2=8 M_ρ_2/(ρ_2^2-ρ_2); if the Lanczos step m satisfies
m≥ (1/(2log(ρ_2))) ·log(K_ρ_2/(ϵlog(√(κ)/λ_max))),
then inequality (<ref>) holds.
With the convergence analysis for the stochastic trace estimator, first proposed by <cit.> and further developed in <cit.>, when the number of queries N satisfies N≥ (24/ϵ^2)·log(2/η), one obtains
ℙ{|tr(f(A))-tr_N(f(A))| ≤ϵ/2 |tr(f(A))| }≥ 1-η.
Thus,
1-η≤ ℙ{|tr(f(A))-tr_N(f(A))| ≤ϵ/2|tr(f(A))|}
≤ ℙ{|tr(f(A))-tr_N(f(A))|+|tr_N(f(A))-[log det A ]_L|
≤ϵ/2|tr(f(A))|+ϵ/2|tr(f(A))|}
≤ ℙ{|tr(f(A))-[log det A ]_L| ≤ϵ|tr(f(A))|},
that is,
ℙ{| log det A - [log det A ]_L | ≤ϵ| log det A | }≥ 1-η.
It should be pointed out that a larger semimajor axis implies a higher convergence rate of the Lanczos quadrature <cit.>. In <ref>, the semimajor axis α_2 = λ_max/(λ_max-λ_min) is greater than α_1 = (λ_max + λ_min)/λ_max in <ref> <cit.>. On the other hand, α_2 is smaller than α = (λ_max+λ_min)/(λ_max-λ_min) in <cit.>, but the log function blows up near t = -α, so α is invalid in this case.
§ FURTHER DISCUSSION
In <ref> and <ref>, both the Lanczos quadrature and the Girard-Hutchinson trace estimator are allotted an error budget of ϵ/2. However, one may reallocate the error tolerance so that the total number of matrix-vector multiplications (MVMs) is reduced. In brief, one should find a pair (α, β) such that
* |tr_N(f(A))-[log det A ]_L|≤ϵ/α |tr(f(A))|;
* ℙ{|tr(f(A))-tr_N(f(A))| ≤ϵ/β|tr(f(A))|}≥ 1 - η;
* 1/α + 1/β = 1, α > 1, β > 1,
and equation (<ref>) still holds. In this case, the steps of Lanczos iteration m and the number of Rademacher vectors N read
m ≥1/2log(ρ_2)log(nK_ρ_2/ϵlog( √(κ)/λ_max)),
and
N ≥ (6/ϵ^2)· (α/(α-1))^2·log(2/η),
where K_ρ_2 = 4 α M_ρ_2/(ρ_2^2 - ρ_2), and ρ_2 and M_ρ_2 are defined in <ref>. From the perspective of minimizing the MVMs with respect to α, the problem can be cast as
min_α>1 log( 4α M_ρ_2/ϵ(ρ_2^2 - ρ_2) log( √(κ)/λ_max)) (α/α-1)^2.
By introducing a constant C = 4 M_ρ_2 / (ϵ (ρ_2^2 - ρ_2) log( √(κ)/λ_max)), the problem is simplified to
min_α>1 log(C α) (α/(α-1) )^2.
The minimizer α^* of (<ref>) then satisfies
α^* = 2logα^* + 2 log C + 1,
which is obtained by setting the derivative of the objective function to zero. Numerical methods can be applied to solve equation (<ref>). <ref> summarizes the bounds of m and N whose product is minimized through this reallocation of error.
Let A∈ℝ^n× n be a real symmetric positive definite matrix with eigenvalues in [λ_min,λ_max], λ_max<1, and let κ = λ_max/λ_min be the condition number. Given ϵ, η∈ (0,1), let C = 4 M_ρ_2 / (ϵ (ρ_2^2 - ρ_2) log( √(κ)/λ_max)) and let α^* solve α^* = 2logα^* + 2 log C + 1. If the Lanczos iteration step m and the query number N satisfy
m ≥1/2log(ρ_2)log(nK_ρ_2/ϵlog( √(κ)/λ_max)),
and
N ≥ (6/ϵ^2)· (α^*/(α^*-1))^2·log(2/η),
where
ρ_2 = λ_max+√(2λ_minλ_max-λ_min^2)/λ_max-λ_min,
and
K_ρ_2 = 4 α^* M_ρ_2/(ρ_2^2 - ρ_2), M_ρ_2 = √(log(λ_min/2)^2 + π^2),
then
ℙ{| log det A - [log det A ]_L | ≤ϵ |log det A |}≥ 1-η.
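The fixed-point equation for α^* is easy to solve numerically. A minimal sketch by simple iteration follows; the starting guess and the illustrative value of C are our choices:

```python
import numpy as np

def optimal_alpha(C, tol=1e-12, max_iter=10_000):
    """Solve alpha* = 2*log(alpha*) + 2*log(C) + 1 by fixed-point iteration.

    The map alpha -> 2*log(alpha) + 2*log(C) + 1 has derivative 2/alpha, so
    the iteration contracts once alpha > 2; we assume C is large enough that
    the fixed point lies in that regime (the case of interest, small eps).
    """
    alpha = max(2.0, 2.0 * np.log(C) + 1.0)   # starting guess
    for _ in range(max_iter):
        alpha_new = 2.0 * np.log(alpha) + 2.0 * np.log(C) + 1.0
        if abs(alpha_new - alpha) < tol:
            return alpha_new
        alpha = alpha_new
    return alpha

print(optimal_alpha(C=50.0))   # C is illustrative; in practice computed from eps, rho_2, M_rho_2
```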
§ EXPERIMENTS
A controlled experiment is designed to compare the numbers of MVMs theoretically required to guarantee the probabilistic error bound (<ref>), based on the results of <ref>, <ref> and <ref>. The test is carried out on four matrices with different eigenvalue decay rates; such matrices arise, for instance, as kernel matrices of varying smoothness <cit.>. We also conduct experiments on two matrices with practical applications <cit.>:
* SPD matrix A ∈ℝ^5000× 5000 with eigenvalues λ_i = 0.99/i^0.5, i = 1, …, 5000;
* SPD matrix A_1 ∈ℝ^5000× 5000 with eigenvalues λ_i = 0.99/i, i = 1, …, 5000;
* SPD matrix A_2 ∈ℝ^5000 × 5000 with eigenvalues λ_i = 0.99/i^2, i = 1, …, 5000;
* SPD matrix A_3 ∈ℝ^5000 × 5000 with eigenvalues λ_i = 0.99/i^3, i = 1, …, 5000;
* SPD matrix from <cit.>;
* SPD matrix from <cit.>.
In the test, we set different levels of relative error ϵ^* = 0.01, 0.02, …, 0.2, with fixed failure probability η = 0.1. Note that <ref> does not directly analyze the relative error of the log determinant approximation, but sets ϵ n as the absolute error bound. To ensure the comparability of the results, ϵ in <ref> should be replaced by
ϵ = ϵ^* |log det A| / n.
However, this substitution is somewhat idealistic, since the exact log determinant in (<ref>) is not known in advance. Hence, the numerical results of <ref> in the comparison are biased to some extent. Despite this small defect, we may consider them as the benchmark.
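For reference, the estimator under test can be sketched compactly: each Rademacher probe drives m Lanczos steps, and the Gauss quadrature rule is read off from the eigen-decomposition of the resulting tridiagonal matrix. The following is a minimal illustration (plain Lanczos without reorthogonalization, assuming the Ritz values stay positive), not the optimized implementation used in the experiments:

```python
import numpy as np

def slq_logdet(A, m, N, seed=0):
    """Girard-Hutchinson + Lanczos quadrature estimate of log det(A), SPD A.

    For each Rademacher probe, m Lanczos steps build a tridiagonal T whose
    eigenvalues are the quadrature nodes and whose eigenvectors' first
    components give the quadrature weights.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    total = 0.0
    for _ in range(N):
        v = rng.choice([-1.0, 1.0], size=n)
        q = v / np.linalg.norm(v)
        q_prev = np.zeros(n)
        alphas, betas = [], []
        beta = 0.0
        for _ in range(m):
            w = A @ q - beta * q_prev
            alpha = q @ w
            w -= alpha * q
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            if beta < 1e-14:          # Krylov space exhausted
                break
            betas.append(beta)
            q_prev, q = q, w / beta
        T = np.diag(alphas) \
            + np.diag(betas[:len(alphas) - 1], 1) \
            + np.diag(betas[:len(alphas) - 1], -1)
        nodes, vecs = np.linalg.eigh(T)
        total += np.sum(vecs[0, :] ** 2 * np.log(nodes))   # Gauss rule for log
    return n * total / N
```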
For the matrix A with decay rate 0.5, <ref> displays the numbers of MVMs required by the different theorems to reach the varying probabilistic error bounds. For the other three synthetic matrices with different decay rates, the corresponding figures are very similar in shape, so we do not plot them here. <ref> shows that the error reallocation-based MVMs required by <ref> become smaller when the eigenvalues decay more slowly. In <ref> and <ref>, one may also observe that finding a suitable error allocation reduces the number of MVMs needed to approximate the log determinant of real matrices from the University of Florida sparse matrix collection <cit.>.
§ CONCLUSION
In this paper, we reinvestigate the SLQ method for the log determinant approximation problem. We articulate why the inconsistencies arise between <cit.> and <cit.>. It turns out that Ubaru, Chen and Saad's analysis of SLQ is valid for symmetric quadrature nodes, but asymmetric cases seem to arise more often in practice. An unnecessary scaling factor in the quadrature error occurs in the proof of <cit.> when the interval shifts between the physical domain and the reference domain; however, this minor flaw does not affect the decisive conclusion of <cit.>. It is within this framework that we develop an analysis of SLQ for the asymmetric quadrature node case. Such updated analyses will be required for our further study of the SLQ method <cit.>. Further work detailing when a Gauss quadrature rule is symmetric will be released in <cit.>.
In this paper, we also apply an optimized error allocation technique that reduces the theoretical minimum number of total MVMs. Experimental results demonstrate that this technique is valid and may be of practical significance. Despite the advancements achieved in this field, a large discrepancy persists between the theoretically determined number of MVMs and the actual number required, highlighting the need for further investigation to bridge this gap and optimize computational efficiency.
|
http://arxiv.org/abs/2307.02246v1
|
20230705124146
|
S3C: Self-Supervised Stochastic Classifiers for Few-Shot Class-Incremental Learning
|
[
"Jayateja Kalla",
"Soma Biswas"
] |
cs.CV
|
[
"cs.CV"
] |
S3C: Self-Supervised Stochastic Classifiers for FSCIL
J. Kalla et al.
Department of Electrical Engineering,
Indian Institute of Science, Bangalore, India.
{jayatejak, somabiswas}@iisc.ac.in
S3C: Self-Supervised Stochastic Classifiers for Few-Shot Class-Incremental Learning
Jayateja Kalla Soma Biswas
August 1, 2023
===================================================================================
Few-shot class-incremental learning (FSCIL) aims to learn progressively about new classes with very few labeled samples, without forgetting the knowledge of already learnt classes.
FSCIL suffers from two major challenges: (i) over-fitting on the new classes due to limited amount of data, (ii) catastrophically forgetting about the old classes due to unavailability of data from these classes in the incremental stages. In this work, we propose a self-supervised stochastic classifier (S3C)[code: <https://github.com/JAYATEJAK/S3C>] to counter both these challenges in FSCIL.
The stochasticity of the classifier weights (or class prototypes) not only mitigates the adverse effect of the absence of a large number of samples of the new classes, but also of the absence of samples from previously learnt classes during the incremental steps. This is complemented by the self-supervision component, which helps to learn features from the base classes that generalize well to unseen classes encountered in the future, thus reducing catastrophic forgetting. Extensive evaluation on three benchmark datasets using multiple evaluation metrics shows the effectiveness of the proposed framework. We also experiment on two additional realistic FSCIL scenarios, namely where the amount of annotated data available for each of the new classes can differ, and where the number of base classes is much smaller, and show that the proposed S3C performs significantly better than the state-of-the-art in all these challenging scenarios.
§ INTRODUCTION
In recent years, Deep Neural Networks (DNN) have shown significant performance improvement on various computer vision applications <cit.>.
Usually, the DNN models require enormous amount of annotated data from all the classes of interest to be available for training.
In real-world, since data from different classes may become available at different instants of time, we want the model to learn about the new classes incrementally without forgetting about the old classes, which is precisely the task addressed in Class-Incremental Learning (CIL).
CIL approaches are very useful and practical, not only because it is computationally expensive and time-consuming to retrain the model from scratch, but also because data from the previous classes may not be available due to storage and privacy issues.
Since collecting a large amount of annotated data for all the new classes is also very difficult, the more challenging but realistic few-shot class-incremental learning (FSCIL) setting, where the new classes have only a few labeled samples per class, has recently been gaining increasing attention <cit.>.
In FSCIL, a model is first learnt using a set of base classes with large number of labeled examples per class.
At each incremental step (task), the model has access to a few labeled samples of the new classes and a single prototype for each of the previously learnt classes.
The goal is to learn a unified classifier to recognize the old as well as the new classes, without having access to any task labels.
This helps the model to quickly learn about the new classes without requiring to collect and annotate large amounts of data for the new classes.
FSCIL faces two major challenges, namely overfitting due to limited samples for the new classes, and catastrophic forgetting of the already learnt classes due to absence of old classes data at the incremental steps.
In this work, we propose a novel framework, S3C (Self-Supervised Stochastic Classifier) to simultaneously address both these challenges in the FSCIL setting.
Unlike the standard classifiers, stochastic classifiers (SC) are represented by weight distributions, i.e. a mean and variance vector <cit.>.
Thus, each classifier weight sampled from this distribution is expected to correctly classify the input samples.
We show, for the first time, that SCs learnt for both the base and new classes can significantly reduce the over-fitting problem on the new classes in the FSCIL task.
It can also arrest the catastrophic forgetting of the previously learnt classes to a certain extent.
As is common in most FSCIL approaches <cit.>, we propose to freeze the feature extractor and learn only the SC at each incremental step. In order to compute features from the base classes that generalize to unseen classes, inspired by recent works <cit.>, we use self-supervision along with the SC, giving our final S3C framework. As expected, this helps to significantly mitigate the effect of catastrophic forgetting, while at the same time retaining the advantage on the new classes.
To this end, our contributions are as follows:
* We propose a novel framework, termed S3C (Self-Supervised Stochastic Classifier) to address the FSCIL task.
* We show that stochastic classifiers can help to significantly reduce over-fitting on the new classes with limited amount of data for FSCIL.
* We also show that self-supervision with stochastic classifier can be used to better retain the information of the base classes, without hindering the enhanced performance of the stochastic classifiers for the new classes.
* We set the new state-of-the-art for three benchmark datasets, namely CIFAR100 <cit.>, CUB200 <cit.> and miniImageNet <cit.>.
* We also propose and evaluate on two additional, realistic FSCIL settings, namely (i) FSCIL-im (FSCIL-imbalanced) - where the new classes may have different numbers of samples per class, and (ii) FSCIL-lb (FSCIL-less base) - where there are fewer base classes, which further justifies the effectiveness of the proposed S3C framework.
§ RELATED WORKS
Here, we provide some pointers to the related work in literature.
Class-Incremental Learning (CIL):
The goal of CIL is to learn new classes progressively without any task information. Due to the plentiful annotated new-class data, mitigating catastrophic forgetting is the main challenge. LwF <cit.> proposed to use knowledge distillation <cit.> to alleviate catastrophic forgetting. iCaRL <cit.> showed that a nearest class mean (NCM) classifier using old-class exemplars yields robust classifiers for CIL. EEIL <cit.> used knowledge distillation to remember old classes and cross-entropy to learn new classes in end-to-end training. UCIR <cit.> proposed cosine-based classifiers and used feature-space distillation and an inter-class separation margin loss to mitigate catastrophic forgetting. Several state-of-the-art works <cit.> proposed different techniques to address the class imbalance problem in CIL, such as rescaling scores or balanced fine-tuning of classifiers.
Some of the recent works <cit.> have focused on non-exemplar based methods, with no access to exemplars from the old classes.
Few-Shot Class-Incremental Learning (FSCIL): Recently, there has been a significant focus on the more realistic and challenging FSCIL task, where very few samples per class are available for training at each incremental task.
Tao et al. <cit.> proposed this protocol and used a neural gas network architecture to preserve the feature topology of the base and new classes.
Mazumder et al. <cit.> proposed to identify unimportant parameters in the model based on their magnitudes and learn only these parameters during the incremental tasks.
The works proposed in <cit.> focus on learning robust manifolds by regularizing feature space representations.
The works in <cit.> used graph-based networks for old classes' knowledge retention.
Recently, CEC <cit.> proposed a meta-learning strategy and achieved state-of-the-art results in the FSCIL setting.
Self-Supervised Learning (SSL): SSL uses predefined pretext tasks to learn features from unlabeled data.
Different pretext tasks have been proposed like image rotations <cit.>, image colourization <cit.>, clustering <cit.>, and solving jigsaw puzzles from image patch permutations <cit.>.
These features can notably improve the performance of downstream tasks like few-shot learning <cit.>, semi-supervised learning <cit.>, to improve the model robustness <cit.>, class imbalance <cit.>, etc.
Recently, Lee et al. <cit.> used SSL to improve the performance for supervised classification, by augmenting the original labels using the input transformations.
In this work, we show that SSL <cit.> can be used very effectively for the FSCIL task.
Stochastic Neural Networks:
Traditional neural networks cannot model uncertainty well due to their deterministic nature <cit.>.
Stochastic neural networks <cit.> give robust representations in the form of distributions.
Subedar et al. <cit.> proposed uncertainty aware variational layers for activity recognition.
Recently, it has been used for person re-identification <cit.> and unsupervised domain adaptation <cit.> tasks.
§ PROBLEM DEFINITION AND NOTATIONS
Here, we explain the FSCIL task, which consists of a base task and several incremental stages, and also the notations used in the rest of the paper.
In the base task, the goal is to learn a classifier using large number of labeled samples from several base classes.
At each incremental step, using a few labeled samples per new class and a single class prototype of the old (previously learnt) classes, the model needs to be updated such that it can classify both the old and the new classes.
Let 𝒟^(0) denote the base task which contains large number of annotated data from classes 𝒞^(0).
Let the incremental task data be denoted as
{𝒟^(1), ..., 𝒟^(t), ..., 𝒟^(𝒯)}, and the corresponding label spaces be denoted as 𝒞^(t), where t = 1, ..., 𝒯.
Thus, the model learns a total of 𝒯 tasks incrementally, and there is no overlap in the label space between different tasks, i.e. C^(t)∩ C^(s) = ∅ (t ≠ s).
Once the model has learned on the data 𝒟^(t), it has to perform well on all the classes seen so far i.e {C^(0)∪ C^(1)∪…∪ C^(t)}.
§ PROPOSED METHOD
Here, we describe the proposed S3C framework for the FSCIL task.
In many of the initial FSCIL approaches <cit.>, the main focus was to develop novel techniques for the incremental step to prevent catastrophic forgetting and overfitting.
Recently, CEC <cit.> showed that the base network training has a profound effect on the performance of the incremental tasks.
Using appropriate modifications while learning the base classifier can significantly enhance not only the base class accuracies, but also the performance for the incrementally added classes.
Even without any fine-tuning during the incremental steps, CEC reports the state-of-the-art results for FSCIL.
In the proposed S3C framework, we combine the advantages of both these techniques and propose to not only improve the base classifier training, but also update all the classifiers during the incremental steps.
First, we describe the two main modules of S3C, namely Stochastic Classifier and Self-Supervision and then discuss how to integrate them.
Stochastic Classifier: One of the major challenges in FSCIL is the few number of annotated samples that is available per class at each incremental step.
This may result in overfitting on the few examples and learning classification boundaries which do not generalize well on the test data.
Now, we discuss how stochastic classifiers can be used to mitigate this problem.
In this work, we use cosine similarity between the features and the classifier weights to compute the class score for that particular feature.
For a given input image 𝐱 from class C_i, let us denote its feature vector as f_θ(𝐱), where the parameters of the feature extractor f is denoted by θ.
Let the classifier weights corresponding to class C_i be denoted as ϕ_i.
Then the cosine similarity of the feature with this classifier weight can be computed as ⟨ϕ̄_i, f̄_θ(𝐱)⟩, where ū=u/||u||_2 denotes the ℓ_2-normalized version of a vector u.
Fig. <ref>(a) shows the normalized feature and classifier weights for two classes, C_i and C_j. The green shaded area denotes the region where f_θ(𝐱) will be correctly classified to class C_i, and m_ij is the classification boundary between the two classifiers (considering only the upper sector between ϕ_i and ϕ_j).
Now, instead of a single classifier, let us learn two different classifiers for each class (eg. ϕ^1_i and ϕ^2_i for class C_i).
In Fig. <ref> (b), {m_ij^11,m_ij^12,m_ij^21,m_ij^22} are the four classification boundaries for the four combinations of classifiers.
To ensure that the input data is correctly classified using all the classifiers, the feature embedding f_θ(𝐱) has to move closer to the classifier of its correct class, thus making the samples of a class better clustered and further from samples of other classes.
But it is difficult to choose (and compute) how many classifiers should be used.
By using a stochastic classifier (Fig. <ref> (c)), we can ensure that we effectively have infinitely many such classifiers around the mean classifier.
Using a stochastic classifier ψ = {μ,σ} at the classification head resembles the use of multiple classifiers, where μ and σ denotes the mean and variance of the classifier ψ.
For a given input image 𝐱, the output score of the stochastic classifier is proportional to ⟨μ̂, f_θ(𝐱)⟩ (with both vectors ℓ_2-normalized), where μ̂ = μ + 𝒩(0,1) ⊙σ is the classifier weight sampled from the distribution; a code sketch is given after the list below.
This has similarity with feature augmentations which are also commonly used <cit.>.
There are two main advantages of using a stochastic classifier instead of feature augmentations:
(1) Instead of using a fixed variance for the features (which has to be manually calculated), the means and variances used in the proposed framework are automatically learnt in an end-to-end manner.
(2) The means and variances learnt using the base classes also help to initialize the corresponding parameters for the new classes in a semantically meaningful manner as explained later.
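To make the sampling step concrete, a minimal PyTorch-style sketch of a stochastic cosine classifier follows. This is an illustration of the idea rather than the authors' released code; the class name, the initialization values, and the per-class σ parameterization are our simplifications:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticCosineClassifier(nn.Module):
    """Cosine classifier whose weights are distributions (mu, sigma).

    During training a weight is sampled per forward pass via the
    reparameterization mu + eps * sigma; at test time the mean is used.
    """
    def __init__(self, feat_dim, num_classes, eta=16.0):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.sigma = nn.Parameter(torch.full((num_classes, feat_dim), 0.1))
        self.eta = eta                                # softmax scaling factor

    def forward(self, features, stochastic=True):
        w = self.mu
        if stochastic:
            w = w + torch.randn_like(w) * self.sigma  # sample a classifier
        w = F.normalize(w, dim=1)                     # l2-normalize weights
        f = F.normalize(features, dim=1)              # l2-normalize features
        return self.eta * f @ w.t()                   # scaled cosine logits
```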
Self-supervision: At the incremental stages, due to the presence of only a few examples from the new classes, most FSCIL approaches either fix the feature extractor after learning the base classes <cit.> or fine-tune it with a very small learning rate <cit.>, so that it does not change significantly.
This reduces catastrophic forgetting as well as overfitting.
In our work, we fix the feature extractor after learning the base classes and only fine-tune the classifiers.
To make the base feature extractor generalize well to unseen data, we propose to use self-supervision for the base classifier training as well as during the incremental learning stages.
Since self-supervised training does not use class labels, more generic features can be learnt, which can generalize well to unseen classes. SSL has been used successfully for several tasks <cit.>, including the standard class-incremental setting <cit.>. Here, we use the recently proposed SSL approach <cit.>, where image augmentations are used to generate artificial labels, which are used to train the classification layer.
For a given input image 𝐱, let the augmented versions be denoted as 𝐱_r=t_r(𝐱), where {t_r}_r=1^M denotes pre-defined transformations.
In this work, we use images rotated by {0^∘,90^∘,180^∘,270^∘}, i.e. (M=4) as the augmented images. We show that the feature extractor learnt using self-supervision performs very well in the incremental stages.
First, we describe the integrated S3C loss which is used in the training process.
Construction of S3C loss:
At task t, C^(s)_i denotes the i^th class in task s∈{0,1,..,t}.
Then its corresponding stochastic classifier is denoted as ψ^(s)_i with mean μ^(s)_i and variance σ^(s)_i. To integrate the stochastic classifiers with self-supervision, for each class, we create four classifier heads corresponding to each of the four rotations as in <cit.>.
In this work, we want to jointly predict the class and its rotation r = {0^∘,90^∘,180^∘,270^∘}, thus we denote the final classifiers as ψ^(s)_i, r, with individual means (μ^(s)_i,r), but with the same class-wise variance (σ^(s)_i).
Since the same data is present in different rotations, we enforce that the classifiers for the same class share the same variances, which reduces the number of parameters to be computed.
Thus, the joint softmax output of a given sample 𝐱 for C^(s)_i class at r^th rotation is given by
ρ^(s)_ir(𝐱;θ,ψ^(0:t)) = exp(η ⟨μ̂^(s)_ir, f_θ(𝐱) ⟩)/∑_j=0^t∑_k=1^|C^(j)|∑_l=1^Mexp(η ⟨μ̂^(j)_kl, f_θ(𝐱) ⟩),
where μ̂^(j)_kl = μ^(j)_kl + 𝒩(0,1) ⊙σ^(j)_k represents the weight sampled from the stochastic classifier ψ^(j)_kl, and η is a scaling factor used to control the peakiness of the softmax distribution.
Finally, the S3C training objective for a training sample 𝐱 with label y from task s can be written as
ℒ_S3C(𝐱,y;θ,ψ^(0:t)) = -1/M∑_r=1^Mlog(ρ^(s)_yr(𝐱_r;θ,ψ^(0:t)))
This implies that the input image is transformed using the chosen image transformations (4 rotations in this work) and the loss is combined for that input.
Note that the first transformation corresponding to 0^∘ is the identity transformation (i.e. the original data itself).
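A sketch of this joint class-and-rotation objective, assuming a classifier with one head per (class, rotation) pair ordered class-major (the helper name and ordering convention are ours):

```python
import torch
import torch.nn.functional as F

def s3c_loss(backbone, classifier, x, y, num_rot=4):
    """Average the joint (class, rotation) cross-entropy over the M = 4
    rotated copies of a batch; head index c*num_rot + r is assumed to
    correspond to class c at rotation r."""
    loss = 0.0
    for r in range(num_rot):
        x_r = torch.rot90(x, k=r, dims=(2, 3))   # rotate by 90*r degrees
        logits = classifier(backbone(x_r))        # (B, num_classes*num_rot)
        joint_y = y * num_rot + r                 # joint (class, rotation) label
        loss = loss + F.cross_entropy(logits, joint_y)
    return loss / num_rot
```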
We now describe the base and incremental stage training of the S3C framework (Fig. <ref>).
§.§ Base Network Training of S3C
In FSCIL setting, we assume that we have access to several base classes with sufficient number of annotated data for base training.
Given the data from the base classes C^(0), we use a base network (ResNet20 for CIFAR100 and ResNet18 for CUB200 and miniImageNet) along with a Graph Attention Network inspired by <cit.><cit.>.
We train the base network, i.e. the feature extractor with parameters θ and the stochastic classifiers corresponding to the base classes (ψ^(0)) with S3C objective ℒ_base=ℒ_S3C(𝐱,y;θ,ψ^(0)), with the base training data given by {𝐱,y}∈𝒟^(0).
The proposed objective improves the performance of the base classes, in addition to that of the new classes that will be encountered in the incremental stages as we will observe in the experimental evaluation.
§.§ Preparing for the incremental step
After the base classifier training, the training data of the base classes may not be available any longer.
This may be due to limited storage capacity, privacy issues, etc.
After the first incremental step, we want the unified classifier to perform well on the base as well as on the new classes.
For this, to mitigate catastrophic forgetting of the base classes, their class prototypes are stored as is the common practice <cit.><cit.>.
These stored class prototypes can be treated as class representatives of the base classes and thus can be used for updating the network at the incremental step.
The class prototypes are computed by averaging the training features given by the feature extractor (f_θ(·)) for each class.
This is done not only at the end of the base training, but after each incremental step as well, i.e. after incremental step t, we store the class prototypes of all the classes that the model has encountered till step t.
The class prototype set 𝐏^(t) contains the classes prototypes encountered in task t.
The class prototype P_i^(t) after task t for i^th class is calculated as
P_i^(t) = 1/N^(t)_i∑_n=1^N^(t)𝕀_(y_n=i) f_θ(𝐱_n)
where N^(t) is the number of samples in the dataset 𝒟^(t), N^(t)_i is the number of samples in the i^th class of task t, and {𝐱_n, y_n}_n=1^N^(t)∈𝒟^(t).
The indicator variable 𝕀_(y_n=i) will be 1 if the sample belongs to the i^th class (i.e. y_n=i).
Thus, the class prototype set is updated at the end of each task.
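A minimal sketch of this prototype computation, assuming a frozen feature extractor and a standard (image, label) data loader:

```python
import torch

@torch.no_grad()
def class_prototypes(backbone, loader, num_classes, feat_dim, device="cpu"):
    """Average the backbone features of each class to get its prototype."""
    sums = torch.zeros(num_classes, feat_dim, device=device)
    counts = torch.zeros(num_classes, device=device)
    for x, y in loader:
        f = backbone(x.to(device))                  # (B, feat_dim)
        sums.index_add_(0, y.to(device), f)         # accumulate per class
        counts += torch.bincount(y.to(device), minlength=num_classes)
    return sums / counts.clamp(min=1).unsqueeze(1)  # (num_classes, feat_dim)
```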
§.§ Incremental Step
Here, we will discuss the training process involved in each incremental step.
As in <cit.> <cit.>, we propose to freeze the already learnt feature extractor, since the self-supervision has ensured that it will generalize well to previously unseen classes.
This also helps in mitigating the catastrophic forgetting and over-fitting problems.
In our work, we propose to update the classifiers of the previous as well as the new classes with the stored class-prototypes and the few examples of the new classes.
This will help the model better adapt to the new set of classes.
Now, we discuss how to initialize the stochastic classifiers for the new classes.
Initialization of the Stochastic Classifiers of the new classes: For the new classes, we need to initialize the stochastic classifiers before fine-tuning. The means are initialized with the centroid of the features for that class (calculated using the previous model).
We initialize the variances of the new classes using that of the most semantically similar class from the base set.
Semantic similarity is computed using GloVe embeddings <cit.> of the base and new class names.
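This initialization can be sketched as follows (a hypothetical helper; `glove` stands for a precomputed name-to-vector lookup, and the actual embedding pipeline may differ):

```python
import numpy as np

def init_new_class_sigma(new_name, base_names, base_sigmas, glove):
    """Initialize the variance of a new class from its most semantically
    similar base class; `glove` is a plain name -> np.ndarray lookup and
    `base_sigmas[i]` is the learnt variance vector of base class i."""
    v = glove[new_name]
    sims = [np.dot(v, glove[b]) / (np.linalg.norm(v) * np.linalg.norm(glove[b]))
            for b in base_names]
    return base_sigmas[int(np.argmax(sims))].copy()  # copy, do not share storage
```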
Fine-tuning the classifiers:
With this initialization, we fine-tune the classifiers of the new as well as the previous classes using the few labeled examples of the new classes and the stored class-prototypes of the previous classes. Let q ∈𝐏^(0:t-1) be a prototype from any old class; then the joint softmax output of the stochastic classifier for the i^th class and r^th rotation (task s) is
ζ^(s)_ir(q;ψ^(0:t)) = exp(η ⟨μ^(s)_ir, q ⟩)/∑_j=0^t∑_k=1^|C^(j)|∑_l=1^M exp(η ⟨μ^(j)_kl, q ⟩)
For fair comparison with the state-of-the-art approaches, we only store a single class-prototype per class corresponding to the original images (i.e. 0^∘ rotation).
Thus for the previous classes, only the parameters of the stochastic classifier corresponding to the 0^∘ rotation are updated.
To mitigate catastrophic forgetting, we use cross entropy loss based on the class prototypes as
ℒ_proto(q,y̌,ψ^(0:t)) = -log(ζ^(s)_y̌r(q;ψ^(0:t)))
where y̌ is the class label of the prototype in task s.
For the new classes, very few labeled samples per class are available.
Since the few examples cannot cover the entire distribution, generalization to new classes is quite challenging.
As discussed before, we propose to use stochastic classifiers which mitigates the problem of overfitting and generalizes well to the new classes even with few examples.
To this end, we calculate a loss as in equation (<ref>) on the new task data using stochastic classifiers.
Finally, the total loss at each incremental task is given by
ℒ^(t)_inc = λ_1·ℒ_proto(q,y̌,ψ^(0:t)) + λ_2·ℒ_S3C(𝐱,y;θ,ψ^(0:t))
where {𝐱,y}∈𝒟^(t) and t>0.
λ_1, λ_2 are hyper-parameters to balance the performance between old and new classes.
At the end of task t, we have the learnt classifiers for all the classes seen so far, namely ψ^(0), , ψ^(t).
§ TESTING PHASE
At inference time, the test image 𝐱 can belong to any of the classes seen so far.
To utilize the learnt classifiers effectively, we generate transformed versions of 𝐱 and aggregate all the corresponding scores.
Thus, the aggregate score for the i^th class in task s is computed as
z^(s)_i=1/M∑_r=1^Mη ⟨ μ^(s)_ir, f_θ(𝐱_r)⟩.
Then the aggregated probability used for predicting the class is given by
P_agg(i,s|𝐱,θ,ψ^(0:t)) = exp(z^(s)_i)/∑_j=0^t∑_k=1^|C^(j)|exp(z^(j)_k)
Thus, the final prediction for the test sample 𝐱 is
i,s = argmax_i,s P_agg(i,s|𝐱),
which implies that the input 𝐱 belongs to i^th class of task s.
This aggregation scheme improves the model performance significantly.
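The aggregation above can be sketched as follows, using the classifier means at test time; the head ordering matches the earlier sketches and is our assumption:

```python
import torch

@torch.no_grad()
def predict(backbone, classifier, x, num_classes, num_rot=4):
    """Aggregate the scaled cosine scores over the 4 rotations (no sampling
    at test time) and predict the class with the highest average score."""
    scores = torch.zeros(x.shape[0], num_classes, device=x.device)
    for r in range(num_rot):
        x_r = torch.rot90(x, k=r, dims=(2, 3))
        logits = classifier(backbone(x_r), stochastic=False)
        scores += logits.view(-1, num_classes, num_rot)[:, :, r]
    return (scores / num_rot).argmax(dim=1)
```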
§ EXPERIMENTAL EVALUATION
Here, we describe the extensive experiments performed to evaluate the effectiveness of the proposed S3C framework.
Starting with a brief introduction of the datasets, we will discuss the performance of the proposed framework on three standard benchmark datasets.
In addition, we also discuss its effectiveness on two real and challenging scenarios, where (i) the data may be imbalanced at each incremental step and (ii) fewer classes may be available during base training.
We also describe the ablation study to understand the usefulness of each module.
Datasets Used: To evaluate the effectiveness of the proposed S3C framework, we perform experiments on three benchmark datasets, namely CIFAR100 <cit.>, miniImageNet <cit.> and CUB200 <cit.>.
CIFAR100 <cit.> contains 32×32 RGB images from 100 classes, where each class contains 500 training and 100 testing images.
We follow the same FSCIL dataset splits as in <cit.>, where the base task is trained with 60 classes and the remaining 40 classes are learned in eight incremental tasks in a 5-way 5-shot setting.
Thus, there are a total of 9 training sessions (i.e., base + 8 incremental).
MiniImageNet <cit.> is a subset of the ImageNet dataset and contains 100 classes with images of size 84×84. Each class has 600 images, 500 for training and 100 for testing. We follow the same task splits as in <cit.>, where 60 classes are used for base task training and the remaining 40 classes are learned incrementally in 8 tasks.
Each task contains 5 classes with 5 images per class.
CUB200 <cit.> is a fine-grained birds dataset with 200 classes.
It contains a total of 6000 images for training and 6000 images for testing. All the images are resized to 256×256 and then cropped to 224×224 for training.
We used the same data splits proposed in <cit.>, where there are 100 classes in the base task, and each of the 10 incremental tasks is learned in a 10-way 5-shot manner.
Implementation details: For fair comparison, we use the same backbone architecture as the previous FSCIL methods <cit.>.
We use ResNet20 for CIFAR100 and ResNet18 for miniImageNet and CUB200 as in <cit.>.
Inspired by CEC <cit.>, we used the same GAT layer at the feature extractor output for better feature representations. We trained the base network for 200 epochs with a learning rate of 0.1 and reduced it to 0.01 and 0.001 after 120 and 160 epochs for CIFAR100 and miniImageNet datasets.
For CUB200, the initial learning rate was 0.03 and was decreased to 0.003, 0.0003 after 40 and 60 epochs.
We freeze the backbone network and fine-tune the stochastic classifiers for 100 epochs with a learning rate of 0.01 for CIFAR100 and miniImageNet and 0.003 for CUB200 at each incremental step.
The base network was trained with a batch size of 128, and for the newer tasks, we used all the few-shot samples in a mini-batch for incremental learning.
All the experiments are run on a single NVIDIA RTX A5000 GPU using PyTorch. We set η=16, λ_1=5 and λ_2=1 for all our experiments.
Evaluation protocol: We evaluate the proposed framework using the following three evaluation metrics as followed in the FSCIL literature:
(1) First, at the end of each task, we report the Top1 accuracy <cit.> of all the classes seen so far, which is the most commonly used metric;
(2) To be practically useful, the model needs to perform well on all the tasks seen so far (i.e. have a good performance balance between the previous and new tasks).
To better capture this performance balance, inspired from <cit.>, recent FSCIL works <cit.> propose to use the Harmonic Mean (HM) of the performance of the previous and new classes at the end of each incremental task.
If t denotes the task id, t∈{0,1,...,𝒯}, let Acc_n^t denote the model accuracy on test data of task n after learning task t, where n∈{0,1,2,...,t}.
Then at the end of task t, to analyze the contribution of base and novel classes in the final accuracy, harmonic mean is calculated between Acc_0^t and Acc_1:t^t.
Inspired by CEC, we also report performance dropping rate (PD=Acc_0^0-Acc_0:𝒯^𝒯) that measures the absolute difference between initial model accuracy after task 0 and model accuracy at the end of all tasks 𝒯.
Here, we report the performance of S3C framework for the standard FSCIL setting on all the three benchmark datasets.
Note that all the compared approaches have used the same backbone architecture, i.e. ResNet20 for CIFAR100 and ResNet18 for miniImageNet and CUB200 datasets.
As mentioned earlier, most of the FSCIL approaches, like TOPIC <cit.>, Ft-CNN <cit.>, EEIL <cit.>, iCaRL <cit.> and UCIR <cit.>,
adopted the standard base classifier training as is and proposed different techniques for the incremental stages.
Thus they have the same base task accuracy, as can be observed from the results.
The current state-of-the-art in FSCIL, CEC <cit.> showed that using the same backbone along with appropriate modifications for learning the base classifier can significantly enhance not only the base class accuracies, but also the performance on the incrementally added classes.
We combine the advantages of both these techniques, i.e. making the base classifier better (using the same backbone), and at the same time, effectively fine-tuning the stochastic-classifiers in S3C.
Thus the base accuracy of CEC and the proposed S3C is better than the other approaches.
§.§ Results on the standard FSCIL protocol
Here, we report the results on the three benchmark datasets.
Fig. <ref> compares the proposed S3C framework with the state-of-the-art approaches in terms of top1 accuracy on CIFAR100.
We observe that the modifications while learning the base classifier improves the performance for both CEC and S3C significantly.
At the end of all tasks, S3C achieves a top1 accuracy of 53.96% compared to 49.14% obtained by the state-of-the-art CEC (a relative improvement of 4.82%).
The performance of all the compared approaches is directly taken from <cit.>.
Table <ref> shows the HM of S3C at the end of each incremental task.
We observe that S3C obtains a relative improvement of 13.95% compared to CEC in terms of HM.
This shows the effectiveness of S3C in achieving a better balance between the base and new class performance.
Fig. <ref> shows the t-SNE plot for new classes after task 1, where we observe that the new classes in S3C are relatively well clustered compared to CEC.
In terms of PD, S3C is close to CEC (higher by 0.7%), but it outperforms CEC in terms of the other two metrics, namely top1 accuracy and HM.
From Fig. <ref> (right), we observe that S3C achieves 52.14% top1 accuracy on miniImageNet, a relative improvement of 4.51% over the second best, 47.63%, obtained by CEC.
In terms of HM (Table <ref>), S3C achieves a 9.96% relative improvement over CEC. The performance dropping rate (PD) of CEC is slightly lower (by 0.35%) than that of S3C.
We observe from Table <ref> and Table <ref> that S3C outperforms CEC by 6.67% and 11.72% respectively in terms of top1 accuracy and HM for CUB200 dataset.
For this dataset, the proposed S3C has the least performance dropping (PD) rate compared to all the other approaches.
§.§ Analysis and Ablation
Here, we perform additional experiments and ablation studies on the CIFAR100 dataset to evaluate the effectiveness of the proposed S3C framework.
Experiments on More Realistic and Challenging Scenarios:
First, we show the effectiveness of S3C for two realistic scenarios, (i) where there is class imbalance at each incremental task; (ii) where the number of base classes is less.
1. FSCIL-im (imbalance in new classes): The standard FSCIL setting assumes that an equal number of images per new class is available at each incremental task.
For example, 5 images for each of the 5 new classes are available at each incremental task in a 5-way 5-shot setting.
In the real world, the number of samples per class can vary, since for some classes it is easier to collect data than for others.
Obviously, one can collect more samples from the minority classes, or select a sub-set from the majority classes.
But it is more practical if the algorithm can satisfactorily work without this constraint.
To create the data imbalance, at each incremental step, we consider the number of training samples for the 5 new classes as
{5,4,3,2,1}.
Few samples along with the imbalance makes this setting very challenging.
Fig. <ref> (left) shows the top 1 accuracy and HM of S3C and CEC for this scenario without any modification of the algorithms.
We observe that S3C performs very well for both the metrics, thus showing its effectiveness in handling imbalanced new class data.
2. FSCIL-lb (fewer base classes): The standard FSCIL setting assumes that the number of base classes is quite high, with many annotated samples per class.
Here, we analyze the performance of S3C when the number of base classes is lower.
A similar setting has been explored in <cit.> for CIL.
The advantage of having a smaller number of base classes is that the base learner becomes ready for incremental learning quickly (with fewer classes requiring many annotated samples), and the remaining classes can be learnt incrementally with fewer labeled samples per class.
For the CIFAR100 experiments conducted so far, there were 60 base and 40 new classes.
For this experiment, we use only 40 base classes, and keep the incremental tasks unchanged.
From Fig. <ref> (right), we observe that S3C obtains a relative improvement of 5.29% in top1 accuracy (18.83% in HM) over CEC.
This shows that S3C can start learning incrementally at an early stage of data collection, which makes it more suited for real-world scenarios.
Ablation studies:
Table <ref> shows the effect of self-supervision and type of classifier on CIFAR100 base task accuracy.
The top 1 accuracy and HM after all the incremental stages are also reported.
We observe that both the modules help in improving the performance of the base and incremental classes.
Though the top1 accuracies of the linear and stochastic classifiers are close after the incremental stages, there is a significant improvement in HM with the stochastic classifier.
This implies that both the modules help in achieving very good performance on the new classes, in addition to retaining the performance on the base, thus achieving a great performance balance between the two.
§ CONCLUSIONS
In this paper, we proposed a novel S3C framework, which integrates self-supervision with stochastic classifiers seamlessly for the FSCIL task.
We show that this framework not only reduces overfitting on the few labeled samples of the new classes, but also mitigates catastrophic forgetting of the previously learnt classes. Extensive experiments on three benchmark datasets, namely CIFAR100, CUB200, and miniImageNet, along with additional analyses, show that the proposed S3C significantly outperforms the state-of-the-art approaches.
Acknowledgements: This work is partly supported through a research grant from SERB, Department of Science and Technology, Govt. of India and Google Research, India.
|
http://arxiv.org/abs/2307.01569v1
|
20230704084601
|
Thermodynamic phase transition and winding number for the third-order Lovelock black hole
|
[
"Yu-Shan Wang",
"Zhen-Ming Xu",
"Bin Wu"
] |
gr-qc
|
[
"gr-qc",
"hep-th"
] |
^1School of Physics, Northwest University, Xi'an 710127, China
^2Shaanxi Key Laboratory for Theoretical Physics Frontiers, Xi'an 710127, China
^3Peng Huanwu Center for Fundamental Theory, Xi'an 710127, China
Phase transitions are important for understanding the nature and evolution of black hole thermodynamic systems. In this study, the connection between the phase transition of a black hole and the winding number obtained from complex analysis is used to predict the type of black hole phase transition. For the third-order Lovelock black holes, with hyperbolic topology in any dimension and with spherical topology in 7 dimensions, the winding number is W=3 in both cases, which predicts that the system undergoes both first-order and second-order phase transitions. For spherical topology in 7<d<12 dimensions, the winding number is W=4, and the corresponding phase transition occurs in two situations: one with only a pure second-order phase transition, and the other with both first-order and second-order phase transitions. We further confirm the correctness and rationality of this prediction by placing the black hole thermodynamic system in the potential field.
Thermodynamic phase transition and winding number for the third-order Lovelock black hole
Yu-Shan Wang, Zhen-Ming Xu[E-mail: zmxu@nwu.edu.cn], and Bin Wu
August 1, 2023
=========================================================================================
§ INTRODUCTION
A black hole is an extreme celestial body predicted by general relativity <cit.>. Inspired by Bekenstein's proposal of black hole entropy <cit.>, Hawking concluded that, when quantum effects are taken into account, a black hole emits thermal radiation just like an ordinary black body. This means that a black hole has a temperature. The proposal that a black hole possesses entropy and temperature is undoubtedly one of the great discoveries of the twentieth century and has been the subject of discussion for decades.
A central element of black hole thermodynamics is the phase transition, i.e., the transition from one state to another, accompanied by abrupt changes in physical quantities such as energy, entropy, and volume under different parameter conditions. Hawking and Page first investigated the thermodynamic properties of the Anti-de Sitter (AdS) black hole and found a phase transition between the Schwarzschild-AdS black hole and pure AdS thermal radiation, the famous Hawking-Page phase transition <cit.>. Subsequently, the extended phase space of AdS black hole thermodynamics was introduced, in which the negative cosmological constant is treated as an effective thermodynamic pressure whose conjugate quantity is the thermodynamic volume. Black hole thermodynamics has become even more attractive since the small-large black hole phase transition of the charged AdS black hole was shown to overlap directly and precisely with the van der Waals system <cit.>. Currently, the study of black hole phase transitions in the extended phase space has been widely applied to various complex scenarios <cit.>.
In addition, holographic thermodynamics <cit.> and restricted phase space thermodynamics <cit.> have been proposed to give a holographic interpretation of black hole thermodynamics and to make it more like ordinary thermodynamics. Moreover, topology has emerged as a new way to characterize the type of black hole phase transition. The study <cit.> describes in detail how to use the ϕ-mapping topological flow theory to construct a topological number that is independent of the intrinsic parameters of black holes. The topological number can be used to distinguish between locally stable and locally unstable black hole phases, as well as to topologically classify the same class of black holes <cit.>. These studies deepen our understanding of black hole physics and may provide clues for revealing the nature of black holes and the quantum theory of gravity.
The analysis of the type and criticality of thermodynamic phase transitions in black holes currently dominates these investigations. The swallowtail diagrams of the Gibbs free energy give definite answers about the macroscopic thermodynamic processes of black hole phase transitions, but overlook the details of the transition. Some ideas have been proposed to use the free energy landscape <cit.> and the Landau free energy <cit.> to explore the evolutionary processes associated with black hole phase transitions.
In a recent study <cit.>, the author constructed a thermal potential to study the black hole phase transition. The thermal potential is
U=∫(T_h-T)dS,
where T_h is the Hawking temperature of the black hole, T is the canonical ensemble temperature and S is the entropy of the black hole. The U, T_h and S are the functions of the radius of the event horizon r_h and T is just a positive constant which can be assigned in any way. When a standard system is determined to be a black hole, then the ensemble temperature of the system should be the Hawking temperature of the black hole, i.e., T=T_h. Similar to the fluctuation, the thermal potential shows that all possible other thermodynamic states of the system deviate from the black hole states. From Eq. (<ref>), it follows that the extremum of the potential represents all possible black hole states,
dU/dS=0 ⇒ T=T_h.
More importantly, the concave (convex) nature of the thermal potential represents the stable (unstable) state of the black hole
δ( dU/dS|_T=T_h)={∂ T_h(r_h)/∂ S(r_h)|_T=T_h>0, stable case;
∂ T_h(r_h)/∂ S(r_h)|_T=T_h<0, unstable case.
The schematic diagram of the thermal potential described by Eq. (<ref>) is shown in Fig. <ref>. It is certain that the lowest point (the red point) is the most stable state in the entire canonical ensemble. As the parameters of the black hole change, the extreme points of the thermal potential shift accordingly, which corresponds to changes between the black hole state and other unknown states in the ensemble. Within this framework, we studied the microscopic phase transition mechanism of charged AdS black holes <cit.> and found that the phase transition between the large and small black holes exhibits severely asymmetric features, which fills a gap in the analysis of stochastic processes in the first-order phase transition rate problem of AdS black holes.
In four-dimensional spacetime, Einstein gravity gives the most appropriate description. In higher dimensions, however, when the energy approaches the Planck scale, the higher-order curvature terms of spacetime cannot be neglected, and Einstein's general relativity requires some modification. One widely accepted and valid candidate is Lovelock gravity, a natural extension of Einstein gravity to higher-dimensional spacetime, which proposes that the gravitational action in higher dimensions should contain higher-order curvature terms. Black hole solutions in this gravity and their associated thermodynamic properties have been studied extensively <cit.>. When we consider third-order Lovelock gravity, the action contains four terms: the cosmological constant term, the Einstein term, the Gauss-Bonnet term, and the third-order Lovelock term. Black hole thermodynamics in third-order Lovelock gravity has also been studied extensively <cit.>, and the specific details behind its phase transitions become the main object of study. This motivates us to explore and analyse the microscopic processes of the small-large black hole phase transition of third-order Lovelock black holes. Through the thermal potential and complex analysis, we study how a black hole transforms from one state to another under the influence of the temperature T and pressure P, in order to obtain the specific transition process. We hope to further enrich the dynamics of black hole phase transitions.
The structure of the paper is as follows. In Sec. <ref>, we give a brief introduction to third-order Lovelock black holes and then relate the winding number to black hole thermodynamics using a complex analysis approach. In Sec. <ref>, the phase transition in the hyperbolic case is studied, focusing on d = 7. In Sec. <ref>, the phase transition in the spherical case is further studied, focusing on the d = 7 and d = 9 cases. Finally, Sec. <ref> is devoted to a summary and discussion.
§ REVIEW OF THE THIRD ORDER LOVELOCK BLACK HOLE
We briefly review the thermodynamics of the third-order Lovelock black hole. In d dimensions, the action is composed of the cosmological constant term Λ, the Einstein term R, the Gauss-Bonnet density ℒ_2, and the third-order Lovelock density ℒ_3 <cit.>:
ℐ =1/16π∫ d^dx√(-g)(R-2Λ+α_2ℒ_2+α_3ℒ_3),
where
ℒ_2 =R_μνγδ R^μνγδ-4 R_μν R^μν+R^2,
ℒ_3 =R^3 +2 R^μνσκ R_σκρτ R_μν^ρτ+8 R_σρ^μν R_ντ^σκ R_μκ^ρτ+24 R^μνσκ R_σκνρ R_μ^ρ
+3 R R^μνσκ R_μνσκ+24 R^μνσκ R_σμ R_κν+16 R^μν R_νσ R_μ^σ-12 R R^μν R_μν.
Correspondingly, the static spherically symmetric metric is expressed as
ds^2=-V(r)dt^2+1V(r)dr^2+r^2dΩ_k^2,
where
V(r)=k+r^2/α[1-(1+6Λα/((d-1)(d-2))+3α m/r^d-1)^1/3],
here α is a parameter related to α_2 and α_3, m is a parameter related to the mass of the black hole, and k = -1, 0, +1 characterizes the hyperbolic, flat, and spherical topology of the horizon, respectively.
The Hawking temperature of the third-order Lovelock black hole in terms of the radius of the event horizon r_h is
T_h=1/(12π r_h(r_h^2+kα)^2)[48π P r_h^6/(d-2)+3(d-3)kr_h^4+3(d-5)α k^2r_h^2+(d-7)α^2k],
where P is the pressure, P=-Λ/(8π G). The entropy conjugate to the temperature reads
S=∑_k r_h^d-2/4[1+2(d-2)kα/((d-4)r_h^2)+(d-2)k^2α^2/((d-6)r_h^4)],
where ∑_k is the volume of the (d-2)-dimensional submanifold. Therefore, the thermal potential of the third order Lovelock black hole is expressed as
U= ∫(T_h-T)dS
= ∑_k r_h^d-7/(48 π (d-1))[48 π P r_h^6+(d-1)(d-2)(3kr_h^4+3α k^2r_h^2+α^2k)]
-∑_k r_h^d-2/4[1+2(d-2)kα/((d-4)r_h^2)+(d-2)k^2α^2/((d-6)r_h^4)]T.
Now we place the various thermal states of the black hole thermodynamic system at the extreme points of the potential function, which satisfy
f(r_h)=dU(r_h)/dS(r_h)=0.
The thermodynamic problem is thus turned into finding the zeros of the real function f(r_h). To see the full picture of the problem, we analytically continue the real function f(r_h) to a complex function f(z) and study it using the methods of complex analysis <cit.>.
In complex analysis, the Argument Principle is an effective method to count the zeros of an analytic function. If f(z) is meromorphic inside a simple closed contour C and has neither zeros nor poles on C, then
N(f, C)-P(f, C)=1/2 π i∮_C f^'(z)/f(z)d z=Δ_C arg f(z)/2 π,
where N(f, C) and P(f, C) are respectively the numbers of zeros and poles of f(z) inside C, f'(z) is the first-order derivative of f(z), and Δ_C arg f(z) denotes the change of the argument of f(z) along C.
Under the transformation ω = f(z), the above expression counts the number of times the image curve C' = f(C) winds around the origin as the complex variable z traverses the contour C. The winding number is denoted by
W:=1/2 π i∮_C'dω/ω=1/2 π i∮_C f^'(z)/f(z)d z.
If the analytic function f(z) has no poles within the contour, then the winding number about the origin is W=N(f, C).
As the complex variable z varies on the contour C, the image of the argument function θ = arg f(z) forms a Riemann surface, and the winding number about the origin corresponds to the number of foliations of this Riemann surface.
When W=1, there is no phase transition; when W=2, the system exhibits a second-order phase transition; and when W=3, a first-order phase transition occurs, accompanied by a second-order one <cit.>. Next, we use this method to predict the phase transition structure of the third-order Lovelock black hole. For the topology k = 0 of the spacetime curvature, the system behaves like an ideal gas and has no phase transition, so we focus on the two cases k = -1 and k = +1.
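As an illustration, the winding number along a given contour can be evaluated numerically from the accumulated change of arg f(z). A minimal sketch follows; the contour is a circle whose center and radius must be chosen to avoid the singularities of the function at hand:

```python
import numpy as np

def winding_number(f, center=0.0 + 0.0j, radius=1.0, n=100_000):
    """Winding number of the image curve f(C) about the origin, computed
    from the accumulated change of arg f(z) along the circular contour C."""
    t = np.linspace(0.0, 2.0 * np.pi, n + 1)     # closed loop: z[-1] = z[0]
    z = center + radius * np.exp(1j * t)
    phase = np.unwrap(np.angle(f(z)))            # continuous argument on C
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))

# sanity check: z^2 - 1 has two zeros inside |z| = 2
print(winding_number(np.poly1d([1.0, 0.0, -1.0]), radius=2.0))   # -> 2
```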
§ HYPERBOLIC TOPOLOGY
In this case, we have k=-1. For the Lovelock black hole in the hyperbolic case, the analytic function f(z) is calculated by Eqs. (<ref>) and (<ref>) as follows
f(z)=1/(12π z(z^2-α)^2)[48π P z^6/(d-2)-3(d-3)z^4+3(d-5)α z^2-(d-7)α^2-12π z T(z^2-α)^2].
For both d=7 and 7<d≤ 12, this analytic function has at most three zeros in the entire complex plane 𝐂 with the singularities removed. The only difference between the two cases is that the singularities are ±√(α) for d=7, whereas for 7<d≤ 12 they are 0 and ±√(α). Hence we obtain the winding number W=3, and the complex structure is a Riemann surface with three foliations, as shown in Fig. <ref>. Based on the results of the study <cit.>, we predict that the black hole will undergo both first-order and second-order phase transitions.
Since the d=7 and d>7 cases are of the same type, we take d=7 as a representative example to verify the above viewpoint. There is only one set of critical points in the hyperbolic case; for d = 7 it reads <cit.>
P_c=5/(8πα), T_c=1/(2π√(α)), v_c=4√(α)/5,
where v=4r_h/(d-2). For the sake of discussion, we introduce the following dimensionless thermodynamic quantities
p:=P/P_c, t:=T/T_c, x:=r_h/r_c, t_h:=T_h/T_c, s:=S/S_c, u:=U/|U_c|.
The validity of the method is now checked with an analysis of the behavior of the thermal potential. After a series of calculations, we obtain the dimensionless thermal potential for d = 7
u = (5/16)(px^6-3x^4+3x^2-1)-(3/8)t(x^5-(10/3)x^3+5x).
From Eq. (<ref>) it can be seen that the two key parameters (p and t) affect the behaviors of the thermal potential. Here we fix the parameter t to observe the variation of the thermal potential with p. In Fig. <ref> we show the u-x plot at d = 7.
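The u-x curves are straightforward to reproduce numerically. The following sketch evaluates the dimensionless potential above on a grid and lists its extrema (the candidate black hole states) for a few illustrative pressures; the specific values of p and t are placeholders:

```python
import numpy as np

def u_d7_hyperbolic(x, p, t):
    """Dimensionless thermal potential for d = 7, k = -1 (equation above)."""
    return (5.0 / 16.0) * (p * x**6 - 3 * x**4 + 3 * x**2 - 1) \
         - (3.0 / 8.0) * t * (x**5 - (10.0 / 3.0) * x**3 + 5 * x)

x = np.linspace(1e-3, 4.0, 8001)
t = 0.2
for p in (0.05, 0.5, 0.95):                              # illustrative pressures
    du = np.gradient(u_d7_hyperbolic(x, p, t), x)
    extrema = x[:-1][np.sign(du[:-1]) != np.sign(du[1:])]  # sign changes of du/dx
    print(f"p = {p}: extrema near x = {np.round(extrema, 3)}")
```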
According to Eqs. (<ref>) and (<ref>), the black hole states can be placed at the extreme values of the thermal potential. An unstable black hole state sits at a local maximum, and a stable black hole state at a local minimum. The lower the potential, the higher the probability that the black hole occupies that state and the more stable the system is.
From the diagrams (a) and (b) in Fig. <ref>, we find that at a fixed temperature t=0.2 (any value 0 < t < 1 gives the same result), a global minimum and a local minimum evolve as the pressure p increases from p=0 to p=p_m, at which point the two global minima of the thermal potential become equivalent. Specifically, for 0 < p < p_m, the thermal potential of the large black hole phase is lower than that of the small black hole phase, implying that the system tends towards the large black hole phase. When p increases to p_m, the large and small black hole phases are in equilibrium.
Similarly, it is clear from the diagrams (b) and (c) that as p increases further, the two equivalent global minima change: the small black hole phase occupies the global minimum, while the large black hole phase becomes a local minimum and eventually disappears. Specifically, the thermal potential of the small black hole phase is lower than that of the large black hole phase, which means that the system tends to the small black hole phase for p>p_m.
Thus, it is clear from the above analysis that in the k=-1 hyperbolic case, the system has a first-order phase transition from a large black hole to a small black hole. From Eq. (<ref>), it follows that there is a critical point, which is the inflection point of the curve; therefore the system also exhibits a second-order phase transition. This is exactly what we predicted.
§ SPHERICAL TOPOLOGY
In this case, we have k=+1. For the Lovelock black hole in the spherical case, the analytic function is calculated by the Eqs. (<ref>) and (<ref>),
f(z)=1/(12π z(z^2+α)^2)[48π P z^6/(d-2)+3(d-3)z^4+3(d-5)α z^2+(d-7)α^2-12π z T(z^2+α)^2].
Here we note that the zeros for d=7 and d>7 are not the same across the complex plane with all singularities removed, which leads to different winding numbers and Riemann surfaces. The spherical case is therefore not as straightforward as the hyperbolic one and must be discussed case by case.
In particular, at d=12 there is only one zero, so the winding number is 1 and the complex structure is a single-foliation Riemann surface; consequently, the system does not undergo a phase transition. This conclusion is well known and is not elaborated here.
§.§ d=7
For d=7, the analytic function obtained from Eq. (<ref>) reads
f(z)=1/(10π (z^2+α)^2)[8π P z^5+10z^3+5α z-10π T (z^2+α)^2].
Similarly, there are at most three zeros on the complex plane 𝐂\{±√(α)i}, so the winding number is W=3 and the complex structure is similar to that of the hyperbolic case in d=7, with three foliations. Hence we predict that there will be second-order and first-order phase transitions. Next, we verify this prediction with the thermal potential.
Firstly from <cit.> we obtain the critical points
P_c1=0, T_c1=0, v_c1=0,
and
P_c2=17/(200πα), T_c2=1/(π√(5α)), v_c2=(4/5)√(5α).
Then by use of the Eqs. (<ref>) and (<ref>), we can obtain the expression for the dimensionless thermal potential in d = 7,
u =(1/4)(17px^6+75x^4+15x^2+1)-t(15x^5+10x^3+3x).
Surprisingly, its behavior is extremely similar to that of the hyperbolic case. From diagrams (a) and (b) in Fig. <ref>, at a fixed temperature t=0.8 (any value 0 < t < 1 gives the same result), as the pressure p increases from 0 to p_m, a global minimum and a local minimum become two equivalent global minima of the thermal potential. Specifically, at first the thermal potential of the large black hole phase is lower than that of the small black hole phase, which means the system tends towards the large black hole phase; the large black hole minimum then rises steadily until it reaches the same level as the small black hole minimum. From diagrams (b) and (c), as the pressure increases beyond p_m, the two equivalent global minima change again, with the small black hole phase becoming the global minimum and the large black hole phase becoming a local minimum until it disappears. This means the system tends to the small black hole phase at p>p_m.
When p<p_m the system is entirely in the large black hole phase; conversely, at p>p_m it is entirely in the small black hole phase. There is also a critical point, Eq. (<ref>), in this dimension. We therefore conclude that the system has first-order and second-order phase transitions, the same result as obtained from the winding number.
§.§ d>7
Let us now study the cases of 8, 9, 10, and 11 dimensions. From Eq. (<ref>) it follows that these cases are similar, so we take d=9 as an example.
The analytic function is obtained by substituting d=9 into Eq. (<ref>), which reads as
f(z)=1/(42π z(z^2+α)^2)[24π P z^6+63z^4+42α z^2+7α^2-42π zT(z^2+α)^2].
There are at most four zeros on the complex plane 𝐂\{±√(α)i, 0}. Hence the winding number is W=4 and the complex structure is a Riemann surface with four foliations.
According to the correspondence between the winding number and the type of phase transition, W=4 can be decomposed in two ways:
(i) 4=2+2, meaning the system has only two second-order phase transitions; (ii) 4=1+3, meaning the system has one first-order and one second-order phase transition. A clearer breakdown is shown in Fig. <ref>. We therefore conjecture that two different types of phase transition occur in d=9.
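As a complementary illustration (ours, not part of the original derivation), the zeros of the d=9 analytic function are the roots of its polynomial numerator, so a standard root finder makes the branch structure explicit; the sample values of (P, T, α) below are our own choices, not taken from the paper.

```python
# Root-count illustration (ours): the zeros of the d = 9 analytic function
# are the roots of 24*pi*P z^6 + 63 z^4 + 42*a z^2 + 7*a^2 - 42*pi*T z (z^2+a)^2.
# We count the positive real roots (candidate horizon radii) for sample (P, T).
import numpy as np

def horizon_radii(P, T, alpha=1.0):
    c = [24*np.pi*P,            # z^6
         -42*np.pi*T,           # z^5
         63,                    # z^4
         -84*np.pi*alpha*T,     # z^3
         42*alpha,              # z^2
         -42*np.pi*alpha**2*T,  # z^1
         7*alpha**2]            # z^0
    z = np.roots(c)
    r = z[np.abs(z.imag) < 1e-8].real
    return np.sort(r[r > 0])

for P, T in [(0.01, 0.21), (0.03, 0.22), (0.06, 0.229)]:
    r = horizon_radii(P, T)
    print(f"P={P:.3f}, T={T:.3f}: {len(r)} positive real zeros {np.round(r, 3)}")
```

In the parameter window between the two critical points, several black hole branches coexist, in line with the multi-foliation structure discussed above.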
For d>7 the system has two pairs of critical points, which makes the nondimensionalization more involved; we therefore do not carry it out here, in slight contrast to the previous analysis. The two sets of critical points in d=9 are obtained from <cit.> and read
P_c1=63/16 πα(6-√(21))^3(√(21)-21), T_c1=3√(3)( √(21)-7)/√(α)π√(6-√(21)) (√(21) -21), v_c1=4 √(α)√(18-3 √(21))/21,
and
P_c2=63/16 πα(6+√(21))^3(√(21)+21), T_c2=3√(3)( √(21)+7)/√(α)π√(6+√(21)) (√(21) +21), v_c2=4 √(α)√(18+3 √(21))/21.
The thermal potential is expressed with the help of Eq. (<ref>) as
U=(∑_k/4)[(7/(12π))((6π P/7) r_h^8+3r_h^6+3α r_h^4+α^2r_h^2)-T(r_h^7+(14/5)α r_h^5+(7/3)α^2r_h^3)].
For simplicity, we set α=1 and ∑_k=1. We find that the phase transition between the two critical temperatures must be discussed case by case.
(a) T_c1≤ T< T_cm
We can see from Fig. <ref> that the extremal points gradually merge and then disappear as the pressure P increases; throughout this process the large black hole phase is always the global minimum, and there is no transition between two minima. This means the system has no first-order phase transition. Instead, it has a second-order phase transition due to the presence of the inflection point, Eq. (<ref>).
(b) T=T_cm
From Fig. <ref>, the thermal potential behaves similarly to the T<T_cm case at both P<P_m and P>P_m. It is worth noting, however, that at P=P_m the global minimum and the local minimum become two equal global minima, a phenomenon that does not occur for T<T_cm. Therefore T_cm is the temperature at which the phase transition begins to occur, and it is still second order.
(c) T_cm<T≤ T_c2
From diagrams (a) to (b) in Fig. <ref>, as the pressure P rises, the global minimum of the large black hole phase and the local minimum of the small black hole phase change into two equivalent global minima, and the system passes from the large black hole phase to coexisting large and small black hole phases. From diagrams (b) to (d), as the pressure continues to increase beyond P_m, the two global minima are transformed into a global minimum and a local minimum, until the local minimum disappears. The thermal potential of the small black hole phase is then lower than that of the large black hole phase, which means the system tends towards the small black hole phase. From the above analysis, the system has a first-order phase transition; meanwhile, due to the inflection point, Eq. (<ref>), it also has a second-order phase transition.
We conclude from the thermal potential diagrams that there are indeed two different phase transition processes in d = 9, perfectly verifying the previous conjecture.
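The two critical temperatures that delimit cases (a)-(c) can be cross-checked numerically. The sketch below (ours, assuming SymPy, with α=1 and initial guesses chosen by us) solves ∂P/∂r_h = ∂²P/∂r_h² = 0 for the equation of state implied by the d=9 analytic function and reproduces the critical temperatures quoted above, T_c1 ≈ 0.205 and T_c2 ≈ 0.230.

```python
# Cross-check (ours) of the two d = 9 critical temperatures (alpha = 1):
# from 24*pi*P r^6 = 42*pi*r*T (r^2+1)^2 - 63 r^4 - 42 r^2 - 7, impose
# dP/dr = d2P/dr2 = 0 and solve for (r_c, T_c).
import sympy as sp

r, T = sp.symbols('r T', positive=True)
P = (42*sp.pi*r*T*(r**2 + 1)**2 - 63*r**4 - 42*r**2 - 7)/(24*sp.pi*r**6)
eqs = [sp.diff(P, r), sp.diff(P, r, 2)]
for guess in [(0.7, 0.20), (1.9, 0.23)]:   # our initial guesses
    rc, Tc = sp.nsolve(eqs, (r, T), guess)
    print(f"r_c = {float(rc):.4f},  T_c = {float(Tc):.4f}")
```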
§ SUMMARY
In this paper, the complex structure of the third-order Lovelock black hole phase transition is predicted by the local winding number, and its accuracy is verified by the behavior of the thermal potential. By importing complex analysis into the study of the microstructure of black hole thermodynamics and relating the winding number to the type of phase transition, one can readily determine the order of the different phase transitions of a black hole.
In the hyperbolic case of arbitrary dimension and the spherical case of 7 dimensions, the winding number is W=3 and the complex structure is a Riemann surface with three foliations, which indicates that the system has first-order and second-order phase transitions. The winding number is W=4 for the spherical case in 7<d<12, and the corresponding complex structure is a four-foliation Riemann surface.
The thermal potential is then used to explore how a black hole changes from one state to another: as the pressure varies, the potential reveals different properties, and based on its behavior the phase transition processes of Lovelock black holes under the different topologies are analysed.
For k=-1, the system has first-order and second-order phase transitions.
For k=+1, the situation is slightly more complicated. The phase transition process in 7 dimensions is similar to that of the hyperbolic case, with both first-order and second-order phase transitions. In 8, 9, 10, and 11 dimensions, there is a key intermediate temperature T_cm: when T_c1<T<T_cm, there are only second-order phase transitions, while when T_cm<T<T_c2, the system has both second-order and first-order phase transitions. This is exactly what the winding number tells us:
(i) 4=2+2 states that only second-order phase transitions occur. This is the case when the temperature lies between T_c1 and T_cm for Lovelock black holes in the spherical topology with d>7.
(ii) 4=1+3 indicates that the system has both first-order and second-order phase transitions. This occurs when the temperature lies between T_cm and T_c2 for Lovelock black holes in the spherical topology with d>7.
The results of the thermal potential analysis perfectly match the winding-number predictions. By establishing the connection between the winding number and the black hole phase transition, we obtain the complex structure of the phase transition. Complex analysis is an effective method for further studying the microstructure of black hole systems. We hope this work provides new ideas for the study of black hole thermodynamic phase transitions and thus further enriches the content of black hole thermodynamics.
§ ACKNOWLEDGMENTS
This research is supported by National Natural Science Foundation of China (Grant No. 12105222, No. 12275216, and No. 12247103).
Curiel2018cbtE. Curiel, The many definitions of a black hole, Nature Astron. 3, 27-34 (2019).
Bekenstein1973urJ. D. Bekenstein, Black holes and entropy, Phys. Rev. D 7, 2333-2346 (1973).
Hawking1982dhS. W. Hawking and D. N. Page, Thermodynamics of black holes in Anti-de Sitter space, Commun. Math. Phys. 87, 577 (1983).
Chamblin1999hgA. Chamblin, R. Emparan, C. V. Johnson and R. C. Myers, Holography, thermodynamics and fluctuations of charged AdS black holes, Phys. Rev. D 60, 104026 (1999).
Dolan2011xtB. P. Dolan, Pressure and volume in the first law of black hole thermodynamics, Class. Quant. Grav. 28, 235017 (2011).
Kubiznak2012wpD. Kubiznak and R. B. Mann, P-V criticality of charged AdS black holes, JHEP 07, 033 (2012).
Niu2011tbC. Niu, Y. Tian and X. N. Wu, Critical phenomena and thermodynamic geometry of RN-AdS black holes, Phys. Rev. D 85, 024017 (2012).
Wei2015iwaS. W. Wei and Y. X. Liu, Insight into the microscopic structure of an AdS black hole from a thermodynamical phase transition, Phys. Rev. Lett. 115, 111302 (2015).
Bhattacharya2017nruK. Bhattacharya, B. R. Majhi and S. Samanta, Van der Waals criticality in AdS black holes: a phenomenological study, Phys. Rev. D 96, 084037 (2017).
Wei2019uqgS. W. Wei, Y. X. Liu and R. B. Mann, Repulsive Interactions and Universal Properties of Charged Anti-de Sitter Black Hole Microstructures, Phys. Rev. Lett. 123, 071103 (2019).
Mo2014qsaJ. X. Mo and W. B. Liu, P-V criticality of topological black holes in Lovelock-Born-Infeld gravity, Eur. Phys. J. C 74, 2836 (2014).
Miao2016ulgY. G. Miao and Z. M. Xu, Phase transition and entropy inequality of noncommutative black holes in a new extended phase space, JCAP 03, 046 (2017).
Guo2021wcfY. Guo and Y. G. Miao, Weinhold geometry and thermodynamics of Bardeen AdS black holes, Nucl. Phys. B 980,115839 (2022).
Qu2022nrtY. Qu, J. Tao and H. Yang, Thermodynamics and phase transition in central charge criticality of charged Gauss-Bonnet AdS black holes, Nucl. Phys. B 992, 116234 (2023).
Guo2023pobY. Guo, H. Xie and Y. G. Miao, Recovery of consistency in thermodynamics of regular black holes in Einstein's gravity coupled with nonlinear electrodynamics, arXiv:2306.12709.
Guo2022cdjY. Guo and Y. G. Miao, On heat properties of charged AdS black holes in Gauss-Bonnet gravity coupled with nonlinear electrodynamics, Phys. Lett. B 840, 137884 (2023).
Ahmed2023dnhM. B. Ahmed, W. Cong, D. Kubiznak, R. B. Mann and M. R. Visser, Holographic dual of extended black hole thermodynamics, Phys. Rev. Lett. 130, 181401 (2023).
Cong2021fnfW. Cong, D. Kubiznak and R. B. Mann, Thermodynamics of AdS black holes: critical behavior of the central charge, Phys. Rev. Lett. 127, 091301 (2021).
Visser2022M. R. Visser, Holographic thermodynamics requires a chemical potential for color, Phys. Rev. D 105, 106014 (2022).
Gong2023ywuT. F. Gong, J. Jiang and M. Zhang, Holographic thermodynamics of rotating black holes, JHEP 06, 105 (2023).
Kong2022gwuX. Kong, T. Wang, Z. Gao and L. Zhao, Restricted phased space thermodynamics for black holes in higher dimensions and higher curvature gravities, Entropy 24, 1131 (2022).
Kong2022tgtX. Kong, Z. Zhang and L. Zhao, Restricted phase space thermodynamics of charged AdS black holes in conformal gravity, arXiv:2211.00963.
Gao2021xttZ. Y. Gao, X. Kong and L. Zhao, Thermodynamics of Kerr-AdS black holes in the restricted phase space, Eur. Phys. J. C 82, 112 (2022).
Zeyuan2021uolZ. Y. Gao and L. Zhao, Restricted phase space thermodynamics for AdS black holes via holography, Class. Quant. Grav. 39, 075019 (2022).
Wei2021vdxS. W. Wei and Y. X. Liu, Topology of black hole thermodynamics, Phys. Rev. D 105, 104003 (2021).
Wei2022dzwS. W. Wei, Y. X. Liu and R. B. Mann, Black hole solutions as topological thermodynamic defects, Phys. Rev. Lett. 129, 191101 (2022).
Yerra2022alzP. K. Yerra and C. Bhamidipati, Topology of black hole thermodynamics in Gauss-Bonnet gravity, Phys. Rev. D 105, 104053 (2022).
Yerra2022cohP. K. Yerra, C. Bhamidipati and S. Mukherji, Topology of critical points and Hawking-Page transition, Phys. Rev. D 106, 064059 (2022).
Wu2023sueD. Wu and S. Q. Wu, Topological classes of thermodynamics of rotating AdS black holes, Phys. Rev. D 107, 084002 (2023).
Fang2022rsbC. Fang, J. Jiang and M. Zhang, Revisiting thermodynamic topologies of black holes, JHEP 01, 102 (2023).
Bai2022klwN. C. Bai, L. Li and J. Tao, Topology of black hole thermodynamics in Lovelock gravity, Phys. Rev. D 107, 064015 (2023).
Li2020nsyR. Li, K. Zhang and J. Wang, Thermodynamic phase transition of Reissner-Nordström Anti-de Sitter black holes on free energy landscape, JHEP 10, 090 (2020).
Yang2021ljnS. J. Yang, R. Zhou, S. W. Wei and Y. X. Liu, Kinetics of a phase transition for a Kerr-AdS black hole on the free-energy landscape, Phys. Rev. D 105, 084030 (2022).
Li2022oupR. Li and J. Wang, Generalized free energy landscape of a black hole phase transition, Phys. Rev. D 106, 106015 (2022).
Xu2021qywZ. M. Xu, B. Wu and W. L. Yang, van der Waals fluid and charged AdS black hole in the Landau theory, Class. Quant. Grav. 38, 205008 (2021).
Xu2021uslZ. M. Xu, Fokker-Planck equation for black holes in thermal potential, Phys. Rev. D 104, 104022 (2021).
Xu2022jypZ. M. Xu, B. Wu and W. L. Yang, Rate of the phase transition for a charged Anti-de Sitter black hole, Sci. China Phys. Mech. Astron. 66, 240411 (2023).
Myers1988zeR. C. Myers and J. Z. Simon, Black hole thermodynamics in Lovelock gravity, Phys. Rev. D 38, 2434-2444 (1988).
Cai2003ktR. G. Cai, A Note on thermodynamics of black holes in Lovelock gravity, Phys. Lett. B 582, 237-242 (2004).
Dehghani2005vhM. H. Dehghani and M. Shamirzaie, Thermodynamics of asymptotic flat charged black holes in third order Lovelock gravity, Phys. Rev. D 72, 124015 (2005).
Zou2010yrD. Zou, R. Yue and Z. Yang, Thermodynamics of third order Lovelock Anti-de Sitter black holes revisited, Commun. Theor. Phys. 55, 449-456 (2011).
Xu2014tjaH. Xu, W. Xu and L. Zhao, Extended phase space thermodynamics for third order Lovelock black holes in diverse dimensions, Eur. Phys. J. C 74, 3074 (2014).
Xu2016tjaH. Xu and Z. M. Xu, Maxwell’s equal area law for Lovelock thermodynamics, Int. J. Mod. Phys. D 26, 1750037 (2016).
Farhangkhah2021tzqN. Farhangkhah and Z. Dayyani, Extended phase space thermodynamics for third-order Lovelock black holes with nonmaximally symmetric horizons, Phys. Rev. D 104, 024068 (2021).
Xu2023vyjZ. M. Xu, Y. S. Wang, B. Wu and W. L. Yang, Riemann surface, winding number and black hole thermodynamics, arXiv:2305.05916.
|
http://arxiv.org/abs/2307.01014v2
|
20230703134506
|
Microwave Gaussian quantum sensing with a CNOT gate receiver
|
[
"Hany Khalifa",
"Kirill Petrovnin",
"Riku Jäntti",
"Gheorghe Sorin Paraoanu"
] |
quant-ph
|
[
"quant-ph"
] |
[1]Department of Information and Communications Engineering, Aalto University, Espoo, 02150, Finland
[2]QTF Centre of Excellence, Department of Applied Physics,
Aalto University, FI-00076 Aalto, Finland
[3]InstituteQ – the Finnish Quantum Institute, Aalto University, Finland
HK and RJ acknowledge funding from the Academy of Finland project Quantum Enhanced Microwave Backscatter Communications (grant number 319578). This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 862644 (FET-Open project: Quantum readout techniques and technologies QUARTET), supporting the work of KP and partly that of HK. GSP acknowledges the project Business Finland QuTI (decision 41419/31/2020). In addition, the authors are grateful to Saab for the scientific collaboration under a research grant agreement with Aalto University. This work was performed as part of the Academy of Finland Centre of Excellence program (project 352925)
Corresponding author: Hany Khalifa (e-mail: hany.khalifa@aalto.fi).
In quantum illumination (QI) the non-classical correlations between continuous-variable (CV) entangled modes of radiation are exploited to detect the presence of a target embedded in thermal noise. The extreme environment in which QI outperforms its optimal classical counterpart suggests that applications in the microwave domain would benefit the most from this new sensing paradigm. However, all the QI receivers proposed so far rely on ideal photon counters or detectors, which are not currently feasible in the microwave domain. Here we propose a new QI receiver that utilises a CV controlled-NOT (CNOT) gate to perform a joint measurement on a target return and its retained twin. Unlike other QI receivers, the entire detection process is carried out by homodyne measurements and square-law detectors. The receiver exploits two squeezed ancillary modes as part of the gate's operation. These extra resources are prepared offline, and their overall gain is controlled passively by a single beamsplitter parameter.
We compare our model to other QI receivers and identify the operating regime where it outperforms them and achieves optimal performance. Although the main focus of this study is microwave quantum sensing, our proposed device can equally be built in the optical domain, rendering it a new addition to the quantum sensing toolbox in a wider sense.
Continuous variable (CV) quantum information, continuous variable controlled not gate (CV CNOT), entanglement, quantum illumination (QI), two mode squeezed vacuum (TMSV).
Microwave Gaussian quantum sensing with a CNOT gate receiver
Hany Khalifa1,2 0000-0002-1276-5428,
Kirill Petrovnin2, Riku Jäntti1 Senior, IEEE, Gheorghe Sorin Paraoanu 2,3
===================================================================================================================
§ INTRODUCTION
Quantum sensing is a new paradigm for information detection that utilises non-classical features of the electromagnetic (EM) radiation to push detection sensitives beyond the classical limits. Entanglement, squeezing and superposition of quantum states are the main resources upon which many of the quantum sensing architectures are built <cit.>. Besides quantum sensing, quantum entanglement is an essential feature of many other quantum-technology applications, as it allows us to establish remote correlations between two jointly prepared EM radiation modes. Quantum cryptography <cit.>, quantum computing <cit.>, quantum communication <cit.> and quantum-enhanced metrology <cit.>, have all exploited quantum entanglement to outperform their classical counterparts. Nonetheless, quantum entanglement is a fragile phenomenon, susceptible to environment-induced decoherence in the form of excess noise photons.
Quantum illumination is a quantum sensing protocol that can retrieve information sent over a noisy, entanglement-breaking channel <cit.>. The protocol utilizes entangled two-mode squeezed vacuum (TMSV) states to detect the presence or absence of a target embedded in a thermal environment <cit.>. A probe (denoted as signal) is sent to illuminate a target, while its twin (denoted as idler) is retained in order to perform a correlation measurement on the target's return. The operational domain of QI is the low signal-to-noise ratio (SNR) limit, where the optimum QI receiver enjoys a 6 dB advantage in error exponent over the optimum classical one <cit.>. This suggests that the microwave domain is probably the most natural setting for QI experiments. Unfortunately, to date there is no known physical realization of the optimum QI receiver. Currently, up to 3 dB of error exponent enhancement can be attained theoretically with the available hardware. The optical parametric amplifier (OPA) and phase conjugate (PC) receivers <cit.> are the most remarkable receiver architectures that have been demonstrated experimentally to reap this sub-optimal advantage. The full 6 dB advantage can only be hypothetically attained with the complicated sum frequency generation (SFG) receiver and its extremely intricate upgrade, feed-forward SFG (FF-SFG) <cit.>. Further, when non-ideal storage of the idler mode is considered, the performance of all the mentioned receivers is greatly affected <cit.>: 6 dB of idler loss is enough to rule out any quantum advantage. Moreover, all of the aforementioned designs rely on ideal photon counters operating in the low-SNR regime in order to acknowledge a successful detection event. For quantum optical experiments, despite the insignificance of thermal background noise, efficient photon counting with low dark counts requires the use of superconductors and therefore operation at low temperatures. For microwave-frequency experiments, due to the extremely small powers at the single-quantum level, microwave photon counters are only at the proof-of-concept stage.
In this article we propose a new microwave QI receiver that operates without the need for ideal single photon counters. Our proposed model is based on the CV CNOT operation <cit.>. Under this unitary gate the signal and idler quadratures transform into a superposition that is directly related to their cross-correlation features <cit.>.
Operationally speaking, a CV CNOT gate utilises two quadrature-squeezed ancillary modes, such that one is position-squeezed, while the other is momentum-squeezed. In order to avoid the cumbersome process of nonlinear coupling upon a receiving event, it has been demonstrated in <cit.> that an offline preparation of the squeezed resources is both equivalent and more efficient than an online nonlinear coupling of a mode pair. The overall interaction gain can be controlled by a single beamsplitter parameter.
This controlled operation gives us the ability to smoothly choose the operational domain where our device can outperform other QI receivers.
The basic idea behind the operation of the proposed receiver is simple: upon receiving a small fraction of the initial signal-idler correlations, the CNOT receiver strengthens them by a scalar factor equal to the receiver's controllable interaction gain, which is made possible by the entangling properties of the universal CNOT gate. When these correlations are lost, on the other hand, the receiver outputs uncorrelated noise beams. The signal levels of the two cases are then determined by homodyning the receiver's output field quadratures <cit.>. However, since the average homodyne currents of the quadratures of a TMSV necessarily vanish, we propose feeding the output homodyne current to a square-law detector, for instance a spectrum analyzer (SA), to overcome this problem. This has been the standard method in the optical domain <cit.> and can be straightforwardly replicated in microwave quantum optics. Further, our device accounts for the non-ideal storage of the idler mode (Fig. <ref>), modelled by a beamsplitter with transmissivity T whose unused port injects vacuum noise.
Finally, it is worth mentioning that there have recently been other successful implementations of the universal CNOT gate in the domain of microwave circuit quantum electrodynamics (cQED) <cit.>.
We have opted for this specific implementation since its performance can be tracked analytically in a straightforward manner and can serve as a good model to calculate the receiver's internal noise.
Thus, as long as a CNOT gate platform is capable of performing the gain-controlled, generalized CNOT interaction, one should be able to replicate the results of this study in both the microwave and optical domains.
This article is organized as follows. Section 2 briefly describes the QI sensing protocol, and Section 3 presents the theory of operation of our CNOT receiver, where we are mostly concerned with showing the receiver's ability to extract the signal-idler cross correlations. Section 4 focuses on the performance analysis of our device: a detailed comparison between our model and the OPA, PC, and SFG receivers, whose main objective is to demonstrate the operational regime where our device outperforms the others. Section 5 concludes.
§ QI PROTOCOL
We consider applications where a transmitter is sending its information over a noisy and lossy channel. The receiver, potentially co-located with the transmitter, stores a mode that shares quantum correlations with the transmitted signal. Quantum target detection <cit.>, quantum radars <cit.> and quantum backscatter communications<cit.> are perfect examples of these applications.
At the transmitter a pump field excites a nonlinear element to generate K independent signal-idler mode pairs via spontaneous parametric down conversion (SPDC), {a^j_S, a^j_I}, 1 ≤ j ≤ K <cit.>. The total number of probe signals is K = τ W, where τ is
the duration of a transmission event and W is the phase-matching bandwidth of the nonlinear element. For our purposes the archetypal nonlinear element in the microwave domain is the Josephson parametric amplifier (JPA). Depending on the design, the operating frequency of a JPA is in the range of 4-8 GHz <cit.>. Another celebrated device that can generate microwave signal-idler entangled pairs is the Josephson ring modulator (JRM) <cit.>. In the case of a JPA source, the bandwidth of the generated twin pairs is typically 1 MHz, hence for a total number of probe pairs K=10^6, the protocol duration would be τ≈ 1s. However, the bandwidth can be substantially increased up to 100 MHz when a travelling wave parametric amplifier (TWPA) <cit.> source is utilized. This would result in a dramatically faster QI protocol.
Each signal-idler pair is in a TMSV that admits a number state representation,
|Ψ⟩_SI = ∑_{n=0}^{∞}√(N_S^n/(N_S+1)^{n+1})|n⟩_S|n⟩_I,
where N_S is the mean photon number in each of the signal and idler modes, i.e., ⟨ a^†_Sa_S⟩=⟨ a^†_Ia_I⟩ =N_S.
It is also useful to express the above TMSV as a squeezing operation applied to a vacuum state,
|Ψ⟩ _SI = 𝒮(γ)| 0,0⟩_SI,
S(γ)=e^(γ a^†_Sa^†_I-γ^*a_Sa_I),
where the complex squeezing parameter is γ = r e^{iφ}, with r the squeezing strength and φ the angle of the squeezing axis. Relating the two expressions, the photon number in either the signal or idler mode can be rewritten in terms of the squeezing parameter as ⟨ a^†_Sa_S⟩ = ⟨ a^†_Ia_I⟩ = sinh^2 r.
The entanglement between each pair is quantified by their 4 × 4 covariance matrix.
C( S, I)= [ A B; B^⊤ D ],
where the matrices A, B, B^⊤, D are defined as follows:
where, in terms of the quadratures q^1 = X = (a+a^†)/√(2) and q^2 = Y = -i(a-a^†)/√(2), the blocks are defined as A_kl = (1/2)⟨ q^k_Sq^l_S+q^l_Sq^k_S⟩-⟨ q^k_S⟩⟨ q^l_S⟩ = diag(N_S+1/2, N_S+1/2), B_kl = (1/2)⟨ q^k_Sq^l_I+q^l_Sq^k_I⟩-⟨ q^k_S⟩⟨ q^l_I⟩ = diag([N_S(N_S+1)]^{1/2}, -[N_S(N_S+1)]^{1/2}) = B^⊤_kl, and D_kl = (1/2)⟨ q^k_Iq^l_I+q^l_Iq^k_I⟩-⟨ q^k_I⟩⟨ q^l_I⟩ = diag(N_S+1/2, N_S+1/2), with k,l =1,2. The opposite signs in B reflect ⟨ X_SX_I⟩ = -⟨ Y_SY_I⟩ = [N_S(N_S+1)]^{1/2}.
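As a quick numerical cross-check (ours, assuming NumPy), the covariance blocks above follow directly from the squeezing parameter:

```python
# Build the TMSV covariance matrix from r and confirm the quoted blocks:
# A = D = diag(N_S + 1/2), B = diag(+sqrt(N_S(N_S+1)), -sqrt(N_S(N_S+1))).
import numpy as np

N_S = 0.01
r = np.arcsinh(np.sqrt(N_S))          # N_S = sinh^2(r)
a = np.cosh(2*r)/2                    # = N_S + 1/2
b = np.sinh(2*r)/2                    # = sqrt(N_S (N_S + 1))
C = np.block([[a*np.eye(2),       np.diag([b, -b])],
              [np.diag([b, -b]),  a*np.eye(2)]])
print(np.isclose(a, N_S + 0.5), np.isclose(b, np.sqrt(N_S*(N_S + 1))))
print(C)
```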
As for channel considerations, we will assume a lossy transmission medium overwhelmed by noise photons. Hypothetically, two transmission scenarios might arise under this channel model.
Hypothesis 1 (alternative hypothesis)(H=1):
The transmitted signal reaches the receiver with a very small probability. This can be modelled by a low transmissivity beamsplitter that mixes the signal with a bath mode
a_R = √(η) a_S + √(1-η) a_B ,
where a_R is the received mode, η is the beamsplitter's transmissivity, and a_B is a zero mean Langevin bath mode, ⟨ a_B⟩=⟨ a_B^†⟩=0, with mean photon number ⟨ a_B^†a_B⟩ = N_B,
and zero cross correlations ⟨ a_B_ka_B_l⟩=0, ∀ k≠ l, since a thermal state is diagonal in the number basis.
Hypothesis 0 (null hypothesis) (H=0): The transmitted signal is completely lost and replaced by a bath mode, a_B, i.e, a_R=a_B.
Simultaneously, we consider storing the idler mode in a leaky memory element, which can be represented by a pure-loss channel with transmissivity T,
a_M = √(T)a_I+√(1-T)a_V
where a_V is an environment vacuum mode. It is worth noting that recently there has been some notable progress regarding microwave quantum memories, with efficiency as high as 80% <cit.>.
§ THEORY OF OPERATION
§.§ The CNOT receiver
The preferred operating regime where QI's advantage is manifest is the low-SNR limit, N_S≪ 1 ≪ N_B. In this domain the signal-idler cross correlations of a TMSV, ⟨ a_Sa_I⟩ + ⟨ a^†_Sa^†_I⟩ = 2√(N_S(1+N_S)), where ⟨ a_Sa_I⟩ = (1/2)⟨ (X_S+iY_S)(X_I+iY_I)⟩ = (1/2)(⟨ X_S X_I⟩ - ⟨ Y_SY_I⟩) = √(N_S(1+N_S)) and likewise ⟨ a^†_Sa^†_I⟩ = (1/2)⟨ (X_S-iY_S)(X_I-iY_I)⟩ = √(N_S(1+N_S)), exceed the maximum attainable classically with equal-strength, uncorrelated coherent pairs of average photon number N_S each: since ⟨α| X_m|α⟩ = √(2)Re(α) and ⟨α| Y_m|α⟩ = √(2)Im(α) for m = S,I, the corresponding classical cross correlation is bounded by 2|α|^2 = 2N_S < 2√(N_S(1+N_S)). Here α is the coherent field complex amplitude and [X_m, Y_n] = iδ_mn is the field quadrature commutation relation, with the reduced Planck constant set to one, ħ = 1.
A receiver's main task in QI is to extract the aforementioned signal-idler cross correlations from the target return and its retained idler twin, ⟨ a_Ra_M⟩. In terms of the quadrature operators, this quantity can be expanded into four terms, (1/2)[X_RX_M+iX_RY_M+iY_RX_M-Y_RY_M]. Performing a heterodyne measurement on each mode separately is probably the most economical way to gain full access to the field quadratures <cit.>. However, splitting each mode first on a balanced beamsplitter would add an extra 3 dB loss to the overall output SNR <cit.>; together with detector inefficiencies, this would rule out any quantum advantage over the optimal classical illumination (CI) receiver. Further, the Gaussian Wigner statistics of a directly homodyned squeezed state are non-negative, and a nonlinear detection scheme, such as photon counting, is needed to reveal the non-classical signal-idler signature <cit.>. In this regard, previous installments of the QI protocol opted for single photon counters in their receiver designs.
In order to avoid the extra losses of double homodyning and the present technological infeasibility of microwave single photon counters, our proposed CV CNOT receiver mediates a controllable interaction between the target return and the stored idler that accesses their non-classical correlations by creating observable quantities corresponding to their relative momentum and total position quadratures. Further, as mentioned in the introduction, the controllable gain of the receiver can strengthen these correlations, rendering them more visible for successful detection. Finally, in order to provide the required detector nonlinearity described previously, our receiver utilises a square-law detection chain <cit.> composed of a balanced double-port homodyne detector and a spectrum analyzer, where the measurement outcome is the quadrature variances, i.e., powers <cit.>. This has been the standard detection method in quantum optical experiments <cit.>; for completeness we expose the details of this method in appendix <ref>. Having motivated the above arguments, we now focus on the mathematical representation of our proposed device: we first show how it extracts the signal-idler cross-correlation signature, and then how the device's controllable gain enhances it.
The CNOT receiver transforms a returned mode and its stored idler twin as follows,
X^(out)_R = e^{iG Y_M X_R} X_R e^{-iG Y_M X_R} = X_R,
Y^(out)_R = e^{iG Y_M X_R} Y_R e^{-iG Y_M X_R} = Y_R + [iG X_R Y_M, Y_R] = Y_R - G Y_M,
X^(out)_M = e^{iG Y_M X_R} X_M e^{-iG Y_M X_R} = X_M + [iG X_R Y_M, X_M] = X_M + G X_R,
Y^(out)_M = e^{iG Y_M X_R} Y_M e^{-iG Y_M X_R} = Y_M,
where we have used the operator expansion formula for two non-commuting operators, e^{λ A}Be^{-λ A} = B + λ[A,B] + (λ^2/2!)[A,[A,B]] + ⋯ (all higher nested commutators vanish here), together with the quadrature commutation relation [X, Y] = i (ħ = 1); G is the interaction gain <cit.>. Note that [X^(out)_R, Y^(out)_R] = [X^(out)_M, Y^(out)_M] = i while all other commutators vanish, as expected for a unitary transformation.
In Fig. (<ref>) we depict the CNOT receiver as a unitary gate. The receiver has two input quadrature tuples of two elements each, and two corresponding observable output tuples. The first element of the first output tuple, X^(out)_R, is the unaffected mode, whereas the second, Y^(out)_R, carries information on both the returned and stored momentum quadratures. In the second output tuple, the first element, X^(out)_M, carries the position information of both the return and stored modes, whereas the second, Y^(out)_M, is the unaffected mode. Ideally, this sort of interaction is probed in order to perform a non-demolition measurement on the unaffected quadratures by measuring only the translated ones. Each output is then mixed on a balanced beamsplitter with a vacuum mode, defined by its conjugate quadrature tuple (X_v, Y_v). Finally, the powers of the four outputs are measured by a spectrum analyzer (SA) module comprising a double-port homodyne detector (see details in appendix <ref>). In order to verify a successful implementation of the receiver's operation, and hence a successful capture of the sought cross correlations, the four conjugate quadratures corresponding to the output return and memory modes have to be measured simultaneously. As can be deduced from Eq. (<ref>), the power (second moment) of the unaffected quadrature adds to that of the translated one in each output tuple. In the event of a failed operation, that is, G=0, the respective powers are equal.
We now proceed to calculate the mode variances (signal powers) of the involved quadratures as measured in practice, and demonstrate the previous ideas mathematically.
§.§ Extracting the signal-idler cross correlation
The receiver's outputs as shown in Fig. (<ref>) are mode quadratures. The information contained in the signal-idler cross correlations can be accessed by measuring their respective variances, which here coincide with the signal powers (second moments), since we are dealing with zero-mean fields. As pointed out earlier, in order to measure the quadrature variances of the signal return and stored idler, a^(out)_R, a^(out)_M, simultaneously, each mode is first split on a balanced beamsplitter, with a vacuum mode entering the unused port, and then detected by a square-law detector. This results in a 3 dB loss in the measured quadrature. For illustration, consider the receiver's first output: after a balanced beamsplitter, one output is homodyned for the position quadrature, so that a̅^(out)_R = (1/√(2))(a^(out)_R+a_v), X̅^(out)_R = (1/2)(a^(out)_R+a_v+ a^(out)†_R+a^†_v), and ⟨ [X̅^(out)_R]^2⟩ = (1/2)⟨ (X^(out)_R+X_v)(X^(out)_R+X_v)⟩ = (1/2)(⟨ [X^(out)_R]^2⟩ + ⟨ X^2_v⟩). Similarly, the second beamsplitter output can be homodyned for the momentum quadrature Y^(out)_R. The 3 dB noise penalty is now visible in the signal power, as the intensity of the original field is halved. The quadratures of the receiver's second output are treated in the same manner. To keep the notation simple, we include the noise penalty directly in the following calculations, whereas the overall vacuum noise is added at the end of the derivation.
Suppose that the alternative hypothesis is true, i.e., H=1; then
⟨ [X^(out)_M]^2⟩ = ⟨ [X^2_M+2GX_MX_R+G^2X^2_R] ⟩,
where,
⟨ X^2_M⟩ = 1/4⟨ (√(T) a_I+√(1-T)a_V+√(T)a^†_I+√(1-T)a^†_V)
(√(T) a_I+√(1-T)a_V+√(T)a^†_I+√(1-T)a^†_V) ⟩
=(2TN_S+1)/4
⟨ X_MX_R⟩ = 1/4⟨ (√(T) a_I+√(1-T)a_V+√(T)a^†_I+√(1-T)a^†_V)
(√(η) a_S+√(1-η)a_B +√(η)a^†_S+√(1-η)a^†_B)⟩
= √(η T N_S(1+N_S))/2
⟨ X^2_R⟩ = 1/4⟨ (√(η) a_S+√(1-η)a_B+√(η)a^†_S+√(1-η)a^†_B)
(√(η) a_S+√(1-η)a_B+√(η)a^†_S+√(1-η)a^†_B) ⟩
=1/4( η [1 +2⟨ a^†_Sa_S⟩]+(1-η)[1 +2 ⟨ a^†_Ba_B⟩])
=( η(1+ 2N_S) +(1-η)(1+2N_B))/4
where in Eq. (<ref>) we have used ⟨ a_Sa_S⟩ = ⟨ a_Ia_I⟩ = ⟨ a_Ba_B⟩ = ⟨ a^†_Sa^†_S⟩ = ⟨ a^†_Ia^†_I⟩ = ⟨ a^†_Ba^†_B⟩ = ⟨ a^†_Sa_I⟩ = ⟨ a_Sa^†_I⟩ = 0; the vanishing of all signal-bath, idler-bath and idler-vacuum correlators; ⟨ a^†_Sa_S⟩ = ⟨ a^†_Ia_I⟩ = N_S and the commutators [a_S,a^†_S] = [a_I,a^†_I] = [a_V,a^†_V] = 1; and the TMSV cross correlations ⟨ a_Sa_I⟩ = ⟨ 0,0|(a_S cosh r + e^{iφ} sinh r a^†_I)(a_I cosh r + e^{iφ} sinh r a^†_S)|0,0⟩ = sinh(r)cosh(r) = √(N_S(1+N_S)) and, analogously, ⟨ a^†_Sa^†_I⟩ = √(N_S(1+N_S)), where sinh^2(r) = N_S and we have set φ=0.
Then a similar calculation of the momentum translated output yields
⟨ [Y^(out)_M]^2⟩ = ⟨ [Y^2_R-2GY_R Y_M+G^2Y^2_M] ⟩,
where,
⟨ Y^2_R⟩ = -1/4⟨ (√(η) a_S+√(1-η)a_B-√(η)a^†_S-√(1-η)a^†_B)
(√(η) a_S+√(1-η)a_B-√(η)a^†_S-√(1-η)a^†_B) ⟩
=1/4( η [1 +2⟨ a^†_Sa_S⟩] +(1-η)[1 +2 ⟨ a^†_Ba_B⟩])
=( η(1+ 2N_S) +(1-η)(1+2N_B))/4 ,
⟨ Y_RY_M⟩ =
-1/4⟨ (√(T) a_I+√(1-T)a_V-√(T)a^†_I-√(1-T)a^†_V)
(√(η) a_S+√(1-η)a_B -√(η)a^†_S-√(1-η)a^†_B) ⟩
=-√(η T N_S(1+N_S))/2
⟨ Y^2_M⟩ =
-1/4⟨ (√(T) a_I+√(1-T)a_V-√(T)a^†_I -√(1-T)a^†_V)
(√(T) a_I+√(1-T)a_V-√(T)a^†_I-√(1-T)a^†_V) ⟩
=(2TN_S+1)/4
As for the unaffected modes, ⟨ [X^(out)_R]^2⟩ = ⟨ X^2_R⟩ and ⟨ [Y^(out)_M]^2⟩ = ⟨ Y^2_M⟩. It is now clear from Eqs. (<ref>) and (<ref>) that the receiver's translated modes ⟨ [X^(out)_M]^2⟩, ⟨ [Y^(out)_R]^2⟩ indeed carry the total signal-idler cross-correlation signature ⟨ X_MX_R⟩, ⟨ Y_R Y_M⟩, albeit accompanied by unwanted noise. The receiver's output when H=1 is the sum of the signal powers of all the receiver's output quadratures,
ℐ_1 = ⟨ X^2_M⟩ + ⟨ Y^2_M⟩+2G[⟨ X_MX_R⟩-⟨ Y_RY_M⟩]
+ G^2 [⟨ X^2_R⟩+⟨ Y^2_M⟩] + ⟨ X^2_R⟩ +⟨ Y^2_R⟩ + ⟨ X^2_V⟩+⟨ Y^2_V⟩
where the vacuum contribution stems from the noise penalty on all measurements.
When the null hypothesis is true, H=0, the target return is replaced with a bath mode and the four receiver's outputs become
X^(out)_R = X_B
Y^(out)_R = Y_B-GY_M
X^(out)_M = X_M +GX_B
Y^(out)_M = Y_M
Then,
⟨ [X^(out)_M]^2⟩ = ⟨ X^2_M⟩ +2G ⟨ X_MX_B⟩ +G^2⟨ X^2_B⟩,
⟨ [X^(out)_R]^2⟩ =⟨ X^2_B⟩ = (1/4)⟨ (a_B+a_B^†)(a_B+a_B^†)⟩ =(1+2N_B)/4 ,
⟨ X^2_M⟩ =(2TN_S+1)/4,
⟨ [Y^(out)_R]^2⟩ = ⟨ Y^2_B⟩ -2G⟨ Y_BY_M⟩ + G^2⟨ Y^2_M⟩,
⟨ Y^2_B ⟩ =-1/4⟨ (a_B-a^†_B)(a_B-a^†_B) ⟩
=(1+2N_B)/4 ,
⟨ [Y^(out)_M]^2⟩ = ⟨ Y^2_M⟩ =(2TN_S+1)/4
where ⟨ X_MX_B⟩ = ⟨ Y_BY_M⟩ =0, since the bath mode is not correlated with the stored idler.
Correspondingly, it can be seen that the receiver's output when the null hypothesis is true becomes
ℐ_0 = ⟨ X^2_M⟩ + ⟨ Y^2_M⟩+ G^2 [⟨ X^2_B⟩+⟨ Y^2_M⟩] + ⟨ X^2_B⟩
+⟨ Y^2_B⟩ + ⟨ X^2_V⟩+⟨ Y^2_V⟩
Since in the low-brightness regime the approximation ⟨ X^2_R⟩ ≈ ⟨ Y^2_R⟩ ≈ ⟨ X^2_B⟩ ≈ ⟨ Y^2_B⟩ holds, it can be concluded that the effective signal power of the CNOT receiver is
ℐ_1-ℐ_0≈ 2G√(η T N_S(N_S+1))
In summary, we have demonstrated the process of extracting the signal-idler cross correlation in Eq. (<ref>); this will be the relevant quantity when we discuss the receiver's error exponent. We have further shown that the receiver's output signal power is enhanced by the receiver's gain, so the CNOT receiver can in principle offer better performance than the other QI protocols. In order to quantify how much gain can be applied to enhance the detection process in practice, we must consider the effect of background noise on the device operation; this is the task of the next section.
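Before turning to the noise analysis, the moment bookkeeping of this subsection can be verified compactly by propagating the covariance matrix of (X_S, Y_S, X_I, Y_I, X_B, Y_B, X_V, Y_V) through the two beamsplitters. The sketch below (ours, assuming NumPy; the parameter values are those used later in Sec. 4) reproduces the printed moments and the effective signal power.

```python
# Covariance-propagation check (ours) of the moments above, hbar = 1,
# X = (a + a^dag)/sqrt(2); the factor 1/2 is the balanced-splitting penalty.
import numpy as np

N_S, N_B, eta, Tm, G = 0.01, 20.0, 0.01, 0.7, 3.0
s = np.sqrt(N_S*(N_S + 1))

Sig = np.zeros((8, 8))
Sig[:4, :4] = [[N_S + .5, 0, s, 0], [0, N_S + .5, 0, -s],
               [s, 0, N_S + .5, 0], [0, -s, 0, N_S + .5]]   # TMSV (S, I)
Sig[4:6, 4:6] = (N_B + .5)*np.eye(2)                        # thermal bath
Sig[6:8, 6:8] = .5*np.eye(2)                                # memory-port vacuum

# linear map onto (X_R, Y_R, X_M, Y_M) under hypothesis H = 1
L = np.zeros((4, 8))
L[0, 0] = L[1, 1] = np.sqrt(eta);  L[0, 4] = L[1, 5] = np.sqrt(1 - eta)
L[2, 2] = L[3, 3] = np.sqrt(Tm);   L[2, 6] = L[3, 7] = np.sqrt(1 - Tm)
cov = 0.5*(L @ Sig @ L.T)

print(np.isclose(cov[2, 2], (2*Tm*N_S + 1)/4))                        # <X_M^2>
print(np.isclose(cov[0, 0], (eta*(1 + 2*N_S) + (1 - eta)*(1 + 2*N_B))/4))
print(np.isclose(cov[2, 0],  np.sqrt(eta*Tm*N_S*(1 + N_S))/2))        # <X_M X_R>
print(np.isclose(cov[1, 3], -np.sqrt(eta*Tm*N_S*(1 + N_S))/2))        # <Y_R Y_M>
# effective signal power 2G(<X_M X_R> - <Y_R Y_M>) = 2G sqrt(eta T N_S(N_S+1))
print(2*G*(cov[2, 0] - cov[1, 3]), 2*G*np.sqrt(eta*Tm*N_S*(1 + N_S)))
```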
§.§ Background noise of CNOT receiver
As shown in appendix <ref>, a double-port homodyne measurement is capable of extracting the input field power, which is displayed on a spectrum analyzer's screen. In order to calculate the error exponent of our receiver, we need its noise power, which corresponds to the PSD of a bath mode. Consider the case where the null hypothesis is true, i.e., the returned mode is replaced by a bath mode. Under the AWGN channel model, the bath mode enters the receiver as a white Gaussian random process, whereas the stored idler, after tracing out its signal twin, is in a thermal state with average photon number N_S. In this case the receiver output is Eq. (<ref>).
In the low brightness regime we can approximately neglect the power of the memory mode, and that corresponding to the vacuum noise penalty, thus we calculate the bath quadrature noise power according to Eqs.
(<ref>-<ref>) as
⟨ X_B^2⟩ ≈ ⟨ (a_B+a^†_B)(a_B+a^†_B)⟩/4 ≈ (1+2N_B)/4 ≈ N_B/2.
Similarly, the noise power of the momentum quadrature is calculated as
⟨ Y_B^2⟩ ≈ -⟨ (a_B-a^†_B)(a_B-a^†_B)⟩/4 ≈ (1+2N_B)/4 ≈ N_B/2,
where in the above equations we have used the bath properties ⟨ a_B⟩ = ⟨ a^†_B⟩ = 0, and we assumed that powers are measured in a narrow bandwidth.
Thus the overall noise power becomes
P_N(ℐ_0) = ⟨ [X^(out)_R]^2⟩ + ⟨ [Y^(out)_R]^2⟩+⟨ [X^(out)_M]^2⟩
+ ⟨ [Y^(out)_M]^2⟩
= G^2⟨ X^2_B⟩ + ⟨ X^2_M⟩ + ⟨ X^2_B⟩+ ⟨ Y^2_M⟩
+G^2⟨ Y^2_M⟩+⟨ Y^2_B⟩
Thus,
P_N(ℐ_0) ≈ N_B + G^2N_B/2 = N_B(1+G^2/2)
where we recall that ⟨ X_B⟩ = ⟨ Y_B⟩ =⟨ X_M⟩ = ⟨ Y_M⟩=0.
Since the background noise is identical for both transmission hypotheses, we will assume equal hypotheses noise power, P_N(ℐ_0)=P_N(ℐ_1)=P_N. This is a reasonable approximation in communication systems, when a thermal bath is the dominant noise source <cit.>. Hence the device background noise power is
P_N ≈ N_B(1+G^2/2)
The previous expression is used to calculate the receiver's error exponent as shown in the next section.
§ PERFORMANCE ANALYSIS
The objective of this section is to demonstrate the operational regime where our device can outperform other QI receivers. We showed in the last section that the receiver's gain can strengthen the signal-idler cross correlations, which in principle should translate into a better device SNR; in practice, however, the device's internal interactions add extra noise to that of the channel, so it is imperative to study the effect of the overall system noise on the performance of our device. We use the practical setup described in appendix <ref> as our model of device noise, which helps clarify exactly how the receiver's gain can be manipulated to achieve the desired enhancement. This section is organized as follows: we begin with a brief background on error probability in binary transmission problems, tailored to the QI scenario; we then derive the error probability formula of our device; finally, we plot the device's error bounds for different device settings and compare them to the other QI protocols in order to highlight our areas of improvement.
A good performance metric for a QI receiver is its ability to avoid high error rates when discriminating between the two possible transmission hypotheses. This binary decision problem is identical to on-off communication systems <cit.>: when H=1 is true, the 1-bit signal (j_1) is sent, whereas when H=0 is true, the 0-bit signal (j_0) is sent. Further, we suppose that the receiver's environment, i.e., the channel noise, is additive white Gaussian noise (AWGN); empirically this is a reasonable assumption in most practical cases, since the random motion of electrons in the receiver's front-end conductors is modelled as a stationary Gaussian random process. The receiver's total bit error rate (BER) <cit.> is then P_e = p(1)p(0|1)+p(0)p(1|0), where p(1) is the prior probability that the target is there, p(0|1) is the probability of a miss (deciding the target is absent while it is present), p(0) is the prior probability that the target is absent, and p(1|0) is the probability of a false alarm (deciding the target is present while it is absent). The conditionals are calculated with respect to a decision threshold j_d as p(0|1) = (1/√(2πσ^2_1))∫_{-∞}^{j_d} exp(-(j-j_1)^2/2σ^2_1) dj = (1/2)erfc((j_1-j_d)/√(2)σ_1) and
p(1|0) = (1/√(2πσ^2_0))∫_{j_d}^{∞} exp(-(j-j_0)^2/2σ^2_0) dj = (1/2)erfc((j_d-j_0)/√(2)σ_0), where j_0, j_1 are the means of the 0- and 1-bit signals, σ^2_0, σ^2_1 are the filtered power spectral densities (PSDs) of the zero-mean white Gaussian noise process comprising the receiver's environment under H=0 and H=1, respectively, and erfc(x) = (2/√(π))∫_x^{∞} e^{-y^2}dy is the complementary error function. Assuming equal priors, p(0)=p(1)=1/2, the BER reads P_e = (1/4)[erfc((j_1-j_d)/√(2)σ_1)+erfc((j_d-j_0)/√(2)σ_0)]. We note that a filtered zero-mean white Gaussian random process has a variance equal to its PSD, which complies with empirical observations. The BER is minimized when j_d is chosen such that (j_d-j_0)^2/2σ^2_0 = (j_1-j_d)^2/2σ^2_1 + ln(σ_1/σ_0). Under the assumption that the noise PSD is equal for both hypotheses, σ_0=σ_1=σ, we arrive at (j_d-j_0)/σ = (j_1-j_d)/σ = R_Q, i.e., j_d =(j_0+j_1)/2, where R_Q denotes the error exponent, R_Q = (j_1-j_0)/2σ. A more general form, adopted when the noise PSD differs between the two hypotheses, is R_Q = (j_1-j_0)/(σ_0+σ_1). The minimum BER as a function of the error exponent is P_e,min = (1/2)erfc(R_Q/√(2)). Exploiting a series expansion of the error function, the minimum BER can be written as P_e,min = 1/(R_Q√(2π)) exp(-R^2_Q/2) + R_Q/(2√(2π))[√(2π)/R_Q - 2 - ∑_{n=1}^{∞}(-R^2_Q/2)^n/((n+1)!(2n+1)!)], such that it can be approximated as P_e,min ≈ 1/(R_Q√(2π)) exp(-R^2_Q/2); a practical upper bound on the minimum error probability neglects the denominator of this expression, as we shortly see. After this brief motivation, the error exponent of the CNOT receiver can be defined as
R_Q_CNOT = R_Q^2/2= 1/2[ℐ_1-ℐ_0/√(P_N(ℐ_1))+√(P_N(ℐ_0))]^2,
where ℐ_1 is the average receiver's output when H=1, given by the expression in Eq. (<ref>), whereas ℐ_0, given by Eq. (<ref>), is the receiver's output when the null hypothesis is true. Their associated noise powers are P_N(ℐ_1) and P_N(ℐ_0), respectively. Similarly, for equal noise powers we define SNR_CNOT as 4 R_Q_CNOT.
Further, it has been shown that for K probe signals, the minimum bit error probability is upper bounded by the classical Bhattacharyya bound <cit.>
P^K_e, min≤1/2exp[-KR_Q],
where K is the total number of signal-idler pairs generated at the transmitter.
We are now ready to compare the error probability upper bounds of the different QI receivers. Following <cit.> and <cit.>, the error exponents of the OPA, PC, and SFG receivers are
R_Q_OPA, PC = η T N_S/(2N_B)
R_Q_SFG = η T N_S/N_B
respectively.
For the CNOT receiver, we assume that both hypotheses have equal noise power based on the analysis presented in appendix <ref>. Thus its error exponent expression according to Eq. (<ref>) becomes,
R_Q_CNOT = η G^2 T N_S/(2 P_N)
where P_N(ℐ_0)=P_N(ℐ_1)=P_N(ℐ) is defined by Eq. (<ref>).
We further assumed for all receivers that η = 0.01, T=0.7, N_S =0.01, and N_B =20.
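For reference, the sketch below (ours) evaluates the Bhattacharyya bound of Eq. (<ref>) with these parameters for the OPA/PC, SFG, and CNOT error exponents; the probe numbers K are our own sample values.

```python
# Bhattacharyya bounds P_e <= 0.5*exp(-K*R_Q) at the stated parameters.
import numpy as np

eta, T, N_S, N_B = 0.01, 0.7, 0.01, 20.0
R_opa_pc = eta*T*N_S/(2*N_B)
R_sfg    = eta*T*N_S/N_B
for K in (1e6, 1e7):
    print(f"K = {K:.0e}")
    print(f"  OPA/PC : {0.5*np.exp(-K*R_opa_pc):.3e}")
    print(f"  SFG    : {0.5*np.exp(-K*R_sfg):.3e}")
    for G in (1.0, 3.0, 6.0):
        P_N = N_B*(1 + G**2/2)                 # background noise power
        R_cnot = eta*G**2*T*N_S/(2*P_N)
        print(f"  CNOT G={G:.0f}: {0.5*np.exp(-K*R_cnot):.3e}")
```

Note that R_Q_CNOT = η T N_S G^2/(2N_B(1+G^2/2)) interpolates between one third of R_Q_SFG at G=1 and R_Q_SFG itself as G→∞, consistent with the trends discussed next.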
In Fig. (<ref>) we plot the minimum bit error probability against the total number of probe pairs. We also include the optimum CI receiver to demonstrate the quantum advantage in the low-SNR setting. Let us first consider the trivial case of zero-gain operation: when G=0, the CNOT receiver homodynes the return and the stored idler individually, and the 3 dB noise penalty due to the simultaneous measurement of the two non-commuting quadratures eradicates any quantum advantage. We now focus on non-zero gain operation and consider three different gain values, namely G=1, G=3, and G=6. Substituting G=1 in Eq. (<ref>) shows that the total number of added noise photons in this case is 64. As can be seen from Fig. (<ref>a), the unity-gain CNOT receiver outperforms only the optimum CI receiver, in both the LL operation (T=1) and the L operation (T=0.7), while being outperformed by all QI receivers in both cases; the SFG receiver performs best among all receivers in both operations. The unity-gain case is nevertheless interesting in itself, since it represents the domain where the device operates as a conventional qubit CNOT gate, so any other realization of the CNOT operation on a different platform would replicate the same performance.
In Fig. (<ref>b) the CNOT receiver operates with a gain above unity, G=3, and the total number of added noise photons is ≈ 224. In this domain of operation the CNOT outperforms both the OPA and PC receivers in the LL and L cases, respectively; however, it is still outperformed by the SFG receiver.
In Fig. (<ref>c) the CNOT receiver operates with a gain of G=6, and the total number of added noise photons is ≈ 764. In this case the CNOT receiver becomes comparable to the SFG, although still outperformed by it in the LL and L cases, respectively. We further observe that the CNOT outperforms both the LL OPA and the LL PC even when operating in its lossy configuration. We thus conclude from Fig. (<ref>) that increasing the CNOT gain brings its performance closer to that of the SFG. Indeed, analyzing Eq. (<ref>) in the limit of large gain, G≫1, and assuming negligible internal noise (strong squeezing and negligible homodyne detection inefficiencies), the variance of the receiver's output becomes Var(ℐ) ≈ N_BG^2/2, and consequently the error exponent becomes R_Q_CNOT = η T N_S/N_B, coinciding with that of the SFG.
§ CONCLUSION
In this paper we have considered a new QI receiver design for microwave applications. Due to the technological difficulty of realizing single photon counters, the proposed device relies entirely on homodyne measurements and square-law detectors. The receiver is built upon an offline-prepared, gain-controlled CV CNOT gate in order to extract the signal-idler cross correlations.
We have investigated different operational gain values of our CNOT receiver. In the unity-gain scenario, we have shown that the CNOT offers no performance advantage over any of the QI receivers, only managing to edge past the optimum CI receiver; we expect similar performance from any other realization of a unity-gain CNOT gate. When operating above unity gain, on the other hand, our device approaches the best QI receiver gradually as the gain increases. Ideally, when squeezing and vacuum noises are suppressed, a high-gain operating point matches the SFG receiver. We further noticed that, even with squeezing noise, an above-unity-gain CNOT can still offer decent performance, comparable to the SFG, especially in the radar domain, where the maximum number of utilised probe pairs is ≈ 10^5-10^6 <cit.>. This is visible in the error probability curves in Figs. (<ref>b) and (<ref>c).
Two final remarks on the engineering challenges of implementing the protocol in the microwave domain. Tailoring a desired high-gain operational point requires a small and controllable beamsplitter coefficient g, by virtue of the relation G=(1-g)/√(g); recently, significant progress has been made towards engineering devices capable of achieving this level of controlled transformation <cit.>. Further, a high-gain operating point is usually accompanied by excess noise photons, which may result in an elongated dead time of our receiver; however, recent techniques in cQED can mitigate excess noise by utilising circuit refrigeration procedures <cit.>. These are all clear signs that the proposed model can be practically implemented with existing quantum microwave technologies.
§ ANALYSIS OF CNOT RECEIVER'S HOMODYNE MEASUREMENT
The CNOT receiver as described in the main text operates on the fields' non-commuting quadrature operators, and the noise penalty for measuring two non-commuting observables is 3 dB. This can be seen by splitting each of the returned mode and the stored idler, a_R, a_M, individually on a balanced beamsplitter whose unused port admits a vacuum mode: a̅_R= (1/√(2))(a_R-ia_v), a̅_v =(1/√(2))(a_v +ia_R), such that X̅_R = (1/√(2))(a̅_R+a̅^†_R) = (1/2)(a_R-ia_v+a^†_R+ia^†_v) and Y̅_v = (1/(i√(2)))(a̅_v-a̅^†_v)=(1/(2i))(a_v+ia_R-a^†_v+ia^†_R); a direct calculation then gives [X̅_R, Y̅_v] = 0, and hence these two observables can be measured simultaneously <cit.>. However, the noise penalty from first splitting the mode on a balanced beamsplitter is present as an attenuation factor of 1/2: only half of the original intensity is contained in X̅_R, Y̅_v. One might worry that the cross-correlation output of our CNOT receiver would suffer a similar fate. This would have been the case if our receiver measured the return and stored modes individually without mixing them first; however, the interaction between the two modes described by the gate transformations in Eq. (<ref>) and the analysis that followed show that the cross-correlation signature is preserved.
We now focus on outlining the details of our receiver's homodyne chain.
Following <cit.>, Fig. (<ref>) presents a schematic of the detection circuit used by our receiver to output the measured values of the observable quantities in Eqs. (<ref>) and (<ref>) (see also Fig. (<ref>)). As can be seen, the input field is mixed on a balanced beamsplitter with a local oscillator field, followed by two detectors, D_1 and D_2, a subtraction circuit that computes the difference between the generated photo-currents, and a spectrum analyzer display that shows the measured field's variance (power). Without loss of generality, consider an arbitrary returned mode a_R, not necessarily in a TMSV with an idler, and assume a noiseless transmission of this return. At the receiver, the local oscillator is tuned to extract the quadrature Y_R. The output of the detection chain is defined as follows,
a^(out)_R = 1/√(2)(a^(in)_R-i d^(in)_)
d^(out)_ = 1/√(2)(d^(in)_-i a^(in)_R)
where d^(in) is the local oscillator mode.
Then, the output of the subtraction circuit is,
I = a^(out)_R^† a^(out)_R- d^(out)_^† d^(out)_,
N^(out)_R = 1/2(N^(in)_R+N^(in)_d+id^(in)^†a^(in)_R-id^(in)a^(in)_R^†),
N^(out)_d = 1/2(N^(in)_R+N^(in)_d-id^(in)^†a^(in)_R+id^(in)a^(in)_R^†)
I = i (d^(in)^†a^(in)_R-d^(in)a^(in)_R^†)
where N^(in)_R = a^(in)_R^† a^(in)_R, and N^(in)_d = d^(in)^†d^(in).
By assuming that the local oscillator mode is a complex number, d^(in)→D̃=|α_L| e^i ϕ_L, we can extract the field's Y quadrature by setting the LO phase to π and normalizing the output current,
Y_R =I/|α _L|√(2) = -i (a_R-a^†_R)/√(2)
where |α_L| is the LO field strength, and ϕ_L is its phase.
One of the powerful features of double port homodyning is that the subtraction circuit eliminates the noise associated with the LO field. This results in the homodyned output noise power being dependent only on the input's variance, as we shall see now. In order to estimate the overall noise accompanying the process of double port homodyning <cit.>, we split the returned mode into a signal carrying part plus fluctuations, a_R = ⟨ a_R⟩+Δ a_R, such that ⟨ a_R⟩ = A_R, ⟨Δ a_R⟩ = 0, where A_R = A^X_R+iA^Y_R, Δ a_R = Δ a^X_R+iΔ a^Y_R, A^X_R, A^Y_R are the X and Y quadrature amplitude values respectively, and Δ a^X_R, Δ a^Y_R are their associated fluctuations. Thus
⟨I⟩ = i|α_L| (⟨ a_R⟩^*+ ⟨Δ a^†_R⟩- ⟨ a_R⟩-⟨Δ a_R⟩)
= i |α_L|(A^*_R-A_R) = 2 |α_L| Im [A_R]
= 2|α_L| A^Y_R
⟨ΔI^2⟩ = ⟨I^2⟩-⟨I⟩^2,
⟨I^2⟩ =⟨ [ i |α_L| (A^*_R +Δ a^†_R -A_R-Δ a_R) ]^2⟩
= ⟨ [ 2 |α_L|(Im [A_R]+ Im[Δ a_R]) ]^2⟩
= 4 |α_L|^2⟨ ( A^Y_R+Δ a^Y_R)^2⟩
= 4 |α_L|^2A^Y_R^2+ 4 |α_L|^2⟨Δa^Y_R^2⟩ ,
⟨ΔI^2⟩ = 4 |α_L|^2⟨Δa^Y_R^2⟩
⟨Δ Y_R^2⟩ = ⟨ΔI^2⟩/ 4 |α_L|^2 = ⟨Δa^Y_R^2⟩
The above expressions show that balanced double port homodyning can extract both the mean and the second moment (power) of a returned mode. Consider now the double port homodyning of a target return that is part of a TMSV generated at the transmitter. Since our protocol operates in the microwave domain, the detectors that produce N^(out)_R and N^(out)_d respectively are square law detectors <cit.>, such as bolometers <cit.>, for instance. Unlike single photon counters, the detector's medium in the case of a square law detector responds to the incident signal power; in single photon counters it responds to the incident photon intensity or flux. Thus the former responds to a scalar quantity, while the latter responds to a vector one.
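The scaling relations ⟨I⟩ = 2|α_L|A^Y_R and ⟨ΔI^2⟩ = 4|α_L|^2⟨Δa^Y_R^2⟩ can be illustrated with a semiclassical Monte Carlo toy model of the difference current (a sketch of our own; the LO amplitude, mean field and per-quadrature fluctuation variance below are arbitrary assumed values):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_L = 500.0          # LO amplitude |alpha_L| (assumed value)
A_R     = 0.2 + 0.3j     # mean field A_R = A^X_R + i A^Y_R (assumed)
var_q   = 0.25           # per-quadrature fluctuation variance (assumed)

n = 200_000
a = A_R + rng.normal(0, np.sqrt(var_q), n) + 1j*rng.normal(0, np.sqrt(var_q), n)
d = alpha_L * np.exp(1j * np.pi)                # LO tuned to phase pi

# difference current I = i(d* a - d a*), evaluated sample by sample
I = (1j * (np.conj(d) * a - d * np.conj(a))).real
print(I.mean(), 2 * alpha_L * A_R.imag)         # <I>     = 2|a_L| A^Y_R
print(I.var(),  4 * alpha_L**2 * var_q)         # <dI^2>  = 4|a_L|^2 <(da^Y)^2>
```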
As pointed out in the main text, the expected values of the quadratures of a squeezed vacuum field vanish; that is to say, the average of the current generated after the subtraction circuit is zero, ⟨I⟩ =0. However, the variance of a zero mean squeezed vacuum field is non-zero. Thus we seek a device that can display these variances. This can be achieved by a spectrum analyzer, since in the case of a TMSV the field variance of the input coincides with the field's second moment, i.e., its power, ⟨ΔI^2⟩ = ⟨I^2⟩, as shown in Eq. (<ref>). Thus the spectral output of the spectrum analyzer is proportional to the input field power. In summary, the homodyne measurement chain deployed by our CNOT receiver is composed of two steps: first, the balanced double port homodyning captures the variance of the input signal while suppressing the LO noise, hence the detection noise is forced to be shot-noise limited; then, the spectrum analyzer displays the measured power. It is worth mentioning that modern spectrum analyzers have a built-in double port homodyne circuit and display the input power at the end of the measurement.
We consider a similar process to extract the rest of the gate's outputs. For the sake of completeness we show this for the other unaffected quadrature, namely the memory mode's position quadrature; the remaining outputs are just linear superpositions of the return and memory modes, and can be deduced similarly in a straightforward manner. Consider now performing a double port homodyne measurement on the memory mode to extract its position quadrature. Similarly as before, the mode transforms at the detection chain as
a^(out)_M = 1/√(2)(a^(in)_M - i d^(in))
d^(out) = 1/√(2)(d^(in) - i a^(in)_M)
Then, the output of the subtraction circuit is,
I = a^(out)_M^† a^(out)_M- d^(out)_^† d^(out)_,
N^(out)_M = 1/2(N^(in)_M+N^(in)_d+id^(in)^†a^(in)_M-id^(in)a^(in)_ M^†),
N^(out)_d = 1/2(N^(in)_M+N^(in)_d-id^(in)^†a^(in)_M+id^(in)a^(in)_M^†)
I = i (d^(in)^†a^(in)_M-d^(in)a^(in)_M^†)
Thus we can extract the field's X quadrature by setting the LO phase to π /2 and normalizing the output current,
X_M =I/|α _L|√(2) = (a_M+a^†_M)/√(2)
The field's power can be extracted as described before.
§ PRACTICAL MODEL OF THE CNOT RECEIVER
In this section we present an implementation of the CNOT receiver described in the main text. This model will also serve as a practical representation of the receiver's internal noise, which will eventually play a role when calculating the receiver's overall noise variance. Following the experimental implementation presented in <cit.>, and the theoretical study in <cit.>, the first beamsplitter (BS1) in Fig. (<ref>) is described by
[ √(g/1+g) √(1/1+g); -√(1/1+g) √(g/1+g) ]
The signal-idler quadratures transform as
X^1_M = √(g/1+g) X_M + √(1/1+g)X_R
X^1_R = √(g/1+g) X_R - √(1/1+g)X_M
Y^1_M = √(g/1+g) Y_M + √(1/1+g) Y_R
Y^1_R = √(g/1+g) Y_R - √(1/1+g) Y_M
Then each output of the first beamsplitter is mixed, on another beamsplitter of transmissivity 1-g, with the output of one of two single mode squeezers: squeezer A is momentum squeezed, i.e., X^(HD)_A e^r_A, Y^(HD)_A e^-r_A, while squeezer B is position squeezed, i.e., X^(HD)_B e^-r_B, Y^(HD)_B e^r_B. These beamsplitters are denoted BS4 and BS3 respectively. For ease of readability, we omit the exponential factors from the squeezed modes in the upcoming derivation, then add them back in the last step. The beamsplitter transformation reads
[ √(1-g) √(g); √(g) -√(1-g) ] .
Thus the modes transform as,
X^(2)_M = g/√(1+g) X_M + √(g/1+g)X_R + √(1-g) X^(HD)_A
X̃^(HD)_A = √(g) X^(HD)_A - √(g(1-g)/1+g) X_M - √(1-g/1+g)X_R
X^(2)_R = g/√(1+g) X_R - √(g/1+g)X_ M + √(1-g) X^(HD)_B
X̃^(HD)_B = √(g)X^(HD)_B -√((1-g)g/1+g)X_R+ √(1-g/1+g)X_M
Similarly,
Y^(2)_M = g/√(1+g) Y_M + √(g/1+g)Y_R + √(1-g) Y^(HD)_A
Ỹ^(HD)_A = √(g) Y^(HD)_A - √(g(1-g)/1+g) Y_M - √(1-g/1+g)Y_R
Y^(2)_R = g/√(1+g) Y_R - √(g/1+g)Y_M + √(1-g) Y^(HD)_B
Ỹ^(HD)_B = √(g)Y^(HD)_B -√((1-g)g/1+g)Y_R+ √(1-g/1+g)Y_M
Finally, the modes labeled by the superscript '(HD)' are homodyned with a local oscillator field (LO), whereas the other modes are directed towards a final beamsplitter (BS2) of transmissivity 1/(1+g)
[ √(1/1+g) √(g/1+g); -√(g/1+g) √(1/1+g) ]
Let us consider first the position quadratures and see how they evolve,
X^(out)_R = √(1/1+g)X^(2)_R + √(g/1+g) X^(2)_M
= ( 2g/1+g)X_R - ( √(g)(1-g)/1+g)X_M
+ √(1-g/1+g)X^(HD)_B + √(g(1-g)/1+g)X^(HD)_A
X^(out)_M = √(1/1+g) X^(2)_M- √(g/1+g)X^(2)_R
=( 2g/1+g)X_M + ( √(g)(1-g)/1+g)X_R
+ √(1-g/1+g)X^(HD)_A - √(g(1-g)/1+g)X^(HD)_B,
Suppose now that the mode X̃^(HD)_A in Eq. (<ref>) is homodyned with efficiency γ, yielding the record √(γ)X̃^(HD)_A - √(1-γ)X_V, where X_V is a vacuum position quadrature. After being re-scaled appropriately, this record is utilised to perform the following post-correction operation, in order to eliminate the anti-squeezed position quadrature X^(HD)_A,
X^(out)_R → X^(out)_R - √(1-g/γ (1+g))X̃^(HD)_A
→ X^(out)_R- √(g(1-g)/1+g) X^(HD)_A+ 1-g/1+gX_R
+√(g)(1-g)/(1+g) X_M + √((1-γ)(1-g)/γ (1+g))X_V,
X^(out)_R = X_R + √(1-g/1+g)X^(HD)_B+ √((1-γ)(1-g)/γ (1+g))X_V,
We follow a similar approach to derive the expression for X^(out)_M, where a different appropriate re-scaling of X̃^(HD)_A is assumed, as follows:
X^(out)_M → X^(out)_M - √((1-g)/γ g(1+g))X̃^(HD)_A
→ X^(out)_M -√((1-g)/(1+g))X^(HD)_A+1-g/1+g X_M
+(1-g)/√(g)(1+g) X_ R+√((1-γ)(1-g)/γ (1+g))X_V
= X_M+ (1-g/1+g(√(g)+1/√(g)) )X_R-√(g(1-g)/1+g)X^(HD)_B
+√((1-γ)(1-g)/γ g(1+g))X_V
=X_M+ (1-g/√(g))X_R-√(g(1-g)/1+g)X^(HD)_B
+√((1-γ)(1-g)/γ g(1+g))X_V
where G=(1-g)/√(g).
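As an independent consistency check of the position-quadrature algebra above, the following SymPy sketch (our own verification, written for the ideal case γ=1 and with the squeezing exponentials omitted, as in the derivation) composes BS1, BS3/BS4, BS2 and the feedforward corrections symbolically:

```python
import sympy as sp

X_M, X_R, X_A, X_B = sp.symbols('X_M X_R X_A X_B')

for g in (sp.Rational(3, 10), sp.Rational(7, 10)):    # sample 0 < g < 1 values
    # BS1
    X1_M = sp.sqrt(g/(1+g))*X_M + sp.sqrt(1/(1+g))*X_R
    X1_R = sp.sqrt(g/(1+g))*X_R - sp.sqrt(1/(1+g))*X_M
    # BS4 / BS3: mix with the squeezer outputs (transmissivity 1-g)
    X2_M = sp.sqrt(g)*X1_M + sp.sqrt(1-g)*X_A
    XA_t = sp.sqrt(g)*X_A - sp.sqrt(1-g)*X1_M          # branch sent to homodyne
    X2_R = sp.sqrt(g)*X1_R + sp.sqrt(1-g)*X_B
    # BS2
    Xout_R = sp.sqrt(1/(1+g))*X2_R + sp.sqrt(g/(1+g))*X2_M
    Xout_M = sp.sqrt(1/(1+g))*X2_M - sp.sqrt(g/(1+g))*X2_R
    # ideal feedforward corrections (unit homodyne efficiency)
    corr_R = Xout_R - sp.sqrt((1-g)/(1+g)) * XA_t
    corr_M = Xout_M - sp.sqrt((1-g)/(g*(1+g))) * XA_t
    G = (1-g)/sp.sqrt(g)
    print(sp.expand(corr_R - (X_R + sp.sqrt((1-g)/(1+g))*X_B)),
          sp.expand(corr_M - (X_M + G*X_R - sp.sqrt(g*(1-g)/(1+g))*X_B)))
    # both print 0: the corrected outputs match the equations above
```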
Focusing now on the momentum quadratures we follow a similar derivation to that in Eqs. (<ref>-<ref>),
Y^(out)_R = √(1/1+g)Y^(2)_R + √(g/1+g) Y^(2)_M
= ( 2g/1+g)Y_R - ( √(g)(1-g)/1+g)Y_M + √(1-g/1+g)Y^(HD)_B
+ √(g(1-g)/1+g)Y^(HD)_A
Y^(out)_M = √(1/1+g) Y^(2)_M- √(g/1+g)Y^(2)_R
=( 2g/1+g)Y_M + ( √(g)(1-g)/1+g)Y_R + √(1-g/1+g)Y^(HD)_A
- √(g(1-g)/1+g)Y^(HD)_B,
Then, similarly, we assume that Ỹ^(HD)_B in Eq. (<ref>) is homodyned with efficiency γ, and used, after proper re-scaling, to perform the following post-correction operation in order to eliminate the anti-squeezed momentum quadrature Y^(HD)_B
Y^(out)_R → Y^(out)_R - √(1-g/γ g(1+g))Y^(HD)_B
→ Y^(out)_R- √(1-g/1+g) Y^(HD)_B+ 1-g/1+gY_R
-(1-g)/√(g)(1+g) Y_M + √((1-γ)(1-g)/γ g(1+g))Y_V
= Y_R - (1-g/1+g(√(g)+1/√(g)) )Y_M+ √(g(1-g)/1+g)Y^(HD)_A
+√((1-γ)(1-g)/γ g(1+g))Y_V
= Y_R - G Y_M+ √(g(1-g)/1+g)Y^(HD)_A
+√((1-γ)(1-g)/γ g(1+g))Y_V
Similarly,
Y^(out)_M → Y^(out)_M+√((1-g)/γ (1+g))Ỹ^(HD)_B
→ Y^(out)_M+1-g/1+gY_M-√(g)(1-g)/(1+g) Y_R
+ √((1-γ)(1-g)/γ (1+g))Y_V,
Y^(out)_M =Y_M - √(g(1-g)/1+g)Y^(HD)_A + √((1-γ)(1-g)/γ (1+g))Y_V
Therefore the receiver's four outputs can be written as
X^(out)_R = X_R + √(1-g/1+g)X^(HD)_Be^-r_B+ √((1-γ)(1-g)/γ (1+g))X_V,
X^(out)_M =X_M+ GX_R-√(g(1-g)/1+g)X^(HD)_Be^-r_B
+√((1-γ)(1-g)/γ g(1+g))X_V
Y^(out)_R = Y_R - G Y_M+ √(g(1-g)/1+g)Y^(HD)_Ae^-r_A
+√((1-γ)(1-g)/γ g(1+g))Y_V
Y^(out)_M =Y_M - √(g(1-g)/1+g)Y^(HD)_Ae^-r_A + √((1-γ)(1-g)/γ (1+g))Y_V
It can be seen from the above equations that the ideal transformation in Eq. (<ref>) is retrieved in the limit of large squeezing parameters r_A, r_B and unit homodyne detection efficiency γ.
We now consider the effect of finite squeezing and inefficient homodyne detection on the overall number of added noise photons. From the previous equation the total noise power can be calculated as
⟨ [X^(out)_R]^2⟩ = ⟨ X^2_R⟩/2 + 1-g/2(1+g)e^-2r_B⟨ [X^(HD)_B]^2⟩ + ⟨ X^2_V⟩/2
+ (1-γ)(1-g)/2γ (1+g)⟨ X^2_V⟩
⟨ [X^(out)_M]^2⟩ = ⟨ X^2_M⟩/2 + G^2⟨ X^2_R⟩/2 + g(1-g)/2(1+g)e^-2r_B⟨ [X^(HD)_B]^2⟩
+ ⟨ X^2_V⟩/2 + (1-γ)(1-g)/2γ g(1+g)⟨ X^2_V⟩
⟨ [Y^(out)_R]^2⟩ = ⟨ Y^2_R⟩/2 + G^2⟨ Y^2_M⟩/2 + g(1-g)/2(1+g)e^-2r_A⟨ [Y^(HD)_A]^2⟩
+ ⟨ Y^2_V⟩/2 + (1-γ)(1-g)/2γ g(1+g)⟨ Y^2_V⟩
⟨ [Y^(out)_M]^2⟩ = ⟨ Y^2_M⟩/2 + g(1-g)/2(1+g)e^-2r_A⟨ [Y^(HD)_A]^2⟩
+ ⟨ Y^2_V⟩/2 + (1-γ)(1-g)/2γ (1+g)⟨ Y^2_V⟩
The homodyne inefficiency and the finite squeezing of the utilized squeezer circuits enter the picture as extra added noise, and we have added the 3dB loss penalty due to measuring non-commuting quadratures. By recalling that the value of the beamsplitter parameter satisfies 0<g<1, and considering the low brightness regime, it can be seen that the bath noise power dominates, and the overall noise power is the expression derived earlier in Eq. (<ref>).
For practical considerations, the following experimental parameters can be assumed for a physical implementation of the CNOT receiver. A realistic squeezing level that can be achieved in a laboratory is approximately -3 dB, that is, e^-2r≈ 0.5, such that r = ln2 / 2 <cit.>. It is also possible to achieve up to -6 dB experimentally <cit.>. As for practical gain values when a JPA is utilised as a squeezing resource, the optimal gains are approximately 15 ± 3 dB. In this regime the JPA remains quantum limited, i.e., it only adds half a quantum of noise. Finally, in the optical domain the homodyne detector's efficiency is approximately γ≈ 0.97 <cit.>. Recently, graphene-based microwave bolometers <cit.> have enjoyed similar successes, and thus in either case it is reasonable to assume near-ideal operation. Thus, by adding the squeezing and vacuum noise contributions in Eq. (<ref>), we estimate that the device's internal noise adds approximately ≈ 2 noise photons.
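For protocol design it is also useful to invert the gain relation for the required beamsplitter coefficient; the short script below is an illustrative sketch (the amplitude-dB convention for G is our assumption):

```python
import numpy as np

def bs_coefficient(G):
    """Solve G = (1 - g)/sqrt(g) for g in (0, 1).
    Substituting x = sqrt(g) gives x**2 + G*x - 1 = 0."""
    x = (-G + np.sqrt(G**2 + 4.0)) / 2.0      # positive root
    return x**2

for G_dB in (9.0, 12.0, 15.0, 18.0):
    G = 10.0**(G_dB / 20.0)                   # amplitude-gain convention (assumed)
    g = bs_coefficient(G)
    print(f"G = {G_dB:4.1f} dB -> g = {g:.4f}, check (1-g)/sqrt(g) = {(1-g)/np.sqrt(g):.3f}")
```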
§ ACKNOWLEDGMENT
The authors are grateful to Maximilian Reichert, Roberto Di Candia, Robert Jonsson, and Stefano Pirandola for discussions at various stages of this work.
RevModPhys.89.035002
C. L. Degen, F. Reinhard, and P. Cappellaro.
Quantum sensing.
Rev. Mod. Phys., 89:035002, Jul 2017.
pirandola2018advances
Stefano Pirandola, B Roy Bardhan, Tobias Gehring, Christian Weedbrook, and Seth
Lloyd.
Advances in photonic quantum sensing.
Nature Photonics, 12(12):724–733, 2018.
PhysRevLett.67.661
Artur K. Ekert.
Quantum cryptography based on bell's theorem.
Phys. Rev. Lett., 67:661–663, Aug 1991.
nielsen2002quantum
Michael A Nielsen and Isaac Chuang.
Quantum computation and quantum information, 2002.
gisin2007quantum
Nicolas Gisin and Rob Thew.
Quantum communication.
Nature photonics, 1(3):165–171, 2007.
giovannetti2011advances
Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone.
Advances in quantum metrology.
Nature photonics, 5(4):222–229, 2011.
lloyd2008enhanced
Seth Lloyd.
Enhanced sensitivity of photodetection via quantum illumination.
Science, 321(5895):1463–1465, 2008.
tan2008quantum
Si-Hui Tan, Baris I Erkmen, Vittorio Giovannetti, Saikat Guha, Seth Lloyd,
Lorenzo Maccone, Stefano Pirandola, and Jeffrey H Shapiro.
Quantum illumination with gaussian states.
Physical review letters, 101(25):253601, 2008.
guha2009gaussian
Saikat Guha and Baris I Erkmen.
Gaussian-state quantum-illumination receivers for target detection.
Physical Review A, 80(5):052310, 2009.
zhuang2017optimum
Quntao Zhuang, Zheshen Zhang, and Jeffrey H Shapiro.
Optimum mixed-state discrimination for noisy entanglement-enhanced
sensing.
Physical review letters, 118(4):040801, 2017.
barzanjeh2015microwave
Shabir Barzanjeh, Saikat Guha, Christian Weedbrook, David Vitali, Jeffrey H
Shapiro, and Stefano Pirandola.
Microwave quantum illumination.
Physical review letters, 114(8):080503, 2015.
furusawa2011quantum
Akira Furusawa and Peter Van Loock.
Quantum teleportation and entanglement: a hybrid approach to
optical quantum information processing.
John Wiley & Sons, 2011.
filip2005measurement
Radim Filip, Petr Marek, and Ulrik L Andersen.
Measurement-induced continuous-variable quantum interactions.
Physical Review A, 71(4):042308, 2005.
yoshikawa2008demonstration
Jun-ichi Yoshikawa, Yoshichika Miwa, Alexander Huck, Ulrik L Andersen, Peter
Van Loock, and Akira Furusawa.
Demonstration of a quantum nondemolition sum gate.
Physical Review Letters, 101(25):250501, 2008.
PhysRevA.71.055801
Samuel L. Braunstein.
Squeezing as an irreducible resource.
Phys. Rev. A, 71:055801, May 2005.
gu2009quantum
Mile Gu, Christian Weedbrook, Nicolas C Menicucci, Timothy C Ralph, and Peter
van Loock.
Quantum computing with continuous-variable clusters.
Physical Review A, 79(6):062318, 2009.
bruss2019quantum
Dagmar Bruss and Gerd Leuchs.
Quantum Information, 2 Volume Set: From Foundations to Quantum
Technology Applications.
John Wiley & Sons, 2019.
haus2000electromagnetic
Hermann A Haus.
Electromagnetic noise and quantum optical measurements.
Springer Science & Business Media, 2000.
braunstein2005quantum
Samuel L Braunstein and Peter Van Loock.
Quantum information with continuous variables.
Reviews of modern physics, 77(2):513, 2005.
wu1986generation
Ling-An Wu, HJ Kimble, JL Hall, and Huifa Wu.
Generation of squeezed states by parametric down conversion.
Physical review letters, 57(20):2520, 1986.
furusawa2015quantum
Akira Furusawa and Akira Furusawa.
Quantum states of light.
Springer, 2015.
ukai2014multi
Ryuji Ukai.
Multi-step multi-input one-way quantum information processing
with spatial and temporal modes of light.
Springer, 2014.
rosenblum2018cnot
Serge Rosenblum, Yvonne Y Gao, Philip Reinhold, Chen Wang, Christopher J
Axline, Luigi Frunzio, Steven M Girvin, Liang Jiang, Mazyar Mirrahimi,
Michel H Devoret, et al.
A cnot gate between multiphoton qubits encoded in two cavities.
Nature communications, 9(1):652, 2018.
PhysRevX.12.011008
Kevin Reuer, Jean-Claude Besse, Lucien Wernli, Paul Magnard, Philipp Kurpiers,
Graham J. Norris, Andreas Wallraff, and Christopher Eichler.
Realization of a universal quantum gate set for itinerant microwave
photons.
Phys. Rev. X, 12:011008, Jan 2022.
krantz2019quantum
Philip Krantz, Morten Kjaergaard, Fei Yan, Terry P Orlando, Simon Gustavsson,
and William D Oliver.
A quantum engineer's guide to superconducting qubits.
Applied physics reviews, 6(2), 2019.
shapiro2009quantum
Jeffrey H Shapiro and Seth Lloyd.
Quantum illumination versus coherent-state target detection.
New Journal of Physics, 11(6):063045, 2009.
lanzagorta2011quantum
Marco Lanzagorta.
Quantum radar.
Synthesis Lectures on Quantum Computing, 3(1):1–139, 2011.
jantti2020quantum
Riku Jantti, Ruifeng Duan, Jari Lietzen, Hany Khalifa, and Lajos Hanzo.
Quantum-enhanced microwave backscattering communications.
IEEE Communications Magazine, 58(1):80–85, 2020.
esposito2022observation
Martina Esposito, Arpit Ranadive, Luca Planat, Sébastien Leger, Dorian
Fraudet, Vincent Jouanny, Olivier Buisson, Wiebke Guichard, Cécile Naud,
José Aumentado, et al.
Observation of two-mode squeezing in a traveling wave parametric
amplifier.
Physical Review Letters, 128(15):153603, 2022.
perelshtein2022broadband
MR Perelshtein, KV Petrovnin, Visa Vesterinen, S Hamedani Raja, Ilari Lilja,
Marco Will, Alexander Savin, Slawomir Simbierowicz, RN Jabdaraghi,
JS Lehtinen, et al.
Broadband continuous-variable entanglement generation using a
kerr-free josephson metamaterial.
Physical Review Applied, 18(2):024063, 2022.
petrovnin2023generation
Kirill Viktorovich Petrovnin, Michael Romanovich Perelshtein, Tero Korkalainen,
Visa Vesterinen, Ilari Lilja, Gheorghe Sorin Paraoanu, and Pertti Juhani
Hakonen.
Generation and structuring of multipartite entanglement in a
josephson parametric system.
Advanced Quantum Technologies, 6(1):2200031, 2023.
PhysRevLett.109.183901
E. Flurin, N. Roch, F. Mallet, M. H. Devoret, and B. Huard.
Generating entangled microwave radiation over two transmission lines.
Phys. Rev. Lett., 109:183901, Oct 2012.
abdo2013nondegenerate
Baleegh Abdo, Archana Kamal, and Michel Devoret.
Nondegenerate three-wave mixing with the josephson ring modulator.
Physical Review B, 87(1):014508, 2013.
flurin2015superconducting
Emmanuel Flurin, Nicolas Roch, Jean-Damien Pillet, François Mallet, and
Benjamin Huard.
Superconducting quantum node for entanglement and storage of
microwave radiation.
Physical review letters, 114(9):090503, 2015.
moiseev2018broadband
SA Moiseev, KI Gerasimov, RR Latypov, NS Perminov, KV Petrovnin, and
ON Sherstyukov.
Broadband multiresonator quantum memory-interface.
Scientific reports, 8(1):3982, 2018.
yuen1980optical
Horace Yuen and J Shapiro.
Optical communication with two-photon coherent states–part iii:
Quantum measurements realizable with photoemissive detectors.
IEEE Transactions on Information Theory, 26(1):78–92, 1980.
schumaker1984noise
Bonny L Schumaker.
Noise in homodyne detection.
Optics letters, 9(5):189–191, 1984.
knill2001scheme
Emanuel Knill, Raymond Laflamme, and Gerald J Milburn.
A scheme for efficient quantum computation with linear optics.
nature, 409(6816):46–52, 2001.
karsa2022noiseless
Athena Karsa, Masoud Ghalaii, and Stefano Pirandola.
Noiseless linear amplification in quantum target detection using
gaussian states.
Quantum Science and Technology, 7(3):035026, 2022.
shapiro2020quantum
Jeffrey H Shapiro.
The quantum illumination story.
IEEE Aerospace and Electronic Systems Magazine, 35(4):8–20,
2020.
barzanjeh2020microwave
Shabir Barzanjeh, Stefano Pirandola, David Vitali, and Johannes M Fink.
Microwave quantum illumination using a digital receiver.
Science advances, 6(19):eabb0451, 2020.
caves1980measurement
Carlton M Caves, Kip S Thorne, Ronald WP Drever, Vernon D Sandberg, and Mark
Zimmermann.
On the measurement of a weak classical force coupled to a
quantum-mechanical oscillator. i. issues of principle.
Reviews of Modern Physics, 52(2):341, 1980.
desurvire1995erbium
Emmanuel Desurvire and Michael N Zervas.
Erbium-doped fiber amplifiers: principles and applications.
Physics Today, 48(2):56, 1995.
gallager2013stochastic
Robert G Gallager.
Stochastic processes: theory for applications.
Cambridge University Press, 2013.
agrawal2012fiber
Govind P Agrawal.
Fiber-optic communication systems.
John Wiley & Sons, 2012.
hoffmann2010superconducting
E Hoffmann, F Deppe, T Niemczyk, T Wirth, EP Menzel, G Wild, H Huebl,
M Mariantoni, T Weißl, A Lukashenko, et al.
A superconducting 180° hybrid ring coupler for circuit quantum
electrodynamics.
Applied Physics Letters, 97(22):222508, 2010.
lu2023high
Yao Lu, Aniket Maiti, John WO Garmon, Suhas Ganjam, Yaxing Zhang, Jahan Claes,
Luigi Frunzio, SM Girvin, and Robert J Schoelkopf.
A high-fidelity microwave beamsplitter with a parity-protected
converter.
arXiv preprint arXiv:2303.00959, 2023.
tan2017quantum
Kuan Yen Tan, Matti Partanen, Russell E Lake, Joonas Govenius, Shumpei Masuda,
and Mikko Möttönen.
Quantum-circuit refrigerator.
Nature communications, 8(1):15189, 2017.
lee2020graphene
Gil-Ho Lee, Dmitri K Efetov, Woochan Jung, Leonardo Ranzani, Evan D Walsh,
Thomas A Ohki, Takashi Taniguchi, Kenji Watanabe, Philip Kim, Dirk Englund,
et al.
Graphene-based josephson junction microwave bolometer.
Nature, 586(7827):42–46, 2020.
walsh2017graphene
Evan D Walsh, Dmitri K Efetov, Gil-Ho Lee, Mikkel Heuck, Jesse Crossno,
Thomas A Ohki, Philip Kim, Dirk Englund, and Kin Chung Fong.
Graphene-based josephson-junction single-photon detector.
Physical Review Applied, 8(2):024022, 2017.
kokkoniemi2020bolometer
Roope Kokkoniemi, J-P Girard, Dibyendu Hazra, Antti Laitinen, Joonas Govenius,
RE Lake, Iiro Sallinen, Visa Vesterinen, Matti Partanen, JY Tan, et al.
Bolometer operating at the threshold for circuit quantum
electrodynamics.
Nature, 586(7827):47–51, 2020.
qiu2023broadband
Jack Y Qiu, Arne Grimsmo, Kaidong Peng, Bharath Kannan, Benjamin Lienhard,
Youngkyu Sung, Philip Krantz, Vladimir Bolkhovsky, Greg Calusine, David Kim,
et al.
Broadband squeezed microwaves and amplification with a josephson
travelling-wave parametric amplifier.
Nature Physics, pages 1–8, 2023.
fedorov2016displacement
Kirill G Fedorov, L Zhong, S Pogorzalek, P Eder, M Fischer, J Goetz, E Xie,
F Wulschner, K Inomata, T Yamamoto, et al.
Displacement of propagating squeezed microwave states.
Physical review letters, 117(2):020502, 2016.
Hany Khalifa (hany.khalifa@aalto.fi) received his M.Sc. from Ain Shams University, Faculty of Engineering, Cairo, Egypt in 2018. Currently he is finalizing his doctoral studies at Aalto University. His research interests include quantum optics, quantum information theory and quantum communications.
Kirill Petrovnin (kirill.petrovnin@aalto.fi) received his PhD (2019) in Radiophysics and Optics from Kazan Federal University, Russia. He is currently a postdoctoral researcher in the Department of Applied Physics at Aalto University, Finland. His research focuses mainly on continuous-variable entanglement, sensing and information protocols achievable with superconducting quantum parametric devices operating at microwave frequencies.
Riku Jäntti [SM'07] (riku.jantti@aalto.fi) received his M.Sc. degree (with distinction) in electrical engineering in 1997 and his D.Sc. degree (with distinction) in automation and systems technology in 2001, both from Helsinki University of Technology (now Aalto), Finland. He is a full professor of communications engineering and the head of the Department of Communications and Networking at Aalto University School of Electrical Engineering, Finland. He is an associate editor of IEEE Transactions on Vehicular Technology. He is also IEEE VTS Distinguished Lecturer (Class 2016). His research interests include radio resource control and optimization for machine-type communications, cloud-based radio access networks, spectrum and coexistence management, and quantum communications.
G.S. Paraoanu (sorin.paraoanu@aalto.fi) did his B.Sc. and M.Sc. in physics at the University of Bucharest, Romania, followed by a Ph.D. in theoretical physics (2001) from the University of Illinois at Urbana-Champaign, U.S.A. He then went for a postdoc at the University of Jyväskylä, Finland, followed by several academic positions (Marie Curie scholarship, senior research scientist, Academy of Finland Research Fellow). He also had visiting positions in Austria and Switzerland. He is presently at Aalto University in Finland (Senior University Lecturer), where he leads a group specialized in superconducting quantum circuits.
|
http://arxiv.org/abs/2307.01713v1
|
20230704133156
|
Logic meets Wigner's Friend (and their Friends)
|
[
"Alexandru Baltag",
"Sonja Smets"
] |
quant-ph
|
[
"quant-ph",
"math.LO",
"81P10",
"F.4"
] |
Logic meets Wigner's Friend (and their Friends)
Alexandru Baltag (thealexandrubaltag@gmail.com) and Sonja Smets (s.j.l.smets@uva.nl)
These authors contributed equally to this work.
ILLC, University of Amsterdam, Science Park 107, Amsterdam, 1098 XG, the Netherlands
We take a fresh look at Wigner's Friend thought-experiment and some of its more recent variants and extensions, such as the Frauchiger-Renner (FR) Paradox. We discuss various solutions proposed in the literature, focusing on a few questions: what is the correct epistemic interpretation of the multiplicity of state assignments in these scenarios? Under which conditions can one include classical observers into the quantum state descriptions, in a way that is still compatible with traditional Quantum Mechanics? Under which conditions can one system be admitted as an additional `observer' from the perspective of another background observer? When can the standard axioms of multi-agent Epistemic Logic (that allow “knowledge transfer” between agents) be applied to quantum-physical observers? In the last part of the paper, we propose a new answer to these questions, sketch a particular formal implementation of this answer, and apply it to obtain a principled solution to Wigner Friend-type paradoxes.
§ INTRODUCTION
In this paper we focus on Wigner's Friend thought-experiment <cit.>, and the proposed variations <cit.> and extensions such as the Frauchiger-Renner (FR) Paradox <cit.>, which have recently shaken up the debate in the foundations of quantum theory. Such thought experiments seem to indicate that, if quantum theory is assumed to be universally valid (and hence can be applied to complex systems that are composed of quantum systems as well as their classical observers), then different agents are rationally entitled to ascribe different (pure) states to the same system, and as a result they cannot share their information in a consistent manner. More precisely, the result in <cit.> is stated in the format of a no-go theorem: any theory which satisfies conditions (Q) (the universal validity of quantum theory), (C) (the consistency between agents about their predictions of measurement outcomes), and (S) (the unique outcomes of measurements) will lead to a contradiction.
The authors in <cit.> conclude that one cannot consistently maintain all three conditions (S, C and Q), and hence that `quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner'. For some, the result seems to point towards subjectivist or relational interpretations of quantum mechanics, in which the state of a given system is observer-dependent, and thus in some sense “unreal”.
According to the subjectivist (QBist) view, a quantum state does not represent any objective fact of the world, or any true (factive) information that can be possessed by an agent (`knowledge'), but it simply captures the agent's subjective probabilistic beliefs about the possible outcomes of her measurements. These beliefs do not have any truth value, they are not right or wrong, they are just (in some sense) `rational' predictions with no objective content. Moreover, the different observers simply cannot incorporate each other's information in a consistent picture: they are condemned to “agree to disagree”, on pain of contradiction. They seem to live in different universes, and no amount of communication can heal their disagreement!
The relational view <cit.> is more nuanced, as it aims at keeping a form of realism. State descriptions are now taken to reflect true `facts' of the world, though of a relative nature: they are interactive properties, that a system S possesses only in relation to another system O. Such binary, interactive properties are no less objective or `factual' than the classical unary properties. There is nothing inherently subjective about them, since the distinction between observed system and observer has nothing to do with subjectivity or beliefs, and it is just a methodological convention needed to describe the world from a situated perspective; in Relational Quantum Mechanics, there is no “view from nowhere”. But the distinction between S and O is arbitrary and the roles can be reversed: any system can play the role of `observer', in the same way that any system of coordinates can play the role of a reference system in Relativity Theory.
In another series of papers <cit.>, the authors analyze condition (C) in the language of modal logic, so that it can be further specified in relation to (Q) and (S). The use of a precise logical language can indeed reveal insights into which parts of the paradox can lead to an inconsistency. Of course, if the set of assumptions one starts from is inconsistent, then reasoning about them on the basis of a consistent set of axioms will still lead to a contradiction. In the case of the FR paradox scenario, it is clear that one can reason about each of the assumptions (C), (Q) and (S) separately in a consistent way, while the problem appears when we merge (C), (Q) and (S) and their underlying axiom systems together. None of the logic-based approaches in the literature actually shows what an adequate constraint on the merged axiom systems should be in order to obtain a consistent set of axioms under which we can reason about all assumptions together. Simply denying the validity of the basic axioms of Epistemic Logic (as suggested in <cit.>), just because they imply condition (C), seems to us a rather ad hoc and unhelpful approach, since it begs the question: what other epistemic principles are we supposed to use when talking about quantum agents? And why do the standard postulates of Epistemic Logic seem to work for reasoning about practically all multi-agent protocols used in (classical or quantum) computation and communication?
In this paper we assume the standard Hilbert-space formalism of quantum theory (as formalized by von Neumann in <cit.>), as well as the common-sense use of higher-level reasoning of classical agents about each other (as formalized in basic Epistemic Logic <cit.>). But we aim to provide a clear-cut criterion for when such epistemic reasoning is applicable to `agents' that are quantum-physical systems. This requires us to address some related questions:
first, what is the correct epistemic interpretation of the multiplicity of state assignments? And second, under which conditions can one actually include classical observers in the quantum state descriptions in a way that is still compatible with traditional Quantum Mechanics? As we show in this paper, once these questions are answered, the issue of communication and agreement between different observers can be easily addressed.
To answer these questions, we will need to clarify a topic that the standard formalism of quantum mechanics has left open: a precise specification of what counts as an `observer' and what counts as an `observed system' in scenarios where we have more than one observer. In the case of modelling the quantum observation performed by a single classical observer, both Heisenberg <cit.> and von Neumann <cit.> have indicated that it is necessary to start by specifying a clear dividing line, a cut c_SO, between what counts as the system S under observation and what counts as the `observer' O.
The exact place of the cut has been a topic of debate in the foundations of quantum theory; it is a problem that has been referred to as the “shifty split” problem by J.S. Bell. Von Neumann specifies that the cut c_SO can, at least to a very large extent, be arbitrarily chosen, and hence that the system under observation could well include the measurement apparatus or even the chemical processes that happen in one's brain. Similarly, Heisenberg indicates that the cut is movable and can be “shifted arbitrarily far in the direction of the observer in the region that is otherwise described according to the laws of classical physics” <cit.>. The question Heisenberg is concerned with is to show that “the quantum mechanical predictions about the outcome of an arbitrary experiment are independent of the location of the cut...” <cit.>.
This seems to work fine as long as we consider only one observer, but Wigner's Friend scenarios require us to reason about the cuts that are tied to different observers. How should we reason about another observer's measurement results? In other words, given a specific cut c_SO, when is an observer O entitled to admit some (other part of the) observed system S as an additional observer O' (who comes with her own cut c'_O'S'), in a way that is still consistent with the predictions made by standard quantum theory?[We refer to <cit.> for an overview of how different interpretations of quantum mechanics treat the cut between an observer and a system under observation, where in particular the neo-Copenhagen interpretation can be aligned with the view that takes the cut to be subjective, so that each observer induces her own cut.] Without first answering this question, it does not even make sense to talk about higher-order reasoning (of observers about other observers): such a specification (of when another physical system can count as an observer for us) needs to be checked before we can proceed to reason about other systems as observers.
The solution we propose in this paper can be understood as extending the relational view on Quantum Mechanics to the property of “being an observer”, which becomes itself relativized. More specifically, we define a relational notion of admissible observer (that is always relative both to another background observer O and to a given history h or protocol π). Roughly speaking, a system A is an “admissible observer” with respect to O and h if and only if it is always possible (as far as the background observer O can know) that none of the information carried by A will be fully erased from the universe at any moment of the given history h. This can happen either because A's memory stays intact until the end of h, or because the information keeps “leaking” out of A, e.g. in the form of copies or records that are being disseminated and copied again before they can be destroyed.[This second case is actually the most general one, since it subsumes the first case: A can, in a trivial sense, be said to be a record of itself.]
A further step is to require mutuality of this relationship: a family of subsystems forms a “community of admissible observers” if each of them is an admissible observer with respect to all the others. Full higher-order epistemic reasoning (of observers reasoning about other observers etc) is consistently applicable only within such communities of admissible observers.
Typical Wigner's Friend-type scenarios break these conditions: some of the supposed “agents” (such as the Friend in the box) are not admissible observers from the perspective of the other agents (such as Wigner), precisely because the latter get to know that the information carried by the former is being completely erased from the universe. In contrast, in the epistemic scenarios that underlie standard multi-agent protocols in Quantum Information and Quantum Computation theory, one can claim that the conditions required for mutual admissibility as observers are satisfied.[This is primarily because large macroscopic systems are inherently “leaky”, spontaneously disseminating information into their environment. And secondarily, because real `agents' typically do make multiple copies of their information, and tend to save them or leak them into places that are not under the full control of other agents (and maybe even beyond their own control). It is thus practically impossible for an agent to be absolutely sure that all traces of another agent's information have been fully erased from the universe.]
We organize this paper as follows. In the next section, we first introduce Wigner's Friend paradox and its extensions, and in section <ref> we briefly sketch how in certain conditions the paradoxes can be avoided by allowing agents to “leak” their information into remote places that are beyond the other agents' control. In subsection <ref> we look at the stronger FR Paradox, and in subsection <ref> we go over some of the solutions and interpretations of this thought experiment proposed in the literature. In section <ref>, we sketch a formalization of our proposed solution and we apply it to several versions of the above-mentioned paradoxes. We end, in section <ref>, with some conclusions and further reflections.
§ INTRODUCING WIGNER'S FRIEND(S)
Wigner's Friend thought experiment <cit.>, as well as the proposed variations and extensions by Deutsch <cit.> and in particular the Frauchiger-Renner (FR) Paradox <cit.>, aim to show that if quantum theory is assumed to be universally valid (and hence can be applied to complex systems that are composed of quantum systems as well as their classical observers), then different agents are rationally entitled to ascribe different (pure) states to the same system, and as a result they cannot share their information in a consistent manner.
The latest result in this series is the FR paradox, which takes the format of a no-go theorem: any theory which satisfies conditions (Q) (the universal validity of quantum theory), (C) (the consistency between agents about their predictions of measurement outcomes), and (S) (the unique outcomes of measurements) will lead to a contradiction. It is crucial that these assumptions are made fully explicit, so we provide our reading of them as follows.
Assumption (S): Quantum mechanics in von Neumann's formalisation describes external systems S from the perspective of an observer O. When O interacts with S, he describes this as a state collapse, with a unique measurement result. According to von Neumann, “we are obliged always to divide the world into two parts, the one being the observed system, the other the observer. In the former we can follow all physical processes (in principle at least) arbitrarily precisely. In the latter, this is meaningless. The boundary between the two is arbitrary to a very large extent.” <cit.>. In this view, the “cut” between the observer O and the observed system is taken to be relative, and its exact placement irrelevant. Indeed every system that can record a result, be it a measurement apparatus or the retina of an observer, can be treated as an `observer'.
Assumption (Q): Following <cit.>, given an external system S and observer O, then, in the absence of interaction with O, O can describe S using unitary evolutions U (possibly applied to a larger supersystem S'⊇ S, obtained by taking a tensor product), then possibly applying a partial trace (to represent the state of subsystem S).
By adopting the `external' perspective of another observer far enough removed from any given interaction between O and S, one can treat every apparent measurement-collapse as a unitary evolution (sometimes combined with taking a partial trace).
Assumption (C): The descriptions obtained by using the standard formalism of quantum mechanics by different `observers', are mutually consistent in some sense, and can thus be shared (by communication) without contradictions. Even in the absence of communication, observers can still put themselves in the shoes of other observers, reason counterfactually (including about themselves) from such an external perspective, and in this process use freely their own current private information, with no restrictions and no danger of contradictions.
In this explicit form and under these assumptions, the “FR paradox” is just a theorem; there is no arguing with it. So the remaining questions we are concerned with are: (a) which assumption(s) are subject to constraints or must be given up, and why? (b) how can we explain the apparent validity of all the above assumptions in “standard” contexts (e.g. quantum information protocols involving multiple agents, the non-contradictory nature of science as a social activity based on communication and agreement of multiple scientists, etc.)?
§.§ Wigner's Friend thought experiment
The original version of Wigner's Friend paradox (as depicted in Figure 1) assumes an isolated lab L, within which there is a quantum system S and a friend F, who can perform measurements on S. Wigner W is located outside the lab, where he can perform measurements on the entire lab L consisting of S and F. Both F and W are treated as different `observers'. The following scenario ensues.
Epistemic-Quantum Scenario 1. The experiment starts by assuming that it is common knowledge[Common knowledge is one of the strongest epistemic requirements that one can impose on the members of a group: it requires an infinite series of higher-order levels of knowledge about a fact to hold (i.e. p holds, and everybody knows it, and everybody knows that everybody knows it, and so on ad infinitum). For details see a standard textbook such as <cit.>.] that a quantum system S is initially in the superposition state |+⟩_S:= 1/√(2) (|0⟩_S+ |1⟩_S). Inside the Lab L=S+F, the Friend F measures system S in the standard basis {|0⟩_S,|1⟩_S}, recording either the outcome a=0 for state |0⟩_S or a=1 for state |1⟩_S. Suppose the actual outcome is a=0. Looking now at Wigner's perspective, W describes the entire lab-experiment as a unitary transformation U that entangles S and F. Setting |fail⟩_L:=1/√(2)(|0⟩_S⊗|0⟩_F + |1⟩_S ⊗ |1⟩_F), and
|ok⟩_L:=1/√(2)(|0⟩_S⊗|0⟩_F - |1⟩_S ⊗ |1⟩_F), W computes the result of U to be |fail⟩_L.
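To make W's unitary description concrete, here is a minimal numerical sketch (our own illustration; we model U as a CNOT that copies S into F's memory register, one standard way of modelling a measurement unitarily):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

# U: CNOT with S as control and F's memory as target
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

psi0 = np.kron(plus, ket0)        # |+>_S (x) |0>_F before the measurement
psi1 = U @ psi0
fail = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.allclose(psi1, fail))    # True: W assigns |fail>_L to the lab
```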
Reasoning from different perspectives. The problem in scenario 1 becomes visible when the observers assign different descriptions to the state of S and make different predictions. Indeed, at the end of the scenario, W and F have the following different descriptions of the system S: F assigns to S the pure state |0⟩_S; while W assigns to S the mixed state {|0⟩_S: 1/2, |1⟩_S: 1/2}, obtained by tracing out F in the density-operator description of L:
Tr_F (ρ_L):= Tr_F (|fail⟩_L ⟨ fail|_L)= 1/2 |0⟩_S ⟨ 0|_S +1/2 |1⟩_S⟨ 1|_S.
These different descriptions by F and W lead to different predictions: F predicts that any new measurement of S in the standard basis will yield outcome |0⟩_S; while W assigns equal probabilities to the outcomes |0⟩_S and |1⟩_S.
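The partial trace above, and the resulting difference in predictions, can be checked in a few lines (a sketch continuing the conventions of the previous snippet):

```python
import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
fail = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_L = np.outer(fail, fail)

# partial trace over F: reshape to indices (s, f, s', f'), sum over f = f'
rho_S = rho_L.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_S)    # [[0.5, 0], [0, 0.5]]: W's maximally mixed description of S
```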
Whose predictions are `right': F's or W's?
Compatible State Descriptions. First we note that, in a sense, the descriptions of F and W at the end of scenario 1 are compatible with each other, as one is a (more informative) refinement of the other. Indeed, the probabilistic assignment {|0⟩_S: 1} can be obtained from the assignment {|0⟩_S: 1/2, |1⟩_S: 1/2} by applying standard Bayesian conditioning.
More generally, Leifer and Spekkens <cit.> define two density operators (“mixed states”) to be compatible if they have a common refinement (obtainable by conditioning each of the two assignments); equivalently, iff they can both be obtained (by conditioning) from a common prior assignment. In particular, density operators corresponding to pure states (“maximally informative” descriptions) are compatible iff they are equal.
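In the simplest (commuting, diagonal) case this compatibility notion reduces to ordinary Bayesian conditioning, as the toy sketch below illustrates (our own illustration):

```python
import numpy as np

rho_W = np.diag([0.5, 0.5])      # W's description of S
rho_F = np.diag([1.0, 0.0])      # F's description of S

# Bayesian conditioning of rho_W on outcome |0>: P rho P / tr(P rho)
P0 = np.diag([1.0, 0.0])
refined = P0 @ rho_W @ P0 / np.trace(P0 @ rho_W)
print(np.allclose(refined, rho_F))   # True: rho_F refines rho_W
```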
So assuming in scenario 1 that the probabilistic assignment {|0⟩_S: 1} can be obtained from the assignment {|0⟩_S: 1/2, |1⟩_S: 1/2}, we can conclude that F's description is more informative than W's. Hence in a sense both W and F can be “right”, though F has more information about S than W. This claim can be “confirmed” by communication: suppose F announces to W the outcome a=0 of his measurement. From W's perspective, this can be interpreted as a measurement by him of F's state |0⟩_F, which collapses the state
|fail⟩_L=1/√(2)(|0⟩_S⊗|0⟩_F + |1⟩_S ⊗ |1⟩_F) into the state |0⟩_S⊗ |0⟩_F. The two now agree, and W has adopted F's description of S.
The above argument seems to indicate that F really has more information about S than W! After all if F were to announce his measurement outcome to W then they would agree. But is this argument always applicable?
Incompatible State Descriptions. In the above analysis of scenario 1 we pointed out that F's description is more informative than W's description, if we are concerned with the question: what will be the outcome of a measurement of S in the standard basis?
But what if we are concerned with predicting the outcome of measurements of the full lab L=S+F? In that case we reach a problem because F cannot even represent a measurement of L which includes himself.
In general, Quantum Mechanics does not seem to provide a way for observers to describe themselves (or any supersystem that includes themselves).
To still reason about the lab L that contains himself, agent F could try to adopt an “external” (counterfactual) view of himself, by asking what state W (or any other external observer) would assign to the lab L if he (F) communicated to him all he knows.
As we saw, after such a communication, W would describe L as being in state |0⟩_S⊗ |0⟩_F. The same applies to any external observer who'd have access to the information of both W and F.
Note that, to do this, no actual communication is necessary: F can just imagine this communication as a counterfactual possibility, and conclude that the state of his lab L is |0⟩_S⊗ |0⟩_F.
Since no communication actually happens, W still assigns the state |fail⟩_L to the lab L.
We seem to have reached a contradiction. The two descriptions |0⟩_S⊗ |0⟩_F (as provided by F when he reasons from W's perspective and takes his own information into account) and |fail⟩_L (as provided by W) are both pure states, thus they are incompatible in the sense of being mutually exclusive. The obtained descriptions cannot both represent the true state, since they do not have a common refinement, and they lead to incompatible predictions.
One could in fact claim that, in some sense, each of the agents has more information than the other with respect to some potential measurement.
To recap, if the lab L is measured in the standard basis, then F's description predicts outcome |00⟩_L with probability 1; while
W's description gives equal probabilities 1/2 to outcomes |00⟩_L and |11⟩_L. So F has more information than W about standard-basis measurements of L.
However, if the lab L is measured in any basis that includes |fail⟩_L and |ok⟩_L, then W's description predicts outcome |fail⟩_L with probability 1; while F gives equal probabilities 1/2 to outcomes |fail⟩_L and |ok⟩_L.
So one could claim that W has more information than F about the Bell-basis measurements of L.
So, who is `right'? Can both agents be right? This time, quantum mechanics provides no way to combine the information of the two observers: there is no common refinement to any (pure or mixed) state.
Reasoning about Other Observers in Quantum Mechanics. The lesson we draw from the analysis of scenario 1 is that reasoning about other agents is problematic in quantum mechanics. It seems that if F adopts an `external' perspective on himself while combining this view with his own information, then the two agents' descriptions are directly contradictory!
But note that this contradiction is not “centralized” in any one agent: no single observer possesses all the information needed to derive the contradiction. Each of the descriptions is still “locally consistent”. Also, note that an actual measurement by W (or anybody else) of the lab L in a basis that includes |fail⟩_L and |ok⟩_L would erase the quantum state |0⟩_S, thus physically destroying the information that F possessed about S.
So, when reasoning about measurements of the lab L, we seem to have reached a genuine contradiction. It looks like quantum mechanics doesn't allow observers to adopt another observer's perspective. Or should we just conclude that agents simply cannot reason about actions that erase their own current information? Or maybe they can do that, as long as they abstain from using the erased information? Or maybe even that is OK, as long as the resulting global inconsistency is never “internalized” in any single agent? We come back to such questions in the next sections.
§.§ The Third Observer and the Leaking Lab
One can try to avoid the issue of agents' reasoning about actions that erase their own memory, by delegating this task to a third observer O.
As we'll see below, an agreement about the state-descriptions can be regained if F “leaks” his information out of the lab before it is erased by W, by communicating it to O. To show this we introduce scenario 2.
Scenario 2: Wigner's Friend and an Observer. In this altered scenario, we assume that O starts in a state |0⟩_O. Initially, O is in the same epistemic situation as W is in scenario 1: so O assigns to L the state |fail⟩_L.
At this stage, we have three observers W, F and O: the two external observers (W and O) agree on the lab's state assignment |fail⟩_L, but their state description is incompatible with F's assignment of state |00⟩_L:=|0⟩_S⊗ |0⟩_F to the same lab. These different predictions could in principle be tested against each other by performing a measurement of the lab L in the standard basis (or also in the Bell basis).
However, suppose now that, before any such measurement of the whole lab L can be performed, F communicates the outcome of his measurement a=0 to O. When O receives the message from F, the state assigned to L by O collapses to |00⟩_L. Now O and F agree on their descriptions of the lab (and hence also on their descriptions of system S): they both assign the state |00⟩_L to the lab L, and state |0⟩_S to system S. How about Wigner?
We might think that W (being unaware and unaffected by the communication between F and O) will still assign to L the state |fail⟩_L.
In scenario 2, it seems that we still have obtained a direct contradiction between the descriptions of L by W and O, while avoiding any reasoning about one's memory erasure, or any adoption of another agent's perspective!
Note however that, given scenario 2, this reasoning is actually mistaken: it relies on W's wrong assumption that no communication happens between F and O. If W assigns to the lab the state assignment |fail⟩_L, this simply reflects his false belief that F has not leaked his measurement results to anybody else.
However, if W is a perfectly rational agent, he cannot rely on this false assumption. Even the mere possibility of such a communication should lead W to change his state assignment for L. Indeed, let us suppose that W considers it possible that F is leaking his results to O, but assigns a very low (though non-zero) probability to this eventuality, say, 1% (probability 0.01); hence, with probability 0.99, he thinks no leaking happens. In the case that the communication does happen, W should model this as a unitary operation that entangles L and O, ending with a description of system L+O in the state
|fail⟩_LO:=1/√(2)(|00⟩_L ⊗ |0⟩_O + |11⟩_L⊗ |1⟩_O).
In contrast, in the case that no leaking happens, W should still assign state |fail⟩_L to the lab (and thus assign state |fail⟩_L⊗ |0⟩_O to the system L+O). Overall, given his probabilistic uncertainty about these two cases, W should assign to L+O the mixed state
{ |fail⟩_LO : 1/100, |fail⟩_L⊗ |0⟩_O : 99/100}.
By tracing out O's state (i.e. taking the partial trace Tr_O), we conclude that W's correct description of the lab is given by the mixed state
{|00⟩_L :1/200, |11⟩_L: 1/200, |fail⟩_L: 99/100}.
This description is compatible with both O's and F's descriptions of L as |00⟩_L, it is indeed a less informative description (that can be refined by Bayesian conditioning to |00⟩_L). Now, all agents agree in their predictions concerning any further measurement of the lab!
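This bookkeeping can be verified numerically; the sketch below (our own illustration, using the 1% leak probability assumed above) reconstructs W's reduced description of the lab:

```python
import numpy as np

k0 = np.array([1.0, 0.0]); k1 = np.array([0.0, 1.0])

def ket(*bits):                       # computational basis ket on S, F, O
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, k0 if b == 0 else k1)
    return v

fail_L  = (ket(0, 0) + ket(1, 1)) / np.sqrt(2)         # lab L = S + F
fail_LO = (ket(0, 0, 0) + ket(1, 1, 1)) / np.sqrt(2)   # leak entangles O

rho = (0.01 * np.outer(fail_LO, fail_LO)
       + 0.99 * np.outer(np.kron(fail_L, k0), np.kron(fail_L, k0)))

# trace out O (the third qubit) to get W's description of the lab
rho_L = rho.reshape(4, 2, 4, 2).trace(axis1=1, axis2=3)

rho_expected = (0.005 * np.outer(ket(0, 0), ket(0, 0))
                + 0.005 * np.outer(ket(1, 1), ket(1, 1))
                + 0.99 * np.outer(fail_L, fail_L))
print(np.allclose(rho_L, rho_expected))   # True: the stated mixture
```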
If instead we make the stronger assumption that W knows that the leaking happens (without knowing the communication's actual content, i.e. F's measurement results), then his description gets sharper: since he assigns probability 1 to the first case (in which communication happens), his state assignment for L+O will now be exactly |fail⟩_LO, and thus by tracing out O he will assign to the lab the mixed state
{|00⟩_L :1/2, |11⟩_L: 1/2}.
So we see that, if we allow for the possibility (or alternative, for the certainty) of information `leaking' out of the lab (before it is erased by W and before the lab as a whole is measured), agreement is regained!
As far as W's description of L is concerned, both the above mixed-state assignments for L are compatible with the description that W would give to L if he included F (together with himself) on the observer's side of the `cut' (i.e. if he assumed that, whenever F interacts with a system S, it collapses its state, without W necessarily knowing the actual outcome). Indeed, W's description of L in that case would be
{|00⟩_L :1/2, |11⟩_L: 1/2},
which coincides with his description above in the case that leakage is certain (and is a refinement of his description for the case that leakage is merely possible with some non-zero probability).
It is true that theoretically this agreement is not `perfect': just before the leakage, there is disagreement between the internal observer (F) and the external ones (W and O) concerning L's state. But, if any measurement of the whole lab L can only happen after the leakage happened (with certainty), or at least after it may have happened (as far as the relevant agents know), then this disagreement remains purely `academic' and essentially untestable: the possibility of information leaking saves the appearances. To put it conversely: if the friend F's information is inherently so “leaky”, or if he is inherently so prone to making records of his information and disseminating them far enough beyond W's control, then for all practical purposes W can treat F as a collapsing `observer', just like himself!
One may still object that, even after the leaking happens, there will still be some disagreement at some higher level, when the different observers attempt to describe the super-system L+O. E.g. in the case that W knows that the communication between F and O happened (but doesn't know F's measurement result), we saw that he assigns state |fail⟩_LO to L+O. In contrast, at the same time (after the communication from F to O), both F and O know that the result of F's measurement was |0⟩_S (and know that O got this result); so (by counterfactually taking the point of view of some other observer external to themselves) they could describe the super-system L+O as being in state |00⟩_L⊗ |0⟩_O. We again obtain incompatible predictions, this time referring to the results of possible measurements of L+O. However, this disagreement also remains `academic' and untestable, as long as no measurement of the whole L+O can happen, e.g. because this super-system cannot be encapsulated, but remains open and beyond our agents' control; which means that, even if and when such a global measurement of L+O would eventually happen, further leaking (outside L+O) may have happened in the meantime (or at least this possibility cannot be excluded by then).
§.§ FR thought experiment
As we saw, all the above paradoxes involve the physical erasure of the information possessed by one of the agents involved in each scenario. As we also noticed (and as previously noted by other authors <cit.>), these scenarios stop being `paradoxical' if, before any such erasure takes place, the relevant information has been copied, recorded or “leaked” in some remote place (that is not under other agents' control). In fact, as we will show, the mere epistemic possibility of such leakage is enough to solve the paradox: as long as nobody knows for a fact that the relevant information has been completely erased from the universe, no paradox will occur! In section <ref>, we will take this observation as the basis of our proposed solution of the paradoxes.
But before doing this, we first have to consider a version of these paradoxes that may at first sight seem to provide a pre-emptive objection to this solution: what if we manage to get the contradiction while apparently still preserving the observed information somewhere else, e.g. in another observer who keeps a record of it? This is exactly the claim involved in the recent strengthening of Wigner's Friend proposed by Frauchiger and Renner (the FR Paradox, 2018). In addition, what if at the same time we avoid any need for internalizing another agent's perspective, by reasoning strictly from one agent's perspective (as in <cit.>)?
The FR thought experiment, and its subsequent variants, were designed precisely in order to make explicit these issues and explore their puzzling implications.
Scenario 3 (Frauchiger and Renner 2918) This time we have two labs L and L, two outside observers (-two `Wigners' W and W), and two `Friends' (F and F) inside their corresponding labs. In addition, each lab will contain an observed subsystem: a quantum coin R inside L, and a particle spin S (to be sent as a signal by F and received by F inside L). The protocol starts with the following events in lab L:
* At t_0: F̄ takes a quantum coin R in the state √(1/3)|h⟩_R + √(2/3)|t⟩_R and measures it in the {|h⟩_R,|t⟩_R} basis. F̄ outputs r=heads or r=tails with probabilities 1/3 and 2/3 respectively, inside lab L̄.
* Still at t_0: F̄ sets the spin S of a particle to |↓⟩_S if r=heads and to the superposition state |→⟩_S = √(1/2)(|↓⟩_S + |↑⟩_S) if r=tails. Next, she sends S to F.
The protocol continues with the following events in lab L:
* At t_1: F measures S in the basis {|↓⟩_S,|↑⟩_S} and records the outcome.
Next we return to the actions of the outside observers, starting with lab L̄:
* At t_2: W̄ measures the lab L̄={R,F̄} w.r.t. the basis containing |ok⟩_L̄
with
|ok⟩_L̄=
√(1/2)(|h⟩_R|h⟩_F̄ - |t⟩_R|t⟩_F̄)
He announces ok if the outcome associated to this vector occurs; else, he announces fail.
The protocol continues with W observing the second lab L:
* At t_3: W measures lab L={S,F} w.r.t. a basis containing |ok⟩_L,
where
|ok⟩_L = √(1/2)(|↓⟩_S|↓⟩_F - |↑⟩_S|↑⟩_F)
He publicly announces ok if the outcome associated to
this vector occurs; else, he announces fail.
If W̄ gets outcome ok and W gets ok, then the experiment is halted; else the protocol repeats.
From the modeler's perspective, when using quantum mechanics, the halting condition is reached with probability 1/12. However, epistemic reasoning indicates that the halting condition can never be reached, and hence we obtain a direct contradiction. This is the summary of the epistemic reasoning steps used in the main FR argument, as described by Waaijer and van Neerven <cit.>: from W and W̄'s perspective, if W̄ reaches `ok' then in the final step W is certain that he has to announce `fail':
(i) If F̄ observes `tail' at t_0, she prepares S in the superposition state and sends it to F. She also infers that W will announce `fail'.
(ii) If F measures `up' at t_1, she is certain that F̄ must have measured `tail' at t_0. Because of (i) and the hypothesis (C), F is therefore certain that W will announce `fail'.
(iii) If W̄ measures `ok' at t_2, he infers that F must have measured the spin to be `up' at t_1. Because of (ii), W̄ is then certain that F is certain that W will
announce `fail'. Hence by (C), W̄ is certain that W will announce `fail'.
(iv) If W hears W̄'s announcement of `ok' at t_2, then by (iii) W is certain that W̄ is certain that W will announce `fail'. Hence by (C), W is certain to announce `fail'.
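The quantum-mechanical side of this contradiction is easy to check directly. The following minimal numpy sketch (an illustration added here, not part of the FR argument; variable names are ours) builds the pre-measurement state of R, F̄, S, F and verifies that the halting outcome (ok, ok) indeed has probability 1/12:

import numpy as np

h, t = np.array([1., 0.]), np.array([0., 1.])    # coin basis; also Fbar's memory
dn, up = np.array([1., 0.]), np.array([0., 1.])  # spin basis; also F's memory

def kron(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# the unitary account of steps t_0 and t_1 leaves R, Fbar, S, F in
# sqrt(1/3)(|h,h,dn,dn> + |t,t,dn,dn> + |t,t,up,up>)
psi = (kron(h, h, dn, dn) + kron(t, t, dn, dn) + kron(t, t, up, up)) / np.sqrt(3)

ok_Lbar = (kron(h, h) - kron(t, t)) / np.sqrt(2)   # Wbar's |ok> on Lbar = {R, Fbar}
ok_L = (kron(dn, dn) - kron(up, up)) / np.sqrt(2)  # W's |ok> on L = {S, F}

amp = np.vdot(kron(ok_Lbar, ok_L), psi)
print(abs(amp) ** 2)    # 0.08333... = 1/12: the halting condition is reached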
§.§ Approaches to the FR Paradox
Several authors (see e.g. <cit.>) conclude that quantum mechanics cannot consistently describe one observer's reasoning about another observer, at least not in the way in which common sense (and standard epistemic logic) does. Hence, some of these authors question condition (C), i.e. the assumption that the agents' descriptions/predictions are mutually consistent. These doubts can be extended to one or another of the formal conditions underlying (C), depending on which of them are deemed blameworthy by each author.
A possible culprit: knowledge transfer?
Nurgalieva and del Rio <cit.> make use of epistemic logic, based on the standard possible-worlds semantics, to analyze the agents' reasoning in the FR paradox, and blame the paradox on what they call the `Trust' postulate. A better name for this principle is `Knowledge Transfer', since it asserts that nested, higher-order knowledge (about another agent's knowledge) allows the transfer of lower-order knowledge between agents. If we read K_Oφ as “observer O knows proposition φ” (in the sense of ascribing probability 1 to φ), then this principle can be formalized as the claim that the following implication is valid (universally true):
(∗) K_O K_O'φ⇒ K_O φ.
Nurgalieva and del Rio correctly point out that (∗) follows trivially from the standard axioms of Epistemic Logic, in particular from
the principles of Factivity[Factivity, also known as `Veracity', is a basic epistemic postulate, that incorporates the main feature of knowledge, and its main difference from mere belief: known propositions are true, i.e. in agreement with the facts.] and Monotonicity[In normal modal-epistemic logic, Monotonicity is usually derived from the rule of Necessitation and Kripke's axiom of Distributivity.]:
(Factivity) K_O φ⇒φ
(Monotonicity) from φ⇒ψ, infer K_O φ⇒ K_O ψ
Indeed, Factivity for O' gives K_O'φ⇒φ, and applying Monotonicity to this implication yields K_O K_O'φ⇒ K_O φ, which is exactly (∗). Since they blame the paradox on the Knowledge Transfer principle (∗), these authors are led to question the basic assumptions of epistemic logic.
Waaijer and van Neerven in <cit.> similarly reject the idea of `promoting one agent's certainty to another agent's certainty' but add a condition that this rejection applies only when the fact in question `cannot be validated by records from the past'. Laloë <cit.> refers to a similar idea, suggested by P. Grangier, of finding a way to protect the obtained measurement information from perturbations created by other observers. This can be done by adding a secret qubit to the experiment, or a `friend of the friend', via which the friend can store the result of his measurement.
The idea of weakening condition (C), by requiring the existence of a persistent record or by letting the information escape outside the lab,
will
form an important ingredient of our own proposed solution below. Yet as we'll see in the next section, this idea needs to be tied in more precisely to the very definition of “being an observer”.
Counterfactual reasoning about others' observations?
One should note that the same contradiction can be derived in a way that does not make any appeal to the principle of `knowledge transfer'. Indeed, R. Healey <cit.> proposes such an argument.
All the inferences (i)-(iii) are now done by W alone (after hearing W̄'s announcement).
But note that Healey's version still requires agents to reason about other agents' measurement results (though not necessarily to adopt their perspective, or reason about their knowledge). W reasons by cases, using what Healey calls counterfactual implications of the form:
“If F̄ observes `tail' at t_0, then the unique
outcome of my measurement of L at t_3 would be fail.”
On the other hand, W similarly proves that
“If F̄ observes `head' at t_0, then the unique
outcome of W̄'s measurement of L̄ would be fail, in contradiction to W̄'s announcement.”
Together, these lead again to the wrong conclusion that the protocol never halts.
The analysis in <cit.> locates the error in the conclusion of step (i), mainly in using the state assigned by F̄ to the signal S to derive conclusions about W's measurement of L.
According to Healey, “F̄ is justified in using this state assignment for the purpose of predicting the outcome of a measurement on S only where S's correlations with other systems (encoded in an entangled state of a supersystem) may be neglected. But the prior interaction between W̄ and L̄ undercuts F̄'s justification for using his state assignment for S to predict anything about W's subsequent measurement of L.”
Another suspect: intervention insensitivity?
Healey thinks the error in this version is the underlying assumption of
“Intervention Insensitivity”: The truth-value of an outcome-counterfactual is
insensitive to the occurrence of a physically isolated intervening event.
Once again, W̄'s measurement of L̄ is such a disturbing intervention on the system L̄, which is physically isolated from L. But this still affects the outcome of W's subsequent measurement of L, invalidating step (i) in W's reasoning.
Lack of persistence versus subjectivity: 'unstable' facts, or no `facts' at all?
P. A. Guérin, V. Baumann, F. Del Santo and C. Brukner make an important point in <cit.>: according to them, we shouldn't simply assume that the (information carried by the) observed outcomes will persist in time. Hence not even F can assume that her `memory of measurement outcomes' from the past and from the present will persist, i.e. still be relevant and usable, at all future times. We agree with this point! However, Brukner in <cit.> goes so far as to question the existence of objective reality: the “facts” (observed outcomes of experiments) are deemed to be “subjective” (i.e. observer-dependent).
In contrast, Rovelli <cit.> adopts a more realist interpretation, according to which such facts are relative (having only a relational meaning, as binary properties relating two systems), but they are nevertheless “real” and objective: consciousness, subjectivity, higher-order agency, beliefs etc play no essential role. In <cit.>, the authors distinguish between `stable' facts (which can be transferred from one reference system to another) and unstable ones. They add that such stability is always only approximative and relative, and that in the case of macroscopic reference systems it can be explained by decoherence.
We tend to agree with the last view, and in fact we think that there is no need to doubt the “facts”, but only the probabilistic predictions based on such (possibly obsolete) facts. A quantum state is just a statistical summary of the results of past interactions: there is nothing subjective about that. However, the predictions derived from such past “facts” may be incorrect, when they are applied to interactions that have erased from the universe all the information carried by these quantum states.
In other words, such past “facts”, no matter how 'objective', may not necessarily yield `knowledge' about future events, unless (and while) the underlying evidence (=the quantum information embodied by these facts) still persists somewhere in the universe.
Persistence as a criterion of being an `observer' Other authors <cit.> take such stability or persistence of measurement results to be “a necessary condition on a quantum system to behave as an observer”. This is embodied in Condition 1 in <cit.>, which states that: “For an entity O to behave as an observer, there must exist at least one degree of freedom in which O can encode the results of measurement, and which is untouched for the duration of the considered experiment”.
As stated, this sounds very similar to our proposed solution, and indeed there are close connections between this view and ours. However, the authors of <cit.> adopt a rather narrow understanding of the stability criterion: namely, they take it to simply forbid even the possibility of wiping the Friend's memory of the measurement results.[“In particular, to be used in Wigner's friend type scenarios, this condition applies to the friends solely if their memory remain stable even when Wigner makes his own measurements.” <cit.>.] This completely trivializes the problems involved in Wigner's Friend and FR paradoxes, and it amounts to imposing a rather brutal `solution': the scenarios involved in these thought experiments simply cannot happen!
In contrast, we will adopt a much more liberal interpretation of the persistence criterion: although the initial observer's memory may be wiped out, this doesn't matter as long as records of his observations have already been produced and disseminated throughout the universe; even if each individual such record is wiped out in its turn, this doesn't matter if (the process of copying and leaking is fast enough so that) other records have been produced and disseminated in the meantime; and even if none of this actually happens, it doesn't matter as long as it may have happened, as far as another external, background observer can tell.
§ OUR SOLUTION
As announced in the last section, our proposal for a solution to Wigner-friend-type paradoxes uses the same idea of persistence/stability of evidence/information (as a necessary requirement for sharing it, or for reasoning even counterfactually about such evidence). But in our view this notion is closely linked to the idea of information leakage, in the form of continuous production and dissemination of records or copies of this information. Moreover, our solution implements this idea in a `situated' perspective (with respect to a background observer), based on a particular implementation of Relational QM <cit.>. The question of informational persistence can thus be weakened to the mere epistemic possibility of informational leakage, as far as the observer's knowledge is concerned. It is also reinterpreted, from a metaphysical question about the (in)existence of objective “facts”, into a pragmatic-epistemic question about the admissibility of additional observers (from the perspective of the background observer).
We accept assumptions (S) and (Q): the `Cut' between the observer and system is indeed relative in a sense: every system is an “observer” from its own perspective, and describes its own interactions with other systems as collapsing measurements of these systems; while from an external perspective, the same interactions can be described as unitary evolutions.
In principle, every `observer' can keep its `Cut' minimal, by treating every other system as a quantum system (rather than as another collapsing observer).
As already mentioned, we adopt the `situated' perspective of Rovelli's Relational Quantum Mechanics, according to which there is no “view from nowhere”: quantum state attributions and quantum properties are always relative to a background “observer” system O. But in this paper we extend this relational view to the very notions of “observer” and “observation”; moreover, to make them more realistic (i.e. applicable to real-life situations), these notions have to be further relativized to a given history or set of histories, that encompass the relevant timeframe and the available actions.[In this sense, our approach borrows some of the features of the “consistent histories” interpretation.]
This means that assumption (C) will have to be weakened: sharing information between different `observers' (or reasoning counterfactually about other systems as `observers') is only justified when there exists a valid non-minimal `cut' that includes all of these subsystems on the “observer” side (together with the reasoner). For this, we need a criterion for deciding whether a given system can be regarded as an “admissible observer” from the perspective of a background observer O and with respect to a given history or set of histories.
The main idea behind our solution to the paradoxes is that the needed criterion has to do with the persistence of the information possessed by the subsystem in question. Roughly speaking, a subsystem A can count as an admissible observer for (another subsystem) O only if, as far as O “knows”, all the information ever possessed by A (at any moment of the relevant set of histories) may “forever” survive somewhere in the universe.[“Forever” needs not be understood in an absolute sense: it is relative to the relevant timeframe, history or set of histories.] This doesn't mean that A's own memory is necessarily perfect, or that it is immune from external erasure: it just means that, even if A's memory has been wiped out, O can still never be sure that A's information has not survived somewhere else (e.g. in the form of a `leaked' copy, or a copy of a copy etc).
In the next subsection we sketch a formal rendering of our solution.
§.§ Formalization
First, we are given a (finite or infinite) list of labels (letters) Σ={s_1, s_2, …}, denoting elementary systems, together with associated Hilbert spaces H^(s_1), H^(s_2), …. From these, we can generate systems, given by (typically, but not always, finite) subsets S, O, S', A, B, …⊂Σ, each with its associated Hilbert space H^(S):= ⊗_s∈ S H^(s). We leave unspecified which subsets S⊆Σ are admitted as “systems”, in order to accommodate various approaches: according to one approach all subsets are systems (and hence in particular the universe Σ is itself a system), while according to others some restrictions (e.g. finiteness) are to be imposed in order for a subset S⊆Σ to qualify as a system.
The state of any system S from the perspective of an `observing' system O is denoted by S_O. When we want to make explicit the time of this state attribution, we write S_O^t (for the state of S at time t from the perspective of O), where we assume given a discrete set of temporal moments t=0, 1, …, n. The systems S and O may or may not overlap, or we may even have O⊆ S (but see below some constraints on this overlap). In case they do overlap, the “observed system” is the non-overlapping part S-O of the system under description (S).
The principle (Q) of universality of Quantum Mechanics amounts in this setting to assuming the following postulates:
(Q_D) at any moment t, the state of any system S according to any `observer' O is given by a density operator ρ=S_O^t, representing a (pure or mixed) state over H^(S);
(Q_+) systems are closed under composition (=disjoint union): given disjoint systems S, S', we can form the composed system S+S' (formally defined as the union S∪ S' of all the labels of S and S'); and moreover, we have S_O = Tr_S' (S+S')_O, where Tr_S' is the partial trace operator wrt subsystem S';
(Q_U) for any moment t at which the observer O does not interact with the system S, the evolution of S at time t is given by S_O^t+1= U^t (S_O^t), where U^t is some unitary map;
(Q_P) for any moment t at which the observer O does interact with some part A⊆ S-O of the observed system S-O, the evolution of S+O at time t is given by (S+O)_O^t+1= (I_S-O⊗ B^ρ_A_O)((P^t_ρ_A⊗ I_S+O-A)((S+O)_O^t)), where
I_S+O-A and I_S-O are the identity operators on H^(S+O-A) and H^(S-O) respectively, P^t_ρ_A is some projector onto some pure state ρ_A of the system A, and B^ρ_A_O is the operation of augmenting O's memory O_O^t with (some bit-encoding B^ρ_A of) the outcome ρ_A of the measurement of A by O.
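To fix intuitions, here is a deliberately simplified, pure-state rendering of (Q_P) in Python/numpy (a toy sketch, not a full density-operator implementation; all names are illustrative): the observed subsystem is projected onto the recorded outcome, and O's memory is extended by a bit encoding that outcome.

import numpy as np

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])

psi_A = (ket0 + ket1) / np.sqrt(2)   # state of a one-qubit observed system A wrt O
mem_O = ket0                         # O's memory register, initially blank

P = np.outer(ket0, ket0)             # projector P^t_{rho_A} onto the outcome |0><0|
post = P @ psi_A
prob = np.vdot(post, post).real      # Born weight of this branch
post = post / np.sqrt(prob)          # collapsed state of A

mem_O = np.kron(mem_O, ket0)         # B^{rho_A}_O: append a bit recording "0"
state_AO = np.kron(post, mem_O)      # resulting (separable) state of A+O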
All systems (even including the “supersystem” Σ representing the whole universe, if such a supersystem is at all conceivable)[In this paper, we do not want to rely on the assumption that the whole universe is itself a system, as this assumption seems incompatible with Relational Quantum Mechanics, or at least with some interpretations of this approach. But we do not want to definitely reject this assumption either.] are always described from such a `situated' point of view, associated to a background observer O.
While it is commonly assumed that no system can observe itself, there is no reason to deny an observer the capacity to `know' or represent its own state. Indeed, if we identify this state with the observer's memory (e.g. the record of all O's measurement results), then having this capacity amounts to having “perfect recall” (a standard assumption in epistemic logic). This may be considered as a simplifying idealization applying only to `rational' agents, but it is a useful one. In quantum-mechanical terms, this amounts to assuming the following Introspection postulate (I):
(I) the state O_O (assigned to O itself from O's own perspective) is always a pure, fully separated state of H^(O) (at any given moment t), i.e. one of the form ⊗_i∈ O s_i, where each s_i is a pure state of H^(i) (for i∈ O). In fact, we typically assume that the s_i are elements of the standard basis for H^(i) (though this is of course just a matter of convention): intuitively, O's `memory' consists of a number of `bits' s_i, recording the results of past measurements.
In combination with the postulate (Q_+), this assumption poses certain constraints on the overlap between S and O, namely the state S_O must be a separated state of the form (S-O)_O ⊗ (S∩ O)_O, where (S∩ O)_O is the restriction ⊗_i∈ S∩ O s_i to S∩ O of the above-mentioned fully separable state O_O.
In principle, the “Cut” between observer and observed systems is arbitrary: any system can play the role of observer O from its own (O's) perspective. But the interesting question is: which other systems can be admitted as “observers” from O's perspective? The answer is far from obvious, and formulating it is the core contribution of this paper.
Propositional Knowledge We say that the observer O knows that a system S satisfies a property φ (at time t), and write S ⊨ K_Oφ (respectively S ⊨ K_O^t φ) iff, for every decomposition of the density operator S_O (S_O^t) representing the state of S wrt O (at time t) into a mixture ∑_i p_i |ρ_i⟩⟨ρ_i| of pure states |ρ_i⟩ (with all p_i≠ 0 and ∑_i p_i=1), the property φ holds on all these states |ρ_i⟩ of system S. The possibilistic dual of knowledge K̂_O is defined as usual by putting
K̂_O φ := ¬ K_O ¬φ,
where ¬φ is the (classical) negation of φ. It is easy to see that S ⊨ K̂_Oφ iff there exists some decomposition of the density operator S_O (S_O^t) into a mixture ∑_i p_i |ρ_i⟩⟨ρ_i| of pure states |ρ_i⟩ (with all p_i≠ 0), s.t. the property φ holds on some state |ρ_i⟩. In English, we can read K̂_O φ as saying that φ may hold as far as O knows (or that O considers φ possible).
While this notion of knowledge is qualitative, it can encode probabilistic knowledge as well (even if in this paper we choose not to formalize the probabilistic aspect). For instance, if S_O is a pure state |ρ⟩ (at some time t), and if we denote by |ρ⟩_S the property of system S being in state |ρ⟩, then K_O |ρ⟩_S holds (at time t); in other words, the observer `knows' that S is in state |ρ⟩, and so (by the postulates of Quantum Mechanics) O 'knows' the probability of any particular outcome of any measurement that might be performed on S.
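For concreteness, consider the special case where φ is membership in a subspace with projector P. Then the universal quantification over decompositions reduces to the condition that the support of the density operator lies inside that subspace, i.e. Tr(P·S_O)=1, while K̂_Oφ holds iff the support intersects the subspace nontrivially. A minimal numpy sketch of these two checks (an added illustration, assuming this subspace reading of φ; function names are ours):

import numpy as np

def supp_projector(rho, tol=1e-9):
    # projector onto the support of a density matrix
    vals, vecs = np.linalg.eigh(rho)
    cols = vecs[:, vals > tol]
    return cols @ cols.conj().T

def knows(rho, P, tol=1e-9):
    # K_O: every pure state in every decomposition of rho lies in ran(P),
    # which holds iff supp(rho) is contained in ran(P), iff Tr(P rho) = 1
    return abs(np.trace(P @ rho).real - 1.0) < tol

def considers_possible(rho, P, tol=1e-9):
    # \hat{K}_O: some decomposition of rho contains a state inside ran(P),
    # which holds iff supp(rho) intersects ran(P) nontrivially
    S = supp_projector(rho, tol)
    return np.linalg.eigvalsh(S @ P @ S).max() > 1 - tol

rho = np.eye(2) / 2        # maximally mixed qubit
P0 = np.diag([1., 0.])     # phi = "being in state |0>"
print(knows(rho, P0))              # False: not every decomposition consists of |0>
print(considers_possible(rho, P0)) # True: |0> occurs in one decomposition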
Proviso: no nested knowledge yet! The above definition does not entitle us yet to assign a meaning to formulas of the form K_O K_A φ, where A is any other subsystem of S (different from the observer O). For such a nested formula to make sense, we will first need to determine whether or not the system A may count as an `observer' from O's perspective: formulas of the form K_Aφ will belong to the observer O's inner language (and thus be allowed in the scope of an epistemic K_O-operator) only if A is admissible as an observer according to O.
Records Given systems A, B⊆ S of the same dimension, we say that B is a record of A in state S_O^t (of system S at time t according to O) if there exists some unitary map U_A→ B: H^(A)→ H^(B) s.t. for every element e_A of the (standard) basis of H^(A)
we have Tr_S-B((P_e_A⊗ I_S-A)(S_O))= U_A→ B(e_A) (where P_e_A is the projector onto e_A, I_S-A is the identity operator on H^(S-A), and Tr_S-B is the partial trace wrt the complement of B). Intuitively, whenever system A stores `information' (results of past measurements) in the form of the classical bits forming e_A, the same information is encoded in system B (via the reversible encoding U_A→ B).[Note that, although the above definition refers to the standard basis for H^(A), the concept itself is independent of the choice of basis, because of our existential quantification over unitaries. A change of basis will only replace the unitary U_A→ B with some other unitary U'_A→ B.] Here, systems A and B may be disjoint or may overlap, and in the second case they may even be identical: indeed, in any state S_O, every subsystem A⊆ S can be thought of as a `record' of itself.
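As a toy check of this definition (an added illustration, reading the defining equation with the projected state renormalized): in the two-qubit state (|00⟩+|11⟩)/√2, subsystem B is a record of A with U_A→B the identity map:

import numpy as np

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_S = np.outer(psi, psi.conj())       # S = A + B, S_O = (|00> + |11>)/sqrt(2)

def reduce_to_B(rho):
    # partial trace over qubit A (the first tensor factor)
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

for e_A in (ket0, ket1):
    P = np.kron(np.outer(e_A, e_A.conj()), np.eye(2))   # P_{e_A} tensor I_{S-A}
    conditioned = P @ rho_S @ P
    reduced = reduce_to_B(conditioned)
    reduced = reduced / reduced.trace()                 # renormalize
    # with U_{A->B} = identity, the record condition demands |e_A><e_A| on B
    assert np.allclose(reduced, np.outer(e_A, e_A.conj()))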
Histories and protocols A history (of a system S wrt an observer O) is a sequence h=(S_O^0, T^1, S_O^1, T^2, …, S_O^n-1, T^n, S_O^n), where in accordance with (Q_U) and (Q_P) we require that all T^k are linear maps (called dynamical maps), which can be either unitary maps or projectors, and that S_O^k =T^k(S_O^k-1) (for all k≥ 1). A protocol (for S wrt an observer O) is a family π of histories, typically (but not always) assumed to be of the same length n and to have the same initial state S_O^0.
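A minimal data-structure sketch of this definition (an added illustration, with dynamical maps represented as arbitrary callables on density matrices):

import numpy as np
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class History:
    # alternating data of a history: states S_O^0,...,S_O^n and maps T^1,...,T^n
    states: List[np.ndarray]
    maps: List[Callable[[np.ndarray], np.ndarray]]

    def is_consistent(self) -> bool:
        # the defining requirement S_O^k = T^k(S_O^{k-1}) for all k >= 1
        return all(
            np.allclose(self.states[k], self.maps[k - 1](self.states[k - 1]))
            for k in range(1, len(self.states))
        )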
Persistent records
Given a history h (or more generally, a protocol π) of a system S wrt O, we say that a subsystem A⊆ S has a persistent record at time t if there exists a subsystem A'⊆ S (of the same dimension as A) s.t. (1) A' is a record of A in state S_O^t-1, and (2) if T_S^t is the unique dynamical map associated to h at time t (or any of the dynamical maps associated to any history h∈π at time t), E is any basis for H^(A') and F is any basis for H^(S-A'), then there exist some unitary map U_A'^t and some dynamical map T_S-A'^t, s.t. T_S^t(e_A'⊗ f_S-A')= U_A'^t (e_A') ⊗ T_S-A'^t (f_S-A') for all basis elements e_A'∈ E, f_S-A'∈ F.
Informational persistence throughout a history/protocol A subsystem A⊆ S is (informationally) persistent throughout a history/protocol (for the system S wrt O) if we have that: at every time t≥ 1, A has some persistent record A'_t; each such record A'_t has itself a persistent record A”_t at time t+1; and so on (for all times t+k≤ n, where n is the length of the given history or protocol).
Admissible observers A subsystem A⊆ S is an admissible observer wrt (a background observer system) O and a given history/protocol (for S wrt O) iff, for all that O knows (at any given time), A may be informationally persistent throughout the history/protocol; in other words, O never gets to know for a fact that A is not (will not be, or has not been) persistent throughout the history/protocol.
In other words, a system A can be admitted as an `observer' (from O's perspective) only if it can never be known for sure (by O) that the information once carried by A has been (or will be) completely erased from the universe (at least not for the duration of the given history/protocol). Note that the notion of “admissible observer” is always relative to a background observer system O and to a given history/protocol. Also note that the informational persistence of A does not require that past states of A are preserved within A. It is perfectly compatible with the possibility that past information is erased at some point from A's `memory' (or even that its whole memory is `wiped out', by being reset to some default state), as long as that past information is still saved somewhere in the universe (in possibly encoded form via some unitary transformation).
Furthermore, one should note that actual informational persistence is not required for being an admissible observer. All that is needed is that the background observer O cannot exclude the possibility of informational persistence: it cannot ensure (with probability 1) that the information carried by the admissible observer is completely erased from the universe (at least not for the duration of the relevant history/protocol). This is important: actual persistence (of all the information ever acquired by a subsystem A) is a strong condition, conferring some kind of “immunity guarantee” or indestructibility to A's information, which seems to demand that A has a high degree of power or control over information, or at least that A has an inexhaustible ability to produce records and spread them far away from any relevant interference from O. In contrast, the mere epistemic possibility of A's informational persistence (according to O's state of knowledge) is a much weaker condition, which has more to do with the limits of O's power of controlling information: it only means that O cannot prevent A's information from “leaking” beyond O's control.
Note that by the above definition, O is an admissible observer wrt O itself and any given history/protocol for S+O (wrt O). We abbreviate this by saying that any system O is always an admissible observer wrt itself. In this sense, the `cut' between the observer and observed is indeed arbitrary!
Proposition. Given a protocol for a system S wrt an observer O⊆ S, let A⊆ S be an admissible observer. For every description of the state evolution of the system S wrt O (as defined above), there exists a `compatible' description of the evolution of S wrt A (in which A takes the place of the background observer). Here, “compatibility” of state descriptions is understood in the sense of Leifer and Spekkens <cit.>: it means that the mixed states (attributed by the two observers at the same times, to give statistical predictions regarding the outcomes of all measurements involved in the protocol according to the two descriptions above) have a common refinement: so they can be regarded as partial descriptions of the same evolution.
The proof of this result is essentially based on an elaboration of the argument given in our analysis of the `leaking' scenario 2 in Section <ref>. Spelling out the proof in full generality requires further formalities, that go beyond the scope and context of this paper, and are best left for a future technical publication in a Formal Logic journal.
Nested knowledge Intuitively, O can reason about A as an observer only if A is an admissible observer wrt O: only in this case, a nested formula of the form K_O K_Aφ is meaningful (provided that φ does not contain any inner epistemic operators). Further nesting requires further conditions: for a formula of the form K_O K_A K_O φ to be meaningful, each of the two systems A and O has to be an admissible observer for the other system. We now generalize this notion to arbitrary groups.
Communities of observers A group 𝒢 of subsystems of a system S (which may or may not include O itself) is a community of admissible observers wrt O (and a given history/protocol) if every system A∈𝒢 is an admissible observer both wrt any other system B∈𝒢 and wrt O (and the given history/protocol).
Intuitively, a group of `agents' forms a community of observers if there is a kind of “fair balance of powers” between them: each agent's ability to knowingly control and erase information from the universe (or to at least know for a fact that it has been fully erased) cannot fully overwhelm other agents' ability to produce and disseminate records of their own information throughout the universe.
The above Proposition has the following immediate consequence:
Corollary. Given a protocol for a system S wrt an observer O⊆ S, let 𝒢 be a community of observers s.t. O∈𝒢. Then,
for every description of the state evolution of the system S wrt O (as defined above), there exist compatible descriptions of the evolution of S wrt each of the admissible observers A∈𝒢, and moreover all these descriptions are mutually compatible.
Indefinite nesting: the conditions for full epistemic logic, knowledge transfer and quantum computation protocols Intuitively, whenever we have a community of observers 𝒢, then each of them can reason about the others' observations as well as about the others' reasoning about itself and all others, etc. In other words, only for such a community of observers we can meaningfully use all the power of epistemic logic, with indefinitely long nested levels of knowledge, e.g. K_O K_A K_B K_A K_C φ, etc. All the standard axioms of Epistemic Logic hold in this case, and so in particular the principle of Knowledge Transfer holds (since it is a theorem of Epistemic Logic).
As a consequence, the agents involved in any standard protocol studied in the fields of Quantum Information and Quantum Computation need to always be assumed to form a community of observers.
One should stress that, from the perspective of an external observer O, checking whether or not another system A satisfies the conditions for being an “admissible observer” should be done without first assuming that A collapses the state of the systems it interacts with! In other words, O is entitled to reason “from A's perspective” only after the conditions of being an “admissible observer” are verified. One cannot `prove' these conditions by first assuming A's collapsing perspective, and then using this assumption to show that A satisfies the conditions for being an observer: that kind of circular reasoning would be a case of petitio principii! As we'll see, this is exactly the kind of reasoning that is involved in the usual descriptions of the FR Paradox.
§.§ Application to Wigner's Friend-type scenarios
According to the above analysis, whether or not a system is an admissible observer depends on some background observer and of the background history or protocol. One of the problems with Wigner's Friend-type of scenarios is that these background references are usually not fully specified.
First, consider the original Wigner's Friend paradox (scenario 1), or also the version involving a third observer O (scenario 2). Who should we take as our background reference `observer'? In the first case, it seems natural to take it to be W, while in the second case it could be either W or O. This is OK only as long as the scenario does not involve any wiping out of the information possessed by systems W or O. Let us grant that. Next, to decide whether or not F is himself an “admissible observer” from the perspective of W or O, we need to know what happens next (what is the subsequent history, or the set of possible histories, i.e. the protocol), or at least we need to know what any of these background observers know about what happens next.
Suppose that, in scenario 1, W knows that no information will be leaking out of the lab L and that moreover it is possible for him (or some other external agent) to measure the lab L as a whole in a basis that contains |fail⟩_L. In this case, F simply does not meet the conditions for being an admissible observer wrt W: no contradiction ensues, since from W's perspective F is just a quantum system, whose interactions are governed by unitary evolutions.
Conversely, if F is an admissible observer wrt W, then this implies that either the background protocol does not allow for such a global measurement of L, or that W cannot be sure that information doesn't leak out of the lab before such a global measurement is performed. If moreover we assume that W knows that F is an admissible observer (wrt W), then this means that W knows that he cannot control/stop the leaking of information from the lab or that a global measurement of the lab is not currently possible.
This brings us to scenario 2, in which F's measurement results are leaked to the third observer O, before any global measurement of the lab L could take place. Even if we take W to be our background observer, and even if W doesn't know that F's information has leaked, he still must consider the possibility that this may have happened: assuming the opposite in this context would be an example of relying on a false, unwarranted assumption, and thus an example of faulty reasoning! If the leakage/communication does happen, then W cannot know that it didn't happen; and even if the leakage didn't happen, as long as W cannot be sure of this, he has to take into consideration this very possibility. And this very possibility establishes one of the necessary conditions for F to be an admissible observer wrt W in scenario 2.
But this is just a necessary condition: whether or not F really is an admissible observer depends on everything that may happen next, in all the possible histories permitted by the protocol under consideration. Suppose for instance that W knows that it is possible to perform next a global measurement of the super-system L+O before any information leaks out of this super-system L+O. In that case, F is not an admissible observer wrt W; but in fact this scenario already contradicts the assumption that the external system O was itself an admissible observer (wrt W)!
In effect, in order to determine whether F (or O) is an admissible observer wrt W, we need to be given a full specification of the relevant protocol (i.e. of what possible histories are consistent with it), or at least of W's knowledge about the relevant protocol.
What about the FR Paradox? We can safely assume that the external observers W and W̄ form a community of mutually admissible observers, since the scenario does not mention any measurements of super-systems that encapsulate them. But what about F and F̄?
At first sight, it may seem that F̄'s measurement results are `leaked' out of L̄ via the signal S; moreover, this seems to happen “just in time”, i.e. before W̄ performs his global measurement of L̄.
But this is not quite true. The signal S does not satisfy our conditions for being a “persistent record” of F̄'s information; indeed, it is not a record at all! The usual story assumes that W or W̄ can reason from F's and F̄'s perspective
as collapsing observers, argues that they can conclude that F measured `up', and thus that F̄ must have measured `tails', and based on that it argues that the information carried by F̄'s result (`tails') has been indeed leaked to F, and made available to the external observers. But this is the kind of petitio principii mentioned above! W and W̄ are not entitled to reason by cases about F's and F̄'s measurement results before we establish that F and F̄ are admissible observers! And to do that, we have to first reason from the perspective of W's and W̄'s “joint minimal cut”, according to which they themselves are the only collapsing observers, while F and F̄ are just quantum systems.
If we do that, we can easily see that the signal S does not formally constitute a record of system F̄'s information; and indeed, even at an informal level, one can see that the entanglement between L̄ and S does not carry enough information to recover the correlation between R and F̄. Thus, the actions of sending the signal S to the other lab L and of F “measuring” the signal at time t_1 do not constitute a true “leak” in our sense, since the entanglement they establish between the two labs does not endow L (and F) with a true record of the correlation between R and F̄: there simply is not enough information in L at this stage to fully recover that correlation.
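This informal point can be verified directly (a toy numpy check, added here for illustration): conditioning the pre-measurement state on F̄'s two orthogonal pointer states leaves L={S,F} in two non-orthogonal states, so no unitary U_A→B of the kind required by our definition of a record can exist:

import numpy as np

h, t = np.array([1., 0.]), np.array([0., 1.])
dn, up = np.array([1., 0.]), np.array([0., 1.])

def kron(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# pre-measurement state of R, Fbar, S, F from the protocol above
psi = (kron(h, h, dn, dn) + kron(t, t, dn, dn) + kron(t, t, up, up)) / np.sqrt(3)
psi4 = psi.reshape(2, 2, 2, 2)          # axes: R, Fbar, S, F

# states of L = {S, F} conditional on Fbar's orthogonal records h and t
cond_h = psi4[0, 0].reshape(4)          # -> |dn,dn>
cond_t = psi4[1, 1].reshape(4)          # -> (|dn,dn> + |up,up>)/sqrt(2)
cond_h = cond_h / np.linalg.norm(cond_h)
cond_t = cond_t / np.linalg.norm(cond_t)

# a record would require a unitary sending the orthogonal pair (h, t) to
# (cond_h, cond_t); since these overlap, no such unitary exists
print(abs(np.vdot(cond_h, cond_t)))     # 0.707..., not 0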
Since at the next stage (at t_2) a global measurement of L̄ is performed by W̄ (and moreover this measurement is a destructive one, since it is done in an orthogonal basis, thus wiping out F̄'s memory), and since this happens before F̄'s information has been fully copied/recorded in a safe place, we can only conclude that F̄ fails to satisfy the conditions for being an admissible observer wrt W or W̄.
The same applies even more straightforwardly to F: F's information is not leaked anywhere outside of L before W performs a global destructive measurement of L. So neither F nor F̄ are admissible observers for W and W̄.
Since the four systems do not form a community of admissible observers, we cannot apply the full power of Epistemic Logic to all four “agents”. We cannot use nested modalities of the form K_W K_F φ etc, to allow the external observers W and W̄ to reason about the internal systems F and F̄ as observers. The only community of observers in this scenario that contains more than one observer is {W, W̄}. So we can only allow nested operators of the form K_W K_W̄ K_W …: W and W̄ can indeed reason about each other as observers, but they should only reason about F and F̄ as non-observing quantum systems.
In contrast, suppose that we change the scenario of the FR Paradox, by ensuring that the quantum systems F and F̄ immediately `leak' their information (by sending a record into some remote environment, over which W and W̄ have no full control), before any global measurements are performed on L or L̄. Then it is easy to see that the paradox disappears!
§ CONCLUSIONS
Our approach in this paper is based on a clarification of the type of system that can be viewed as a true agent and can act as an `observer'.
At a first approximation, only leaking systems (whose information is never completely erased from the universe) are “true agents” (to whom we can attribute “knowledge”, and reason about it using epistemic logic). This doesn't mean that their memory is perfect, or that it is immune from external erasure: it just means that their information survives somewhere “forever” (e.g. by being copied in other locations, before being erased from any given location).
To put it conversely: a system can be treated as an observer only for a period in which its information is preserved somewhere (throughout the period).
But this “absolute” notion of observer is just a first approximation, and as such it is too strong. A deeper, more sophisticated rendering of our solution makes the notion of “observer” itself relative to some other, background observer O. Moreover, the strong, absolute requirement of informational persistence is replaced with a weaker, epistemic condition, namely that the background observer O cannot exclude the possibility of informational persistence: she simply is not in a position to know for certain that the relevant information is (has been or will be) fully erased from the universe.
Finally, we have the third, fully correct, approximation of our solution, which makes the notion of “admissible observer” also relative to a given history or set of histories (a `protocol'). Thus, “forever” is not understood here in an absolute sense: it is relative to a given period of time, a history, or a given protocol. A system A is an admissible observer for O as long as the background observer O cannot exclude the possibility that A's information may survive throughout the protocol.
When we consider a family of subsystems, and want to be able to allow them to use nested epistemic operators and the full power of Epistemic Logic to reason about each other as observers,
we need to first ensure that each of them is an “admissible observer” with respect to all the others. In our technical terms, they need to form a “community of admissible observers”. Intuitively, this means that none of them is able to `control' (i.e. fully erase from the universe, or at least know for sure that it's been fully erased) the information of any of the
others.
This context-dependent and observer-dependent approach gives us a pragmatic-epistemic solution to Wigner's Friend-type paradoxes: there are no absolute, universally acceptable observers, but only (communities of) admissible observers relative to some background observer and background timeframe or protocol. Every system is an admissible observer to itself, but not every system is an admissible observer to every other system. And it is quite possible that no subsystem is an admissible observer with respect to the whole (past and future) history of the universe: if for instance all current local correlations between subsystems will eventually be completely wiped out in the far future, then no current “agent” will be an admissible observer with respect to any long enough history. But, in the context of a restricted history or set of histories available and relevant to a given background observer, there will typically exist other admissible observers.
Concluding from this that Epistemic Logic is simply inconsistent with Quantum Mechanics (as some authors seem to suggest) sounds rather far-fetched and unhelpful: after all, many of the multi-agent protocols in Quantum Information and Quantum Computation make implicit use of epistemic reasoning about other observers.
Our conclusion is rather that Epistemic Logic is applicable only to groups of physical systems that satisfy certain conditions (namely, they form a community of admissible observers), and that moreover these conditions are typically satisfied only in a relative sense (from the perspective of a background observer, and for the duration of a relevant history or protocol).
In other words, sharing observations between systems, or reasoning counterfactually about other systems' observations, is not always possible or justified, but
only when there is a meaningful possibility that these subsystems are “leaking” (accidentally or systematically): for all we know, their information may be continuously copied and disseminated somewhere else (by entanglements with a large part of their environment), so that we cannot completely erase this information from the universe.
The more “leaky” or open a system is, the harder it is for us to encapsulate it and control its information, the more likely it is to be an admissible observer for us.
This is our explanation for the fact that standard epistemic
reasoning can typically be applied without any paradox in common macroscopic situations (such as
multi-agent quantum protocols): all macroscopic observers are typically open,
“leaking” systems.
To conclude, the view adopted in this paper is that, on the one hand, every system can be treated as an `observer' from its own perspective; but, on the other hand, not all external systems can be treated as “observers” from the perspective of a given observer. Instead of blankly stating that Quantum Mechanics does not allow observers to reason about other systems as observers, a more fine-grained solution is indeed possible. Being an observer is itself a relational property, one that is relative to another observer (or to a community of
other observers) and a background protocol (that restricts the timeframe and the possible histories). This doesn't make observers “less real” and less interesting, and it doesn't make Epistemic Logic less useful for reasoning about quantum scenarios, on the contrary.[In this sense, see some of our previous work <cit.>, on using epistemic logic to reason both about the informational properties of quantum systems (and in particular about epistemic characterizations of entanglement), and
about the epistemic states of classical observers in quantum environments.]
9
BaSm08b Baltag, A.; Smets, S.: A dynamic-logical perspective on quantum behavior, Studia Logica, 89:185-209 (2008)
BaSm15 Baltag, A.; Smets, S.: Logics of Informational Interactions, Journal of Philosophical Logic, 44(6):595-607 (2015)
BaSm17 Baltag, A.; Smets, S.: Modeling correlated information change: from conditional beliefs to quantum conditionals, Soft Computing, 21(6), 1523-1535 (2017)
BS2 Baltag, A.; Smets, S.: Correlated Knowledge, An Epistemic-Logic View on Quantum Entanglement, International Journal of Theoretical Physics, 49:3005-3021 (2010)
BS1 Baltag, A.; Smets, S.: Correlated Information: A Logic for Multi- Partite Quantum Systems. In Coecke, B. and Panangaden, P. (eds.) Electronic Notes in Theoretical Computer Science ENTCS. Proceedings of the 6th Workshop on Quantum Physics and Logic, Oxford. vol. 270, pp.3-14, (2011)
Barwise Barwise, J.; Perry, J.: Situations and Attitudes, Bradford Books, MIT Press (1983)
Boge Boge, F.: Quantum information vs. epistemic logic: An analysis of the Frauchiger-Renner theorem. Foundations of Physics, 49:1143-1165, arXiv:1909.11889v1 (2019)
Brukner1 Brukner, C.: On the quantum measurement problem, in R. Bertlmann and A. Zeilinger (eds): Quantum [Un]speakables II, Half a Century of Bell's Theorem. Springer (2017), arXiv:1507.05255v1 (2015)
Brukner Brukner, C.: A no-go theorem for observer-independent facts, Entropy 20, 350, (2018)
Brukner2 Guérin, P.A.; Baumann, V.; Del Santo, F.; Brukner, C.: A no-go theorem for the persistent reality of Wigner's friend's perception. Commun Phys 4, 93 (2021).
DeBrota DeBrota, J.B; Fuchs, C.A; Schack, R.: Respecting One's Fellow: QBism's Analysis of Wigner's Friend, Foundations of Physics 50:1859-1874 (2020)
CR Di Biagio, A.; Rovelli, C.: Stable Facts, Relative Facts, Foundations of Physics 51:30 (2021)
Corti Corti, A.; Fano, V.; Tarozzi, G.: A Logico-Epistemic Investigation of Frauchiger and Renner's Paradox, International Journal of Theoretical Physics 62:54 (2023)
De85 Deutsch, D.: Quantum theory as a universal physical theory, International Journal of Theoretical Physics, 24, I (1985)
Dretske Dretske, F.: Knowledge and the Flow of Information, MIT Press (1981)
Jordan Elouard, C.; Lewalle, P.; Manikandan, S.K.; Rogers, S.; Frank, A. and Jordan, A.N.: Quantum erasing the memory of Wigner's friend, Quantum 5:498, arXiv:2009.09905v4 (2021)
FrRe18 Frauchiger, D.; Renner, R.: Quantum theory cannot consistently describe the use of itself, Nature Communications, 9(1):3711. Preprint, arXiv:1604.07422 (2018)
Halpern Fagin, R.; Halpern, J.; Moses, Y. and Vardi, M.: Reasoning about Knowledge, MIT Press, (1995)
Healey Healey, R.: Quantum Theory and the Limits of Objectivity Richard, Foundations of Physics, 48:1568-1589, (2018)
LaudisaRovelli Laudisa, F. and Rovelli, C.: Relational Quantum Mechanics, E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, online, (2021)
Laloe Laloë, F.: Can quantum mechanics be considered consistent? a discussion of Frauchiger and Renner's argument. ArXiv:1802.06396v3, (2018)
LF Leifer, M.S.; Spekkens, R.W.: A Bayesian approach to compatibility, improvement and pooling of quantum states, arXiv:1110.1085v1 (2011)
NuRe21 Nurgalieva, N.; Renner, R.: Testing Quantum Theory with Thought Experiments, preprint, arXiv:2106.05314v1 (2021)
NdR Nurgalieva, N.; del Rio, L.: Inadequacy of Modal Logic in Quantum Settings, EPTCS 287:267-297 (2019)
Rovelli Rovelli, C.: Relational Quantum Mechanics, International Journal of Theoretical Physics, 35(8):1637-1678, (1996)
Shimony Shimony, A.: Role of the Observer in Quantum Theory, American Journal of Physics, 31:755, (1963)
vDiHoKo07 van Ditmarsch, H.P; van der Hoek, W.; Kooi, B.: Dynamic Epistemic Logic, Springer (2007)
Wi61 Wigner, E.P.: Remarks on the mind-body question, in I.J. Good, The Scientist Speculates, London Heinemann (1961)
WvN Waaijer, M.; van Neerven, J.: Relational Analysis of the Frauchiger-Renner Paradox and Interaction-Free Detection of Records from the Past, Foundations of Physics (2021)
Crull Crull, E. and Bacciagaluppi, G.: Translation of W. Heisenberg: “Ist eine deterministische Ergänzung der Quantenmechanik möglich?”, Pittsburgh Phil-Sci Archive, (2011)
VN von Neumann, J.: Mathematical Foundations of Quantum Mechanics , N. Wheeler (ed.) of the English translation, New Edition, 2018, Princeton University Press. First published in German in (1932)
|
http://arxiv.org/abs/2307.01857v1
|
20230704180059
|
Resonance $X(7300)$: excited $2S$ tetraquark or hadronic molecule $χ_{c1}χ_{c1}$?
|
[
"S. S. Agaev",
"K. Azizi",
"B. Barsbay",
"H. Sundu"
] |
hep-ph
|
[
"hep-ph",
"hep-ex",
"hep-lat"
] |
Institute for Physical Problems, Baku State University, Az–1148 Baku,
Azerbaijan
Department of Physics, University of Tehran, North Karegar Avenue, Tehran
14395-547, Iran
Department of Physics, Doǧuş University, Dudullu-Ümraniye, 34775
Istanbul, Türkiye
Division of Optometry, School of Medical Services and Techniques, Doǧuş University, 34775 Istanbul, Türkiye
Department of Physics Engineering, Istanbul Medeniyet University, 34700
Istanbul, Türkiye
We explore the first radial excitation X_4c^∗ of the fully charmed diquark-antidiquark state X_4c=ccc̄c̄ built of axial-vector components, and the hadronic molecule ℳ=χ_c1χ_c1. The masses and current couplings of these scalar states are calculated in the context of the QCD two-point sum rule approach.
The full widths of X_4c^∗ and ℳ are evaluated by taking into account their kinematically allowed decay channels. We find the partial widths of these processes using the strong couplings g_i^∗ and G_i^(∗) at the X_4c^∗(ℳ)-conventional meson vertices, computed by means of the QCD three-point sum rule method. The predictions obtained for the parameters m=(7235 ± 75) MeV, Γ=(144 ± 18) MeV of X_4c^∗ and m=(7180 ± 120) MeV, Γ=(169 ± 21) MeV of ℳ are compared with the experimental data of the CMS and ATLAS Collaborations. In accordance with this analysis, the radially excited tetraquark X_4c^∗ is a promising candidate for the resonance X(7300), though we do not exclude the molecule or mixed tetraquark-molecule model for this state.
Resonance X(7300): excited 2S tetraquark or hadronic molecule χ_c1χ_c1?
H. Sundu
August 1, 2023
=======================================================================
§ INTRODUCTION
The multiquark hadrons composed exclusively of heavy quarks have been on the agenda of researchers since the first years of the parton model and QCD. During the past decades much has been done to investigate the properties of such particles, calculate their parameters in the context of different models, and study the production and decay mechanisms of these hadrons. Reports of the LHCb, ATLAS and CMS Collaborations on the scalar X resonances in the 6.2-7.3 GeV mass range became one of the important experimental achievements in the physics of fully charmed four-quark mesons <cit.>. The structures X(6200), X(6600), X(6900) and X(7300), observed by these experiments in the di-J/ψ and J/ψψ^' mass distributions, provide useful information and allow one to compare numerous theoretical models and predictions with the masses and widths of these states.
These discoveries generated new theoretical activities to explain the observed states and reveal their internal structures <cit.>. The fully heavy X resonances were considered as scalar four-quark mesons with diquark-antidiquark or hadronic molecule organizations <cit.>. For example, the resonance X(6900) may be a diquark-antidiquark state with pseudoscalar ingredients, or the hadronic molecule χ_c0χ_c0 <cit.>. The structure X(6200) was interpreted as a ground-level tetraquark with the spin-parities J^PC=0^++ or 1^+-, whereas X(6600) as its first radial excitation <cit.>. The four structures X(6200)-X(7300) were assigned to be different excited tetraquark states <cit.>.
Alternative scenarios explain the appearance of the X resonances by coupled-channel effects. Thus, using this approach, the authors of Ref. <cit.> predicted the existence of the near-threshold state X(6200) with J^PC=0^++ or 2^++ in the di-J/ψ system. Coupled-channel effects may also generate a pole structure identified in Ref. <cit.> with X(6900), and lead to the emergence of a bound state X(6200), and of the resonances X(6680) and X(7200), which can be classified as broad and narrow structures, respectively.
Production mechanisms of fully heavy tetraquarks in different processes became topics of interesting investigations <cit.>. Thus, the inclusive production of fully charmed S-wave four-quark mesons at LHC energies was studied in the nonrelativistic QCD factorization framework in Ref. <cit.>. Production of fully heavy tetraquark states in pp and pA collisions through the double parton scattering mechanism was considered in Ref. <cit.>, in which it was shown that a search for such states is feasible in the future runs of the LHC and at the Future Circular Collider.
The fully heavy four-quark mesons were also studied in our articles <cit.>. The scalar tetraquarks X_4c=ccc̄c̄ and X_4b=bbb̄b̄ built of axial-vector diquarks were explored in Ref. <cit.>. It was demonstrated that X_4c, with the mass (6570± 55) MeV and full width (110± 21) MeV, is a nice candidate for the resonance X(6600). The fully beauty state X_4b has the mass (18540± 50) MeV, which is smaller than the η_bη_b threshold; therefore, it cannot be seen in the η_bη_b or Υ(1S)Υ(1S) mass distributions. The X_4b can decay to open-beauty mesons through bb̄ annihilation to gluon(s), which triggers X_4b→ B^+B^- and other decays <cit.>. Other ways of transforming to conventional mesons are the leptonic and nonleptonic decays of X_4b.
The scalar tetraquarks T_4c and T_4b composed of pseudoscalar diquarks were explored in Ref. <cit.>, in which we computed their masses and widths. The parameters m=(6928± 50) MeV and Γ_4c=(128± 22) MeV of T_4c are in excellent agreement with the relevant CMS data; therefore, we interpreted it as the resonance X(6900). The exotic meson T_4b decays to η_bη_b pairs and can be detected in the mass distribution of these mesons. It is interesting that the hadronic molecule χ_c0χ_c0 (a brief form of χ_c0(1P)χ_c0(1P)) has similar parameters and is another candidate for X(6900) <cit.>. Hence, X(6900) may be considered as a linear superposition of the molecule χ_c0χ_c0 and the diquark-antidiquark state T_4c.
The lowest lying structure among the X states is the resonance X(6200), which may be interpreted as the molecule η_cη_c. In fact, the mass (6264± 50) MeV and full width (320± 72) MeV of the molecule η_cη_c agree with the LHCb-ATLAS-CMS data <cit.>.
The last position in the list of new X structures is held by the resonance X(7300). This state was detected in both the di-J/ψ and J/ψψ^' mass distributions. In Ref. <cit.>, we used this fact to make assumptions about its nature, and argued that X(7300) may be the 2S radial excitation of the exotic meson X(6600). Another option for X(7300) is the hadronic molecule model χ_c1(1P)χ_c1(1P) (in what follows, χ_c1χ_c1), which may have similar parameters.
In the present article, we address problems connected with the resonance X(7300) in an attempt to describe its parameters in the four-quark model. To this end, we calculate the mass and width of the first radial excitation X_4c^∗ of the diquark-antidiquark state X_4c. The full width of X_4c^∗ is evaluated using its kinematically allowed decays to J/ψ J/ψ, J/ψψ^', η_cη_c, η_cη_c(2S), η_cχ_c1, χ_c0χ_c0, and χ_c1χ_c1 mesons. We also perform a similar analysis for the molecule ℳ=χ_c1χ_c1. We will compare the predictions for the parameters of X_4c^∗ and ℳ with the experimental data, and with each other, to make a decision about the nature of X(7300).
This article is organized in the following form: In Sec. <ref>, we explore the excited tetraquark X_4c^∗ and compute its mass and full width. The same analysis for the molecule ℳ is carried out in Sec. <ref>. In the last Section <ref>, we present our brief conclusions. The Appendix contains expressions for some of the correlation functions used in the present analysis.
§ RADIALLY EXCITED STATE X_4C^∗
In this section, we explore the first radial excitation X_4c
^∗ of the scalar tetraquark X_4c built of axial-vector
diquarks. The mass and current coupling of this state are computed by means
of the QCD two-point sum rule (SR) approach <cit.>. To evaluate partial widths of the
kinematically allowed decay channels of X_4c^∗, we are
going to employ the three-point sum rule method, which is necessary to find
strong couplings at corresponding three-particle vertices.
§.§ Mass m and coupling f of X_4c^∗
The sum rules for the mass m and current coupling f of the tetraquark X_4c^∗ can be extracted from analysis of the correlation
function
Π (p)=i∫ d^4xe^ipx⟨ 0|𝒯{J(x)J^†(0)}|0⟩ ,
where 𝒯 denotes the time-ordering operator, and J(x)
is the interpolating current for the states X_4c and X_
4c^∗.
We model X_4c and X_4c^∗ as tetraquarks built of the axial-vector diquark c^TCγ_μ c and the axial-vector antidiquark c̄γ_μ C c̄^T. Then, the interpolating current is determined by the expression
interpolating current is determined by the expression
J(x)=c_a^T(x)Cγ_μ c_b(x) c̄_a(x)γ^μ C c̄_b^T(x),
with a and b being color indices. In Eq. (<ref>), c(x) is the c-quark field, and C is the charge conjugation matrix. The current J(x)
describes the diquark-antidiquark states with spin-parities J^PC
=0^++.
The ground-level particle with this quark content and quantum numbers is the
tetraquark X_4c which was investigated in our paper <cit.>. We computed its mass m_0 and coupling f_0 by
employing the two-point SR approach. We took into account explicitly only the ground-state term and included all other contributions in the class of "higher resonances and continuum states". We refer to this standard treatment as the "ground-state+continuum" approximation.
To derive sum rules for m and f, we express the correlation function Π (p) in terms of X_4c and X_4c^∗
tetraquarks' masses and couplings. Having inserted a complete set of intermediate states with the same quark content and quantum numbers as these tetraquarks, and having carried out the integration over x, we get
Π ^Phys(p)=⟨ 0|J|X_4c(p)⟩⟨ X_4c(p)|J^†|0⟩/m_0^2-p^2
+⟨ 0|J|X_4c^∗(p)⟩⟨ X_4c^∗(p)|J^†|0⟩/m^2-p^2+⋯ .
This expression contains two terms corresponding to the ground-state
particle X_4c with the mass m_0 and a contribution coming
from the first radially excited state, i.e., from the 2S-level tetraquark X_
4c^∗. Here, the ellipses stand for the effects of higher
resonances and continuum states. This approach is the "ground-level + first excited state + continuum" approximation.
The Π ^Phys(p) can be simplified using the matrix elements
⟨ 0|J|X_4c(p)⟩ =f_0m_0, ⟨ 0|J|X_4c
^∗(p)⟩ =fm,
where f_0 and f are current couplings of the X_4c and X_
4c^∗, respectively. Then, we get
Π ^Phys(p)=f_0^2m_0^2/m_0^2-p^2+
f^2m^2/m^2-p^2+⋯ .
This function contains only the Lorentz structure proportional to I; hence, the invariant amplitude Π ^Phys(p^2) necessary for our analysis is given by the right-hand side of Eq. (<ref>).
The QCD side of the sum rules is formed by the correlation function Π (p)
expressed using c-quark propagators and calculated in the operator product
expansion (OPE) with some accuracy. In the case under discussion,
Π ^OPE(p) and the corresponding amplitude Π ^OPE
(p^2) were computed in Ref. <cit.>. There, we also found
the parameters m_0 and f_0 of the ground-state particle X_
4c, which appear in the present analysis as input quantities.
After the Borel transformation and continuum subtraction the SR equality
takes the form
f^2m^2e^-m^2/M^2=Π
(M^2,s_0)-f_0^2m_0^2e^-m_0^2/M^2,
which, in conjunction with the derivative of Eq. (<ref>) over d/d(-1/M^2), can be utilized to find the sum rules for m and f. Here, Π (M^2,s_0) is the amplitude Π ^OPE(p^2) after the
Borel transformation and subtraction operations, and M^2 and s_0 are
corresponding parameters.
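Carrying out the differentiation explicitly yields the closed-form expressions (written here for completeness; this is the standard step of the method, with the shorthand Π ^'(M^2,s_0)≡ dΠ (M^2,s_0)/d(-1/M^2)):

m^2=[ Π ^'(M^2,s_0)-f_0^2m_0^4e^-m_0^2/M^2] /[ Π (M^2,s_0)-f_0^2m_0^2e^-m_0^2/M^2] ,

f^2=e^m^2/M^2/m^2[ Π (M^2,s_0)-f_0^2m_0^2e^-m_0^2/M^2] .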
The Π (M^2,s_0) is given by the formula
Π (M^2,s_0)=∫_16m_c^2^s_0dsρ ^OPE
(s)e^-s/M^2,

where ρ ^OPE(s) is the two-point spectral density. It consists of the perturbative contribution ρ ^pert.(s) and the dimension-4 nonperturbative term ∼⟨α _sG^2/π⟩. The explicit expression for ρ ^pert.(s) can be found in Ref. <cit.>.
To carry out numerical computations, one needs the gluon vacuum condensate ⟨α _sG^2/π⟩ =(0.012± 0.004) GeV^4 and
the c-quark mass m_c=(1.27± 0.02) GeV. Another important issue to be clarified is the choice of the parameters M^2 and s_0. The regions in which they can be varied should meet the known restrictions of SR computations. Stated differently, M^2 and s_0 have to be fixed in such a way as to ensure the dominance of the pole contribution (PC) and of the perturbative term over the nonperturbative one. Other important constraints are the convergence of the OPE and the stability of extracted observables against variations of the Borel parameter M^2. Because Π (M^2,s_0) does not contain quark and mixed condensates, the dominance of PC and the stability of the extracted quantities play the key role in choosing the parameters M^2 and s_0.
In the first phase of computations, we fix the regions for M^2 and s_0 in such a manner as to consider only the ground-state term in Eq. (<ref>). This task was fulfilled in Ref. <cit.>, where M^2 and s_0 were varied inside the regions

M^2∈ [5.5,7] GeV^2, s_0∈ [49,50] GeV^2.
As a result, we evaluated the mass m_0 and coupling f_0 of the
ground-state tetraquark X_4c
m_0 = (6570± 55) MeV,
f_0 = (5.61± 0.39)× 10^-2 GeV^4.
At the second stage of studies, we use m_0 and f_0 in Eq. (<ref>) as input parameters and calculate the mass m and coupling f of
the excited state
m = (7235± 75) MeV,
f = (8.0± 0.9)× 10^-2 GeV^4.
To compute Eq. (<ref>), we use the working regions
M^2∈ [5.5,7] GeV^2, s_0^∗∈ [55,56] GeV^2,
which obey all constraints imposed on Π (M^2,s_0) by the SR analysis. In fact, the pole contribution changes within the limits 0.93≥PC≥ 0.71; at the minimum M^2=5.5 GeV^2, the nonperturbative term is negative and constitutes only a 1.4% part of the correlation function. The extracted quantities m and f bear a residual dependence on the parameters M^2 and s_0^∗, which is the main source of theoretical uncertainties. These effects are equal to ± 1% in the case of m and to ± 11% for f, staying within limits acceptable for SR computations. The behavior of the mass m under variations of M^2 and s_0^∗ is shown in Fig. <ref>.
Because we consider two terms in Eq. (<ref>) and find parameters of both the ground-level and radially excited tetraquarks, it is necessary to check the self-consistency of the performed studies. Indeed, the parameters s_0 and s_0^∗ separate the contributions of interest from the ones that are modeled using the assumption of quark-hadron duality. Therefore, in these studies the inequalities m_0^2<s_0 and s_0<m^2<s_0^∗ should hold; with the results of the numerical analysis at hand, it is not difficult to verify these relations.
The prediction m=7235 MeV for the mass of the 2S excited tetraquark X_4c^∗ is, within the uncertainties of the calculations and the errors of the experiments, consistent with the values m^ATL=7220± 30_-30^+20 MeV and m^CMS=7287_-18^+20± 5 MeV. In our article <cit.> we supposed that the resonance X(7300) is the 2S excited state of X(6600). This assumption is based on the fact that the ATLAS Collaboration detected the resonances X(6600) and X(7300) in the J/ψ J/ψ and J/ψψ ^' mass distributions, respectively. Because the mass difference between the mesons ψ ^' and J/ψ is around 590 MeV, and a comparable mass splitting (600-735) MeV exists in the X(7300)-X(6600) system, it is natural to assume that X(7300) is an excitation of X(6600). Our results for the masses of X_4c and X_4c^∗ differ by 665 MeV and seem to support this scenario.
§.§ The full width of X_4c^∗
The mass m of the excited tetraquark X_4c^∗ allows us to determine its decay channels and to evaluate the full width of this state. It is clear that decays to J/ψ J/ψ, J/ψψ ^', η_cη_c, η_cη_c(2S), η_cχ_c1, χ_c0χ_c0, and χ_c1χ_c1 mesons are among such allowed channels. It is worth noting that the decay X_4c^∗→η_cχ_c1 is a P-wave process, whereas the remaining ones are S-wave decays.
We explain in detail only the processes X_4c^∗→ J/ψ J/ψ and X_4c^∗→ J/ψψ ^', and provide final results for the other channels. The partial widths of these decays are governed by the strong
couplings g_i^∗ at the vertices X_4c^∗J/ψ
J/ψ, and X_4c^∗J/ψψ ^'. These
couplings can be evaluated using the following three-point correlation
function
Π _μν(p,p^')=i^2∫ d^4xd^4ye^ip^'ye^-ipx⟨ 0|𝒯{J_μ^ψ(y)
× J_ν^ψ(0)J^†(x)}|0⟩ ,
where J_μ^ψ(x) is the interpolating current for the mesons J/ψ and ψ ^'
J_μ^ψ(x)=c̄_i(x)γ_μ c_i(x),
with i=1,2,3 being the color indices.
We apply the usual recipes of the sum rule method and express the correlation function Π _μν(p,p^') in terms of the physical parameters of particles. Because the tetraquark X_4c^∗ decays both to J/ψ J/ψ and J/ψψ ^' pairs, we isolate in Π _μν(p,p^') the contributions of the mesons J/ψ and ψ^' from those of higher resonances and continuum states. But the
current J(x) also couples to the ground-state tetraquark X_4c
. Therefore, for the physical side of the sum rule Π _μν^
Phys(p,p^'), we get
Π _μν^Phys(p,p^')=∑_I=1,2⟨
0|J_μ^ψ|J/ψ (p^')⟩/p^' 2-m_J^2
⟨ 0|J_ν^ψ|J/ψ (q)⟩/q^2-m_J^2
×⟨ J/ψ (p^')J/ψ (q)|X_4c
^I(p)⟩⟨ X_4c^I(p)|J^†|0⟩/
p^2-m_I^2
+∑_I=1,2⟨ 0|J_μ^ψ|ψ (p^')⟩/
p^' 2-m_ψ^2⟨ 0|J_ν^ψ|J/ψ
(q)⟩/q^2-m_J^2
×⟨ψ (p^')J/ψ (q)|X_4c^I(p)⟩⟨ X_4c^I(p)|J^†|0⟩/p^2-m_I^2+⋯ ,
where m_J=(3096.900± 0.006) MeV and m_ψ=(3686.10±
0.06) MeV are the masses of the J/ψ and ψ ^' mesons
<cit.>. To write down Π _μν^Phys(p,p^') in a compact form, we use in Eq. (<ref>) notations X_
4c^1=X_4c, X_4c^2=X_4c
^∗ and m_1^2=m_0^2, m_2^2=m^2.
The function Π _μν^Phys(p,p^') can be expressed in terms of the masses and decay constants (couplings) of the mesons and tetraquarks. To this end, one should use the matrix elements of the
tetraquarks Eq. (<ref>), as well as the matrix elements
⟨ 0|J_μ^ψ|J/ψ (p)⟩ = f_Jm_Jε _μ(p),
⟨ 0|J_μ^ψ|ψ ^'(p)⟩ = f_ψm_ψ
ε_μ(p),
and
⟨ J/ψ (p^')J/ψ (q)|X_4c(p)⟩
=g_1(q^2)[ q· p^'ε ^∗(p^')·ε ^∗(q).
. -q·ε ^∗(p^')p^'·ε ^∗(q)] ,
⟨ψ (p^')J/ψ (q)|X_4c(p)⟩
=g_2(q^2)[ q· p^'ε^∗(p^')·ε ^∗(q).
. -q·ε^∗(p^')p^'·ε ^∗(q)] .
Here, f_J=(409± 15) MeV, f_ψ=(279± 8) MeV
and ε _μ, ε_μ are the decay
constants and polarization vectors of the mesons J/ψ and ψ
^'<cit.>, respectively. In the vertices
with the excited tetraquark X_4c^∗(p) one should write
form factors g_1^∗(q^2) and g_2^∗(q^2).
Having used these matrix elements and carried out simple calculations, we
find for Π _μν^Phys(p,p^')
Π _μν^Phys(p,p^')=g_1(q^2)f_0m_0f_J^2m_J^2F_μν(m_0,m_J)
+g_1^∗(q^2)fmf_J^2m_J^2F_μν(m,m_J)
+g_2(q^2)f_0m_0f_Jm_Jf_ψm_ψF_μν(m_0,m_ψ)
+g_2^∗(q^2)fmf_Jm_Jf_ψm_ψF_μν(m,m_ψ)+⋯ ,
where
F_μν(a,b)=[ ( a^2-b^2-q^2) g_μν-2q_μp_ν^'] /2( p^2-a^2) (
p^' 2-b^2) (q^2-m_J^2).
As is seen, there are two structures in Π _μν^Phys
(p,p^') which can be used for SR analysis. To derive the sum rules
for the form factors g_i^(∗ )(q^2), we work with the Lorentz
structure g_μν, and corresponding invariant amplitude Π ^
Phys(p^2,p^' 2,q^2).
After the double Borel transformation of the function Π ^Phys
(p^2,p^' 2,q^2) over the variables -p^2 and -p^' 2
, we get
ℬΠ ^Phys(p^2,p^'
2,q^2)=g_1(q^2)f_0m_0f_J^2m_J^2F(m_0,m_J)
+g_1^∗(q^2)fmf_J^2m_J^2F(m,m_J)
+g_2(q^2)f_0m_0f_Jm_Jf_ψm_ψF(m_0,m_ψ)
+g_2^∗(q^2)fmf_Jm_Jf_ψm_ψF(m,m_ψ)+⋯ ,
with F(a,b) being equal to
F(a,b)=( a^2-b^2-q^2) /2(q^2-m_J^2)
e^-a^2/M_1^2e^-b^2/M_2^2.
The second component of the sum rules is the same correlation function Π
_μν^OPE(p,p^'), but calculated using the c
-quark propagators. The function Π _μν^OPE(p,p^') and invariant amplitude Π ^OPE(p^2,p^' 2,q^2)
were computed in Ref. <cit.>. Having equated ℬΠ
^Phys(p^2,p^' 2,q^2) and the doubly Borel
transformation of the amplitude Π ^OPE(p^2,p^'
2,q^2), and performed the continuum subtractions, we find the sum rule
equality, the right-hand side of which is determined by the function
Π (𝐌^2,𝐬_0,q^2)=∫_16m_c^2^s_0ds
∫_4m_c^2^s_0^'ds^'ρ (s,s^',q^2)
× e^-s/M_1^2e^-s^'/M_2^2,
where 𝐌^2=(M_1^2,M_2^2) and 𝐬
_0=(s_0,s_0^') are the Borel and continuum threshold
parameters, respectively. The spectral density ρ (s,s^',q^2) is found as the imaginary part of Π ^OPE(p^2,p^' 2,q^2). Let us note that the parameters (M_1^2,s_0) and (M_2^2,s_0^') correspond to the X_4c-X_4c^∗ and J/ψ -ψ ^' channels, respectively.
The equality Eq. (<ref>) obtained in this way contains four unknown form factors g_1(2)^(∗ )(q^2). One possible method to extract them from this equality is to calculate its derivatives over -1/M_1^2 and -1/M_2^2. But then the final expressions for g_1(2)^(∗ )(q^2) become rather complicated, which may reduce the accuracy of the numerical analyses. Here, we pursue an alternative strategy: by choosing appropriate subtraction parameters in the X_4c-X_4c^∗ and J/ψ -ψ ^' channels, we include the terms from Eq. (<ref>) in the analysis one by one. These operations change the number of components in ℬΠ ^Phys and the integration limits in Π (𝐌^2,𝐬_0,q^2). At each new stage, we take into account the results obtained in the previous steps and solve subsequent equations with only one unknown form factor.
First of all, let us note that the form factor g_1(q^2) was evaluated
in Ref. <cit.>. It corresponds to the vertex X_4c
J/ψ J/ψ and is necessary to compute the partial width of the decay X_4c→ J/ψ J/ψ. To calculate g_1(q^2), we
fixed parameters (M_1^2,s_0) as in Eq. (<ref>), whereas
for (M_2^2,s_0^') we used
M_2^2∈ [4,5] GeV^2, s_0^'∈ [12,13] GeV^2,

where s_0^' is limited by the squared mass m_ψ^2 of the next state in the J/ψ -ψ ^' channel, i.e., s_0^'<m_ψ^2. Afterwards, we choose (M_1^2,s_0) in accordance
with Eq. (<ref>), but do not modify (M_2^2,s_0^')
. In this way, we bring g_1^∗(q^2) into consideration and obtain the equation containing g_1(q^2) and g_1^∗(q^2).
This means that remaining terms in Eq. (<ref>) are included in
"higher resonances and continuum states" and their effects are implicitly
taken into account in Π (𝐌^2,𝐬_0,q^2) through
the quark-hadron duality. Then, using results for g_1(q^2), we
calculate the form factor g_1^∗(q^2) that determines the width
of the process X_4c^∗→ J/ψ J/ψ.
At the next stage of studies, we consider the equation for the form factors g_1(q^2) and g_2(q^2). The latter corresponds to the vertex X_
4cJ/ψψ ^', and formally describes the channel X_
4c→ J/ψψ ^'. This decay mode of X_
4c is kinematically forbidden, because the threshold 6737 MeV
for production of the J/ψψ ^' pair exceeds
the mass of the tetraquark X_4c. But g_2(q^2) is required
to determine the form factor g_2^∗(q^2) of interest. To extract g_2(q^2), we fix (M_1^2,s_0) by means of Eq. (<ref>),
but choose (M_2^2,s_0^∗') in the form
M_2^2∈ [4,5] GeV^2, s_0^∗'∈ [15,16] GeV^2,
where s_0^∗'<m_ψ (3S)^2. Finally, using for the X_
4c-X_4c^∗ and J/ψ -ψ ^' channels
Eqs. (<ref>) and (<ref>), we calculate the last form
factor g_2^∗(q^2).
The SR method allows one to calculate the form factors in the deep-Euclidean
region q^2<0. All functions g_i^(∗ )(q^2) in the present work
are calculated in the region q^2=-(1-10) GeV^2. But partial
widths of the decays under consideration are determined by values of these
form factors at the mass shell q^2=m_J^2. To solve this problem, we
introduce a new variable Q^2=-q^2 and denote the obtained functions by
g_i^(∗ )(Q^2). Afterwards, we use fit functions 𝒢_i^(∗ )(Q^2) that at momenta Q^2>0 are equal to the SR results, but can be extrapolated to the domain Q^2<0. In the present article, we use the functions 𝒢_i(Q^2)
𝒢_i(Q^2)=𝒢_i^0exp[ c_i^1
Q^2/m^2+c_i^2( Q^2/m^2) ^2] ,
with parameters 𝒢_i^0, c_i^1 and c_i^2. It is
worth noting that in the case of g_1^∗(q^2) and g_2^∗(q^2) the parameter m in Eq. (<ref>) is the mass of the
tetraquark X_4c^∗, whereas for the intermediate functions
g_1(q^2) and g_2(q^2), we use the mass m_0 of X_4c.
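As an illustration of this extrapolation step, the following minimal Python sketch fits the ansatz of Eq. (<ref>) to sum-rule points and evaluates it at the mass shell. The data array here is mock input generated for demonstration, and the use of scipy's curve_fit is our choice of tool, not a statement about the actual fitting code.

import numpy as np
from scipy.optimize import curve_fit

m = 7.235     # mass of X_4c* in GeV (central value found above)
m_J = 3.0969  # J/psi mass in GeV

# Mock SR output for g1*(Q^2) at spacelike momenta Q^2 = 1..10 GeV^2
Q2 = np.linspace(1.0, 10.0, 10)
g1_sr = 0.68 * np.exp(3.93 * Q2 / m**2 - 4.33 * (Q2 / m**2) ** 2)

def ansatz(Q2, G0, c1, c2):
    """Fit function G(Q^2) = G0 * exp[c1 Q^2/m^2 + c2 (Q^2/m^2)^2]."""
    x = Q2 / m**2
    return G0 * np.exp(c1 * x + c2 * x**2)

params, _ = curve_fit(ansatz, Q2, g1_sr, p0=(0.5, 1.0, -1.0))

# Extrapolate to the timelike mass shell Q^2 = -m_J^2,
# where the strong coupling g1* is defined
print(f"g1* = {ansatz(-m_J**2, *params):.2f} GeV^-1")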
Results obtained for g_1^∗(q^2) and g_2^∗(q^2) are
plotted in Fig. <ref>. Computations demonstrate that 𝒢
_1^0∗=0.68 GeV^-1, c_1^1∗=3.93, and c_1^2∗=-4.33 lead to good agreement with the sum rule data for g_1^∗(Q^2). At the mass shell q^2=m_J^2 the function 𝒢_1^∗(Q^2) is equal to
g_1^∗≡𝒢_1^∗(-m_J^2)=(3.1± 0.5)×
10^-1 GeV^-1.
The width of the decay X_4c^∗→ J/ψ J/ψ
can be obtained by employing the expression
Γ[ X_4c^∗→ J/ψ J/ψ]
=g_1^∗ 2λ _1/8π( m_J^4/m^2+
2λ _1^2/3) ,
where λ _1=λ (m,m_J,m_J) and
λ (m_1,m_2,m_3)=1/2m_1[ m_1^4+m_2^4+m_3^4-2(m_1^2m_2^2+m_1^2m_3^2+m_2^2m_3^2)] ^1/2.
Then it is not difficult to find that
Γ[ X_4c^∗→ J/ψ J/ψ]
=(30.1± 8.3) MeV.
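As a quick arithmetic cross-check of this number (our own estimate from the central values quoted above), note that for equal final-state masses λ reduces to the center-of-mass momentum,

λ _1=λ (m,m_J,m_J)=m/2√(1-4m_J^2/m^2)≈ 1.87 GeV,

so that g_1^∗ 2λ _1[ m_J^4/m^2+2λ _1^2/3] /8π≈ 29 MeV, consistent with Eq. (<ref>) within the rounding of g_1^∗.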
In the case of g_2^∗(Q^2), similar investigations give the following results for the parameters of the function 𝒢_2^∗(Q^2): 𝒢_2^0∗=0.54 GeV^-1, c_2^1∗=3.28, and c_2^2∗=-4.26. The strong coupling g_2^∗ is equal to
g_2^∗≡𝒢_2^∗(-m_J^2)=(2.5± 0.5)×
10^-1 GeV^-1.
The partial width of the process X_4c^∗→ J/ψψ ^' is given by the formula
Γ[ X_4c^∗→ J/ψψ ^'
] =g_2^∗ 2λ _2/8π( m_ψm_J^2/m^2+2λ _2^2/3) ,
where λ _2=λ (m,m_ψ,m_J). This leads to the
prediction
Γ[ X_4c^∗→ J/ψψ ^'
] =(11.5± 3.3) MeV.
The results obtained for these two decay channels are collected in Table
<ref>.
The decays X_4c^∗→η _cη _c and X_
4c^∗→η _cη _c(2S) can be explored in
the context of this scheme as well. In this case, the double Borel
transformation of the amplitude Π _η _c^Phys
(p^2,p^' 2,q^2) is equal to
ℬΠ _η _c^Phys(p^2,p^'
2,q^2)=g_3(q^2)f_0m_0f_η _c^2m_η _c^4/
4m_c^2R(m_0,m_η _c)
+g_3^∗(q^2)fmf_η _c^2m_η _c^4/4m_c^2
R(m,m_η _c)+g_4(q^2)f_0m_0f_η _cm_η
_c^2/4m_c^2
× f_η _c(2S)m_η _c(2S)^2R( m_0,m_η
_c(2S)) +g_4^∗(q^2)fmf_η _cm_η _c^2
/4m_c^2
× f_η _c(2S)m_η _c(2S)^2R(m,m_η _c(2S))+⋯
,
where m_η _c=(2983.9± 0.4) MeV, f_η _c=(398.1±
1.0) MeV and m_η _c(2S)=(3637.5± 1.1) MeV, f_η _c(2S)=331 MeV are the spectroscopic parameters of the η _c and η _c(2S) mesons <cit.>. The
function R(a,b) is defined by the formula
R(a,b)=( a^2+b^2-q^2) /2(q^2-m_η _c^2)
e^-a^2/M_1^2e^-b^2/M_2^2.
The invariant amplitude Π _η _c^OPE(p^2,p^'
2,q^2) was calculated in our article <cit.>. Here, one
should take into account that the regions (M_2^2,s_0^') and (M_2^2,s_0^∗') for η _c-η _c(2S) channel are
given by the expressions
M_2^2∈ [3.5,4.5] GeV^2, s_0^'∈ [11,12] GeV^2,
and
M_2^2∈ [3.5,4.5] GeV^2, s_0^∗'∈ [13,14] GeV^2,
respectively. In the case of g_3^∗(Q^2), our studies lead to the following predictions for the parameters of the function 𝒢_3^∗(Q^2): 𝒢_3^0∗=0.39 GeV^-1, c_3^1∗=4.01, and c_3^2∗=-4.99. Then the coupling g_3^∗ is equal to
g_3^∗≡𝒢_3^∗(-m_η _c^2)=(1.7±
0.4)× 10^-1 GeV^-1.
The width of the decay X_4c^∗→η _cη
_c can be found by means of the formula
Γ[ X_4c^∗→η _cη _c]
=g_3^∗ 2m_η _c^2λ _3/8π( 1+
λ _3^2/m_η _c^2) ,
where λ _3=λ (m,m_η _c,m_η _c). Numerical
computations yield
Γ[ X_4c^∗→η _cη _c]
=(30.6± 10.5) MeV.
For the second decay X_4c^∗→η _cη
_c(2S), we get
g_4^∗≡𝒢_4^∗(-m_η _c^2)=(1.4±
0.3)× 10^-1 GeV^-1,
Γ[ X_4c^∗→η _cη _c(2S)
] =(16.6± 5.5) MeV,
where 𝒢_4^∗(Q^2) is the function with parameters 𝒢_4^0∗=0.32 GeV^-1, c_4^1∗=4.06, and
c_4^2∗=-5.02.
Treatment of the channels X_4c^∗→η _cχ
_c1, χ _c0χ _c0, and χ _c1χ _c1 is done by taking
into account vertices of the tetraquarks X_4c and X_4c
^∗ with these meson pairs. Therefore, the physical side of the sum
rules consists of two terms. In the case of the η _cχ _c1
mesons, both the ground-level tetraquark X_4c and its excited
state X_4c^∗ decay to this meson pair. Therefore, to
find the partial decay width of the process X_4c^∗→η _cχ _c1, we use the form factor g_5(q^2)
studied in Ref. <cit.>, and extract g_5^∗(q^2)
necessary to compute the coupling g_5^∗ at the mass shell q^2=m_η _c^2. The corresponding fit function 𝒢
_5^∗(Q^2) has the parameters: 𝒢_5^0∗=3.46, c_5^1∗=3.59, and c_5^2∗=-4.72.
The remaining processes X_4c^∗→χ_c0χ_c0 and χ_c1χ_c1 are investigated in the same manner, the difference being that the decays of X_4c to the mesons χ_c0χ_c0 and χ_c1χ_c1 are not kinematically allowed channels; nevertheless, we compute the relevant form factors to find the strong couplings g_6^∗ and g_7^∗ of interest. The related correlation functions are calculated in the present work for the first time and are given by the expressions (<ref>) and (<ref>). The final results of the analysis are collected in Table <ref>. Let us note only that in the numerical computations we employ the SR predictions for the decay constants f_χ _c1=(344± 27) MeV and f_χ _c0=343 MeV <cit.>.
Having used the results for the partial widths of the decay channels of the excited tetraquark X_4c^∗, we estimate its full width
Γ =(144± 18) MeV.
§ HADRONIC MOLECULE Χ _C1Χ _C1
Here, we investigate the hadronic molecule ℳ=χ _c1χ _c1
and calculate the mass and current coupling of this structure, which will be
used to determine its kinematically allowed decay channels. Decays of the
molecule ℳ and its full width are also studied in this section.
§.§ Mass and current coupling
The sum rules for the mass m and current coupling f of the molecule ℳ can be extracted by exploring the
correlation function
Π (p)=i∫ d^4xe^ipx⟨ 0|𝒯{J(x)
J^†(0)}|0⟩ .
Here, J(x) is the interpolating current for ℳ
J(x)=c̄_a(x)γ_5γ_μ c_a(x) c̄_b(x)γ_5γ^μ c_b(x),
with a and b being color indices. We are going to calculate the spectroscopic parameters of the ground-level molecule ℳ; therefore, the physical side of the SRs is given by only one term
Π ^Phys(p)=f^2m^2/
m^2-p^2+⋯ .
It is calculated by taking into account the matrix element
⟨ 0|J|ℳ⟩ =fm.
The invariant amplitude that is required for the following analysis is Π ^
Phys(p^2)=f^2m^2/(m
^2-p^2).
The correlation function Π ^OPE(p) in terms of the c-quark
propagators is determined by Eq. (<ref>)
Π ^OPE(p)=i∫ d^4xe^ipx{Tr[ γ_5γ_μS_c^ba^'(x)γ_νγ_5S_c^a^'b(-x)] Tr[ γ_5γ^μS_c^ab^'(x)γ^νγ_5S_c^b^'a(-x)]
-Tr[ γ_5γ_μS_c^bb^'(x)γ_νγ_5S_c^b^'a(-x)γ_5γ^μS_c^aa^'(x)γ^νγ_5S_c^a^'b(-x)]
-Tr[ γ_5γ_μS_c^ba^'(x)γ_νγ_5S_c^a^'a(-x)γ_5γ^μS_c^ab^'(x)γ^νγ_5S_c^b^'b(-x)]
+Tr[ γ_5γ_μS_c^bb^'(x)γ_νγ_5S_c^b^'b(-x)] Tr[ γ_5γ^μS_c^aa^'(x)γ^νγ_5S_c^a^'a(-x)] } .
It is convenient to denote the invariant amplitude of the QCD side by Π ^
OPE(p^2). Then, the sum rules for the mass and current coupling
take simple forms
m^2=Π ^'(M^2,s_0)/Π (M^2,s_0)
and
f^2=e^m^2/M^2/m^2Π
(M^2,s_0),
where Π ^'(M^2,s_0)=dΠ (M^2,s_0)/d(-1/M^2). Here, Π (M^2,s_0) is the amplitude Π ^OPE(p^2) obtained
after the Borel transformation and continuum subtraction operations.
Computations lead to the following constraints on the parameters M^2 and
s_0
M^2∈ [6,8] GeV^2, s_0∈ [63,65] GeV^2.
It is not difficult to check that PC meets the usual requirements of SR computations. In Fig. <ref>, we plot the dependence of the pole contribution on the Borel parameter. It is seen that, except for a small region, PC is larger than 0.5. On average in s_0, the PC exceeds 0.5 for all values of M^2.
The mass and current coupling of the molecule ℳ are
m = (7180± 120) MeV,
f = (1.06± 0.13)× 10^-1 GeV^4,
respectively. It is worth noting that m and f in Eq. (<ref>) are the mean values of the mass and current coupling averaged over the working regions (<ref>). The mass m overshoots the 7022 MeV threshold of two χ _c1 mesons by approximately 160 MeV, and the molecule is therefore unstable against decays to these particles.
In Fig. <ref>, we plot the mass m as a function of M^2 and s_0, in which its residual dependence on these parameters is
clear. It is also useful to estimate the gap between the ground-state ℳ and excited molecules ℳ^∗. The mass m^∗ of the state ℳ^∗ should obey the
constraint m^∗≥√(s_0), i.e., m
^∗≥ 8 GeV, which implies an approximately 800 MeV mass splitting between these molecules.
§.§ Width of ℳ
The decay channels of the hadronic molecule ℳ do not differ from those of the tetraquark X_4c^∗. A difference appears in the treatment of these processes. Indeed, the molecule ℳ is the ground-state particle in its class; therefore, the physical side of the relevant sum rules in the ℳ channel contains terms connected only with its decays.
Because the resonances under investigation were detected in the di-J/ψ
and J/ψψ ^' mass distributions, we concentrate on the
decays ℳ→ J/ψ J/ψ and ℳ→
J/ψψ ^'. The correlation function required for this analysis
is given by the formula
Π_μν(p,p^') = i^2∫
d^4xd^4ye^ip^'ye^-ipx⟨ 0|𝒯{J_μ^ψ(y)
× J_ν^ψ(0)J^†(x)}|0⟩ .
As usual, we express Π_μν(p,p^') in terms
of the physical parameters of particles involved in the decay process. To
this end, we write it in the following form
Π_μν^Phys(p,p^')=⟨
0|J_μ^ψ|J/ψ (p^')⟩/p^' 2-m_J^2
⟨ 0|J_ν^ψ|J/ψ (q)⟩/q^2-m_J^2
×⟨ J/ψ (p^')J/ψ (q)|ℳ(p)⟩
⟨ℳ(p)|J^†|0⟩/p^2-m
^2
+⟨ 0|J_μ^ψ|ψ (p^')⟩/p^'
2-m_ψ^2⟨ 0|J_ν^ψ|J/ψ (q)⟩/
q^2-m_J^2
×⟨ψ (p^')J/ψ (q)|ℳ(p)⟩
⟨ℳ(p)|J^†|0⟩/p^2-m
^2+⋯ .
We have already defined the matrix elements of the hadronic molecule ℳ and mesons J/ψ and ψ ^'. The vertices ℳJ/ψ J/ψ and ℳJ/ψψ ^' after
some substitutions are given by Eq. (<ref>). As in the previous
section, we use the amplitude Π^Phys
(p^2,p^' 2,q^2) which in Π_μν^
Phys(p,p^') corresponds to a term proportional to g_μν.
The double Borel transformation of the function Π^
Phys(p^2,p^' 2,q^2) over the variables -p^2 and -p^' 2 is equal to
ℬΠ^Phys(p^2,p^'
2,q^2)=G_1(q^2)fmf_J^2m_J^2F(
m,m_J)
+G_1^∗(q^2)fmf_Jm_Jf_ψm_ψF(m,m_ψ)+⋯ .
The correlation function Π_μν^OPE
(p,p^') is given by the formula
Π_μν^OPE(p,p^')=2i^2∫ d^4xd^4ye^-ipxe^ip^'y{Tr[ γ_νS_c^jb(-x)γ^αγ_5S_c^bj(x)] Tr[ γ_μS_c^ia(y-x)γ_αγ_5S_c^ai(x-y)]
-Tr[ γ_μS_c^ia(y-x)γ_αγ_5S_c^aj(x)γ_νS_c^jb(-x)γ^αγ_5S_c^bi(x-y)] } .
The QCD side of the sum rule and amplitude Π^OPE
(p^2,p^' 2,q^2) are extracted from this expression. The
strategy pursued in our study of these processes repeats the one used in Sec. <ref> when considering decays of the tetraquark X_4c^∗. We first determine the form factor G_1(q^2) utilizing the "ground-state + continuum" scheme. The parameters (M_1^2,s_0) are universal for all decays of ℳ and are presented in Eq. (<ref>). The second pair of parameters (M_2^2,s_0^') corresponding to the J/ψ J/ψ decay can be found in Eq. (<ref>). Once G_1(q^2) is determined, in the second stage of computations we choose (M_2^2,s_0^∗') from Eq. (<ref>) and employ the information on G_1(q^2) to find the form factor G_1^∗(q^2), responsible for the process ℳ→ J/ψψ
^'. The functions 𝒢_8(Q^2) and 𝒢
_8^∗(Q^2) are formed by the parameters
𝒢_8^0 = 0.76 GeV^-1 , c_8^1=3.32,c_8^2=-4.19,
𝒢_8^0∗ = 0.68 GeV^-1,c_8^1∗=3.20,c_8^2∗=-4.11.
The strong couplings G_1 and G_1^∗ are extracted from these
functions at the mass shells Q^2=-m_J^2.
This approach is also valid for the channels ℳ→η
_cη _c and ℳ→η _cη _c(2S). The
correlation function required for these decays is written down below
Π ^OPE(p,p^')=2∫ d^4xd^4ye^-ipxe^ip^'y{Tr[ γ_5S_c^ia(y-x)γ_αγ_5S_c^ai(x-y)] Tr[ γ_5S_c^jb(-x)γ^αγ_5S_c^bj(x)]
-Tr[ γ_5S_c^ia(y-x)γ_αγ_5S_c^aj(x)γ_5S_c^jb(-x)γ^αγ_5S_c^bi(x-y)] } .
The functions 𝒢_9(Q^2) and 𝒢_9^∗(Q^2)
needed to extrapolate the form factors G_2(q^2) and G_2^∗(q^2) are determined by the parameters: 𝒢_9^0=0.46
GeV^-1 , c_9^1=3.93, c_9^2=-4.83 and 𝒢
_9^0∗=0.30 GeV^-1,c_9^1∗=3.90, c_9^2∗=-4.81. These functions at the mass shells Q^2=-m_J^2 fix the
couplings G_2 and G_2^∗, respectively.
The decays ℳ→η _cχ _c1, χ _c0χ
_c0, and χ _c1χ _c1 are investigated directly in the context
of the "ground-state + continuum" approach. Corresponding functions Π
_μ^OPE(p,p^'), Π ^OPE(p,p^')
and Π_μν^OPE(p,p^') can found in
Appendix as Eqs. (<ref>) -(<ref>). Predictions obtained for the
partial widths of different modes of the hadronic molecule ℳ,
strong couplings and related parameters are presented in Table <ref>. It should be noted that, to collect results obtained in this
work in the framework of a single Table, the couplings G_1^∗, G_2 and G_2^∗ are placed there under numbers G_2^∗, G_3 and G_4^∗, respectively.
For the full width of the hadronic molecule, we get
Γ=(169± 21) MeV,
which characterizes it as a wide structure.
§ SUMMING UP
In the present work, we have explored the radially excited tetraquark X_4c^∗ and the hadronic molecule ℳ=χ _c1χ _c1. We have computed their masses and full widths using the QCD sum rule method, intending to confront the obtained results with the available data of the ATLAS and CMS Collaborations for the heaviest resonance X(7300). LHCb fixed this state at 7.2 GeV, but did not provide other information. CMS measured the parameters of this resonance and found that
m^CMS = 7287_-18^+20± 5 MeV,
Γ ^CMS = 95_-40^+59± 19 MeV.
The ATLAS Collaboration observed X(7300) in the J/ψψ ^'
mass distribution and also reported the mass and width of this state
m^ATL = 7220± 30_-30^+20 MeV,
Γ ^ATL = 100_-70-50^+130+60 MeV.
As is seen, the experimental data suffer from large errors, which are relatively small in the case of Eq. (<ref>).
Comparing our findings, the mass m=(7235± 75) MeV of the excited tetraquark X_4c^∗ and the mass (7180± 120) MeV of the hadronic molecule ℳ, with the corresponding experimental data, and taking into account the errors of the calculations and measurements, we conclude that both masses are compatible with m^CMS and m^ATL. Using only the central values, we can state that both predictions are closer to the ATLAS datum. Therefore, at this phase of the analysis, it is difficult to make an assignment for the resonance X(7300).
The full widths of the structures X_4c^∗ and ℳ provide very important information for this purpose. It is interesting that X(7300) is the narrowest fully charmed state detected by the ATLAS and CMS experiments, provided one ignores the errors of measurements. Among the four-quark structures explored in the present work, the tetraquark X_4c^∗ has a width close to the experimental data. Therefore, we consider it a natural candidate for the observed state X(7300). The molecule ℳ, due to large theoretical and experimental uncertainties, may also be interpreted as the resonance X(7300), or as some part of this state in a tetraquark-molecule mixing model.
It is known that in the framework of the sum rule method physical observables can be evaluated only with some accuracy. At the same time, this method allows us to estimate the uncertainties of the relevant analysis. The uncertainties found for the masses and widths of the structures X_4c^∗ and ℳ are typical for such investigations and can hardly be reduced. Therefore, for more credible conclusions about the nature of X(7300), one needs more precise experimental data. This is true not only for X(7300), but also for the other fully charmed X resonances.
*
§ DIFFERENT CORRELATION FUNCTIONS
Here, we collect expressions for the correlation functions that are employed to calculate some of the strong couplings. In the case of the decay X_
4c^∗→χ _c0χ _c0 the correlation
function Π ^OPE(p,p^') is given by the formula
Π ^OPE(p,p^')=2i^2∫ d^4xd^4ye^-ipxe^ip^'y{Tr[ S_c^ia(y-x)γ_μS_c^jb(-x)S_c^bj(x)γ^μS_c^ai(x-y)]
-Tr[ S_c^ia(y-x)γ_μS_c^jb(-x)S_c^aj(x)γ^μS_c^bi(x-y)] } .
The fit function 𝒢_6^∗(Q^2) used to find the strong
coupling g_6^∗ is fixed by the parameters 𝒢_6^0∗=0.51 GeV^-1, c_6^1∗=3.11, and c_6^2∗=-3.57.
For the decay X_4c^∗→χ _c1χ
_c1 the function Π _μν^OPE(p,p^') has the
following form:
Π _μν^OPE(p,p^')=2i^2∫ d^4xd^4ye^-ipxe^ip^'y{Tr[ γ_μγ_5S_c^ia(y-x)γ_αS_c^jb(-x)γ_5γ_νS_c^aj(x)γ^αS_c^bi(x-y)]
-Tr[ γ_μγ_5S_c^ia(y-x)γ_αS_c^jb(-x)γ_5γ_νS_c^bj(x)γ^αS_c^ai(x-y)] } .
In this case, the function 𝒢_7^∗(Q^2) has the
parameters: 𝒢_7^0∗=0.74 GeV^-1, c_7^1∗=2.48, and c_7^2∗=-3.01.
The correlation functions for the decays of the hadronic molecule ℳ, together with the fit functions used to calculate the relevant strong couplings, are listed below.
Decay ℳ→η _cχ _c1
Π _μ^OPE(p,p^')=2i^3∫ d^4xd^4ye^-ipxe^ip^'y{Tr[ γ_μγ_5S_c^ia(y-x)γ_αγ_5S_c^ai(x-y)] Tr[ γ_5S_c^jb(-x)γ^αγ_5S_c^bj(x)]
-Tr[ γ_μγ_5S_c^ia(y-x)γ_αγ_5S_c^aj(x)γ_5S_c^jb(-x)γ^αγ_5S_c^bi(x-y)] } ,
and the fit function 𝒢_10(Q^2) for G_5(Q^2): 𝒢_10^0=3.85, c_10^1=3.51, and c_10^2=-4.56.
Decay ℳ→χ _c0χ _c0
Π ^OPE(p,p^')=-2i^2∫ d^4xd^4ye^-ipxe^ip^'yTr[ S_c^ia(y-x)γ_αγ_5S_c^aj(x)S_c^jb(-x)γ^αγ_5S_c^bi(x-y)] ,
and G_6(Q^2): 𝒢_11^0=0.55 GeV^-1, c_11^1=3.06, and c_11^2=-3.46.
Decay ℳ→χ _c1χ _c1
Π_μν^OPE(p,p^') = 2i^2∫ d^4xd^4ye^-ipxe^ip^'y{Tr[ γ_μγ_5S_c^ia(y-x)γ_αγ_5S_c^ai(x-y)] Tr[ γ_νγ_5S_c^jb(-x)γ^αγ_5S_c^bj(x)]
-Tr[ γ_μγ_5S_c^ia(y-x)γ_αγ_5S_c^aj(x)γ_νγ_5S_c^jb(-x)γ^αγ_5S_c^bi(x-y)] } ,
and the parameters 𝒢_12^0=0.86 GeV^-1, c_12^1=2.41, and c_12^2=-2.89 to compute G_7(Q^2).
LHCb:2020bwg R. Aaij et al. (LHCb Collaboration),
Sci. Bull. 65, 1983 (2020).
Bouhova-Thacker:2022vnt E. Bouhova-Thacker (ATLAS Collaboration),
PoS ICHEP2022, 806 (2022).
CMS:2023owd A. Hayrapetyan, et al. (CMS Collaboration)
arXiv:2306.07164 [hep-ex].
Zhang:2020xtb J. R. Zhang,
Phys. Rev. D 103, 014018 (2021).
Albuquerque:2020hio R. M. Albuquerque, S. Narison,
A. Rabemananjara, D. Rabetiarivony and G. Randriamanatrika,
Phys. Rev. D 102, 094001 (2020).
Wang:2022xja Z. G. Wang,
Nucl. Phys. B 985, 115983 (2022).
Dong:2022sef W. C. Dong and Z. G. Wang,
Phys. Rev. D 107, 074010 (2023).
Faustov:2022mvs R. N. Faustov, V. O. Galkin, and E. M. Savchenko,
Symmetry 14, 2504 (2022).
Becchi:2020mjz C. Becchi, A. Giachino, L. Maiani and
E. Santopinto,
Phys. Lett. B 806, 135495 (2020).
Becchi:2020uvq C. Becchi, A. Giachino, L. Maiani, and
E. Santopinto,
Phys. Lett. B 811, 135952 (2020).
Dong:2020nwy X. K. Dong, V. Baru, F. K. Guo, C. Hanhart and
A. Nefediev,
Phys. Rev. Lett. 126, 132001 (2021). [erratum: Phys. Rev. Lett.
127, 119901 (2021)].
Dong:2021lkh X. K. Dong, V. Baru, F. K. Guo, C. Hanhart,
A. Nefediev and B. S. Zou,
Sci. Bull. 66, 2462 (2021).
Liang:2021fzr Z. R. Liang, X. Y. Wu and D. L. Yao,
Phys. Rev. D 104, 034034 (2021).
Niu:2022vqp P. Niu, Z. Zhang, Q. Wang, and M. L. Du,
arXiv:2212.06535 [hep-ph].
Yu:2022lak Z. R. Liang, X. Y. Wu and D. L. Yao,
arXiv:2212.14339 [hep-ph].
Kuang:2023vac S. Q. Kuang, Q. Zhou, D. Guo, Q. H. Yang, and
L. Y. Dai,
arXiv:2302.03968 [hep-ph].
Feng:2023agq F. Feng, Y. Huang, Y. Jia, W. L. Sang, D. S. Yang,
and J. Y. Zhang,
arXiv:2304.11142 [hep-ph].
Abreu:2023wwg L. M. Abreu, F. Carvalho, J. V. C. Cerquera, and
V. P. Goncalves,
arXiv:2306.12731 [hep-ph].
Agaev:2023wua S. S. Agaev, K. Azizi, B. Barsbay and H. Sundu,
arXiv:2304.03244 [hep-ph].
Agaev:2023gaq S. S. Agaev, K. Azizi, B. Barsbay and H. Sundu,
arXiv:2304.09943 [hep-ph].
Agaev:2023ruu S. S. Agaev, K. Azizi, B. Barsbay and H. Sundu,
arXiv:2305.03696 [hep-ph].
Shifman:1978bx M. A. Shifman, A. I. Vainshtein and V. I. Zakharov,
Nucl. Phys. B 147, 385 (1979).
Shifman:1978by M. A. Shifman, A. I. Vainshtein and V. I. Zakharov,
Nucl. Phys. B 147, 448 (1979).
PDG:2022 R. L. Workman et al. [Particle Data Group],
Prog. Theor. Exp. Phys. 2022, 083C01 (2022).
Kiselev:2001xa V. V. Kiselev, A. K. Likhoded, O. N. Pakhomova, and
V. A. Saleev,
Phys. Rev. D 65, 034013 (2002).
Hatton:2020qhk D. Hatton et al. (HPQCD Collaboration),
Phys. Rev. D 102, 054511 (2020).
VeliVeliev:2012cc E. Veli Veliev, K. Azizi, H. Sundu, and G. Kaya,
PoS (Confinement X) 339, 2012; arXiv:1205.5703.
Veliev:2010gb E. V. Veliev, H. Sundu, K. Azizi, and M. Bayar,
Phys. Rev. D 82, 056012 (2010).
|
http://arxiv.org/abs/2307.03046v1
|
20230706151231
|
Convergence Properties of Newton's Method for Globally Optimal Free Flight Trajectory Optimization
|
[
"Ralf Borndörfer",
"Fabian Danecker",
"Martin Weiser"
] |
math.OC
|
[
"math.OC",
"cs.NA",
"math.NA",
"49M05, 49M15, 49M37, 65K05, 65L10, 90C26, 90C30"
] |
The algorithmic efficiency of Newton-based methods for Free Flight Trajectory Optimization is heavily influenced by the size of the domain of convergence. We provide numerical evidence that the convergence radius is much larger in practice than what the theoretical worst case bounds suggest. The algorithm can be further improved by a convergence-enhancing domain decomposition.
§ INTRODUCTION
Today, aircraft are required to take routes in the airway network, a 3D graph over the surface of the earth. Such routes are longer and less fuel-efficient than unconstrained routes. Air traffic associations in many places, in particular in Europe and in the US, are therefore investigating options to introduce Free Flight aviation regimes that allow such routes, in an attempt to reduce congestion, travel times, and fuel consumption. By giving pilots more freedom to choose their routes, taking into account factors such as weather conditions, wind patterns, and individual aircraft performance, Free Flight can improve overall efficiency and operational flexibility.
In <cit.>, we introduced an algorithm that combines Discrete and Continuous Optimization techniques to obtain a globally optimal trajectory under Free Flight conditions. The approach involves constructing a discrete approximation of the problem in the form of a sufficiently dense graph, which implicitly generates a pool of potential candidate paths. These paths (i) can be efficiently explored using state-of-the-art shortest path algorithms, and (ii) provide suitable initial solutions for a locally convergent continuous optimization approach. Specifically, we proposed the application of Newton's method to the first-order necessary conditions, an algorithm that is known as Newton-KKT method or Sequential Quadratic Programming (SQP) <cit.>.
The efficiency of this hybrid method hinges on the graph density that is required to guarantee that a discrete candidate path lies within the domain of convergence of the continuous optimizer. The size of the domain of convergence depends on the wind conditions, and directly impacts the computational efficiency of the algorithm: A smaller convergence radius requires a denser graph and thus more discrete candidate paths that need to be considered.
In this article we provide numerical evidence that the convergence radius significantly exceeds the theoretical lower bound. This finding greatly enhances the robustness, the speed, and the practical applicability of the proposed approach beyond the theoretical guarantees that are currently known.
Furthermore, our investigation confirms that the norm that was introduced in our previous papers to quantify the size of the domain of convergence is an appropriate choice. It effectively captures the characteristics of the domain and provides meaningful insights into its extent. We finally propose a nonlinear domain decomposition-inspired algorithmic modification to increase the convergence radius and enhance optimization performance.
§ THE FREE FLIGHT TRAJECTORY OPTIMIZATION PROBLEM
Neglecting any traffic flight restrictions, we consider flight paths in the Sobolev-Space
X = {ξ∈ W^1,∞((0,1), ℝ^2) | ξ(0) = x_O, ξ(1) = x_D}
connecting origin x_O and destination x_D. A short calculation reveals that an aircraft travelling along such a path ξ with constant airspeed v̄ through a three times continuously differentiable wind field w∈ C^3(ℝ^2,ℝ^2) of bounded magnitude ‖ w‖_L^∞ < v̄ reaches the destination after a flight duration

T(ξ) = ∫_0^1 f(ξ(τ),ξ_τ(τ)) dτ,

where ξ_τ denotes the derivative of ξ with respect to the pseudo-time τ∈(0,1) and

f(ξ,ξ_τ) := t_τ = -ξ_τ^Tw + √((ξ_τ^Tw)^2+(v̄^2 - w^Tw)(ξ_τ^Tξ_τ))/v̄^2 - w^Tw,

see <cit.>.
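For the reader's convenience, the short calculation behind this formula can be sketched as follows (our reconstruction): with x(t) = ξ(τ(t)) and x_t = ξ_τ/t_τ, the airspeed condition ‖ x_t - w‖ = v̄ is equivalent to a quadratic equation in t_τ,

‖ξ_τ - t_τ w‖^2 = v̄^2 t_τ^2 ⟺ (v̄^2 - w^Tw) t_τ^2 + 2(ξ_τ^Tw) t_τ - ξ_τ^Tξ_τ = 0,

whose positive root is exactly f(ξ,ξ_τ) above.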
Among these paths ξ, we need to find one with minimal flight duration T(ξ), since that is essentially proportional to fuel consumption <cit.>. This classic of optimal control is known as Zermelo's navigation problem <cit.>.
Since the flight duration T as defined in (<ref>) is based on a time reparametrization from actual flight time t∈[0,T] to pseudo-time τ∈(0,1) according to the actual flight trajectory x(t) = ξ(τ(t)) such that ‖ x_t(t)-w(x(t))‖ = v̄, the actual parametrization of ξ in terms of pseudo-time τ is irrelevant for the value of T and we can restrict the optimization to finding the representative with constant ground speed. Hence, we will subsequently consider the constrained minimization problem

min_ξ∈ X, L∈ℝ T(ξ), s.t. ‖ξ_τ(τ)‖^2 = L^2 for a.a. τ∈(0,1).
§ NUMERICAL RESULTS
In the following we explore three key aspects of Free Flight Optimization numerically: the gap between the empirical convergence radius and its theoretical lower bound, the suitability of the norm used in previous works for assessing convergence accurately, and an algorithmic approach for increasing the convergence radius.
These points will be studied on a benchmark example of crossing a wind field consisting of 15 regularly aligned disjoint vortices from x_O = (0,0) to x_D = (1,0) at an airspeed of v̄ = 1, see <Ref> a). The wind speed attains its maximum at the center of a vortex with ‖ w‖_L^∞≤ 1/2 and decreases monotonically to 0 towards the boundary. A formal definition is given in <cit.>. In a wind field with n vortices, there may be roughly 𝒪(2^n) locally optimal routes, posing a challenging problem for global optimization; moreover, a wind field setting of this complexity will rarely if ever be encountered in practice.
§.§ Size of the Convergence Radius
It has been shown in <cit.> that there is a positive convergence radius R_C such that the Newton-KKT method initialized with ξ converges to a minimizer ξ^⋆ if

‖ξ-ξ^⋆‖ + ‖(ξ-ξ^⋆)_τ‖ + |L-L^⋆| + ‖λ-λ^⋆‖≤ R_C.

Since the constraint in (<ref>) is only weakly active, the Lagrange multiplier can directly be initialized with λ=λ^⋆=0 (see <cit.>). Moreover, L can reasonably be initialized with the path length of the candidate route. Hence we concentrate on the first two terms.
It can be shown that even under mild wind conditions, R_C≈ 10^-8 holds. Numerical experiments, however, reveal that the domain of convergence is actually much larger.
For the purpose of illustration we examine a two-dimensional affine subspace of the trajectory space

M := ξ^⋆ + ℝΔξ^ hf + ℝΔξ^ lf

anchored at the global optimum ξ^⋆ and spanned by a low- and a high-frequent deviation, both of the form

Δξ^f(τ) = n(τ) sin(k^fπτ), f∈{ hf, lf}

with k^ lf=1, k^ hf=30 and n(τ)∈ℝ^2 denoting a unit vector perpendicular to the optimal direction of flight ξ^⋆_τ(τ).
The norm of such a deviation reads

‖Δξ^f‖_W^1,∞(0,1) = ‖Δξ^f‖_L^∞ + ‖Δξ^f_τ‖_L^∞ = 1+k^fπ

and consequently

‖Δξ‖_W^1,∞(0,1) = ‖ a^ hfΔξ^ hf + a^ lfΔξ^ lf‖_W^1,∞(0,1) = |a^ hf| (1+k^ hfπ) + |a^ lf| (1+k^ lfπ).
From this subspace M, candidates ξ are sampled around the global optimum ξ^⋆ and used as starting points in order to solve the optimization problem (<ref>) via the Newton-KKT method as described in <cit.>.
<Ref> a) shows the global optimum in blue and the extremes of the sampled region in red and green, solid and dotted, respectively.
<Ref> b) shows whether the procedure converged back to the optimum (white) or not (gray) with the abscissa and ordinate indicating the Sobolev-norm of the high- and low-frequency deviation, respectively.
The total Sobolev-distance (<ref>) is indicated by dotted contour lines.
It is clearly visible that the convergence radius is consistently larger than 10^-1 – several orders of magnitude larger than the theoretically guaranteed 10^-8.
§.§ Relevance of the Error Terms
Throughout this and previous papers (e.g., <cit.>) two critical components were used to assess distances in the studied trajectory space:
distance error: ||Δξ||_L^∞ = |a^lf| + |a^hf|,
angular error: ||Δξ_τ||_L^∞ = |a^lf| k^lfπ+ |a^hf| k^hfπ.
Higher order derivatives do not affect the overall travel time (<ref>).
With the same norm, a low-frequent deviation introduces mostly distance error, while a deviation with high frequency results in significant angular error. This observation allows transforming each quadrant of <Ref> b) into the space of distance and angular error via

‖Δξ‖_L^∞ = 1/1+k^ lfπ‖Δξ^ lf‖_W^1,∞ + 1/1+k^ hfπ‖Δξ^ hf‖_W^1,∞,

‖Δξ_τ‖_L^∞ = k^ lfπ/1+k^ lfπ‖Δξ^ lf‖_W^1,∞ + k^ hfπ/1+k^ hfπ‖Δξ^ hf‖_W^1,∞,

as shown in <Ref> c).
Note that both deviations contribute to angular and distance errors. As a result, cones around the axes (depicted as light gray regions) cannot be represented using deviations of the specified form.
Both error terms are significant. A viable route can have a large distance error if it is far from the optimum (<Ref> a), red paths), but it should exhibit parallel behavior for a small angular error. On the other hand, if the candidate path zig-zags around the optimum, it will have a substantial angular error (<Ref> a), green paths), but it cannot deviate significantly from the optimal path, leading to a lower distance error.
In terms of distance error, the extent of the domain of convergence is largely determined by the wind field. At each vortex there are two locally optimal options; passing left or right.
At some point one will inevitably enter the convergence region of the next local optimum.
§.§ Algorithmic Improvement
Our approach focuses on candidate routes with a high angular error, as exemplified by the red route in <Ref>. This is of importance for the discrete-continuous algorithm, since graph-based shortest paths tend to zig-zag around an optimizer <cit.>.
It is intuitively clear that on a local scale, an optimal trajectory is nearly straight. We exploit this for reducing high-frequent errors by solving local trajectories on an overlapping decomposition of the time domain, thus realizing a nonlinear alternating Schwarz method <cit.>.
We select equidistant points along the initial route, such that the distance between consecutive points is smaller than the significant wind field structures. In the example, the route was obtained by imposing a large, high-frequency deviation as before and was divided into 11 segments, a number deliberately chosen not to divide the frequency. This initial route lies outside the convergence region (see <Ref>).
In the first step, we calculate the optimal routes on all subintervals (depicted in green). Next, utilizing this refined segment, we repeat the process with shifted waypoints (depicted in orange). A significant portion of the oscillation has been smoothed out, resulting in a notable reduction of the angular error. Using this refined segment as a starting point for optimizing the entire route leads us to the desired optimum (blue).
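In pseudocode, one sweep of this convergence-enhancing preprocessing could look as follows. This is a minimal sketch under our own naming: solve_local stands for a local solver (e.g., the Newton-KKT method restricted to a subinterval with fixed endpoints), and the route is a discretized list of waypoints; neither name is taken from the actual implementation.

def smooth_route(route, n_segments, solve_local, sweeps=2):
    """Reduce high-frequency error by re-optimizing overlapping sub-trajectories."""
    n = len(route) - 1
    for sweep in range(sweeps):
        # Shift segment boundaries by half a segment on every other sweep so that
        # former boundary points become interior points of the next sweep.
        offset = (sweep % 2) * (n // (2 * n_segments))
        bounds = [0] + [min(offset + k * n // n_segments, n - 1)
                        for k in range(1, n_segments)] + [n]
        new_route = route[:]
        for a, b in zip(bounds[:-1], bounds[1:]):
            # Endpoints route[a], route[b] stay fixed; the interior is re-optimized.
            new_route[a:b + 1] = solve_local(route[a], route[b], route[a:b + 1])
        route = new_route
    return route  # use as the initial iterate for the global Newton-KKT solve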
<Ref> reveals that this improvement enlarges the convergence region significantly.
§ CONCLUSION
The recently proposed Discrete-Continuous Hybrid Algorithm for Free Flight Trajectory Optimization relies on the existence of a sufficiently large domain of convergence around a global minimizer.
In our study, we have presented compelling evidence that this condition is satisfied even under highly challenging conditions and that the measure we have proposed for assessing it is appropriate.
Furthermore, we have introduced a domain decomposition method to expand the convergence region, which is expected to significantly enhance the practical performance of the hybrid approach.
|
http://arxiv.org/abs/2307.00394v1
|
20230701174530
|
Axion-Like Particles at future $e^- p$ collider
|
[
"Karabo Mosala",
"Pramod Sharma",
"Mukesh Kumar",
"Ashok Goyal"
] |
hep-ph
|
[
"hep-ph"
] |
§ INTRODUCTION
Axion-like particles (ALPs) are Standard Model (SM) singlet pseudo-scalar Nambu-Goldstone bosons originally proposed to solve the strong CP problem <cit.>. Their interaction with the SM particles arises from an explicit breaking of an approximate global Peccei-Quinn U(1)_PQ symmetry <cit.>, with couplings considered as free parameters. Subsequently, ALPs made their appearance in beyond the SM (BSM) scenarios, viz. composite models <cit.>, Grand Unification models <cit.>, extra-dimension models <cit.>, super-symmetric models <cit.>, string theories <cit.>, etc. The ALP mass (m_a) varies over a vast range (sub-eV to TeV), with the associated new physics scale f_a varying from electroweak to TeV and beyond. Light ALPs with masses in the sub-eV to MeV range can modify the Cosmic Microwave Background (CMB), Big Bang Nucleosynthesis (BBN), and the cooling and evolution of stars. Their coupling to SM particles within this mass range is severely constrained by astrophysical and cosmological observations <cit.>. Heavier ALPs in the MeV to TeV mass range, though unimportant from astrophysical and cosmological considerations, have been the subject of recent studies in the context of particle physics and as dark-matter portals connecting the dark matter with the visible matter <cit.>, and have been employed to explain the leptonic g-2 anomaly with some success <cit.>. The mass and interactions of these particles with the visible matter have been explored at high energy colliders like LEP, Tevatron, Belle-II and the CERN Large Hadron Collider (LHC) (see for example <cit.>). Unlike these colliders, the future e^+e^- colliders like the ILC <cit.>, FCC-ee <cit.>, CEPC <cit.>, the high energy muon collider <cit.> and the electron-hadron collider (LHeC), which collides the LHC protons with an electron beam <cit.>, are designed to have high luminosity while also providing a cleaner experimental environment to go beyond the LHC's precision ability, and are thus eminently suited to determine the ALP properties <cit.>. Constraints on a large range of the ALP parameter space were established for the future e^+e^- collider experiments <cit.> through photon-fusion production, yielding sensitivities on the ALP-γγ coupling for ALP masses in the 1 GeV to ∼600 GeV range. The possibility of detecting ALP production through electroweak massive vector-boson fusion (VBF) processes was recently investigated for the future muon collider for m_a of O(TeV) and beyond <cit.>, to study the WW, ZZ, γ Z and γγ coupling constraints.
In this work, we investigate the possibility of detecting ALP production via VBF processes at the future Large Hadron-electron Collider (LHeC) e^-p collider, focusing on deriving constraints on the possible coupling parameters g_γγ, g_WW, g_γ Z and g_ZZ. We base our study on the LHeC environment, which employs the 7 TeV proton beam of the LHC and electrons from an Energy Recovery Linac (ERL) being developed for the LHeC. The choice of an ERL energy of E_e = 60 (120) GeV with an available proton energy E_p = 7 TeV would provide a centre-of-mass energy of √(s)≈ 1.3 (1.8) TeV at the LHeC <cit.>.
This article is organised as follows: in <ref> the model with its effective Lagrangian and the analysis framework are explained, a preliminary estimation of ALP production as a function of m_a, coupling(s) and LHeC energies is explored in <ref>, and results using different observables are explained in <ref>. Comparisons of our findings with existing results are discussed in <ref>, and a summary with discussions follows in <ref>.
§ MODEL AND FRAMEWORK
The interactions of ALPs with gauge bosons and SM fermions occur via dimension-five operators, with the ALP mass considered independently of the respective coupling strengths <cit.>. Hence the effective interactions between the ALPs and the electroweak gauge bosons are represented by the effective Lagrangian <cit.>:
L_ eff = 1/2(∂_μ a)(∂^μ a) - 1/2 m_a^2 a^2 + g^2 C_WWa/f_a W^A_μνW̃^μν A + g^' 2 C_BBa/f_a B_μνB̃^μν,
where X_μν represents the field strength tensor for the SU(2)_L or U(1)_Y, X̃^μ^ν = 1/2ε^μναβ X_αβ with ε^0123 = 1 and X ∈{B,W}. The ALP field and mass are represented by a and m_a, respectively. After electroweak symmetry breaking we can write the interactions between the ALP and the electroweak gauge bosons (W^±, Z, γ) in terms of dimension-full couplings g_γγ, g_WW, g_Zγ and g_ZZ respectively as:
L_ eff⊃ e^2a/f_a g_γγ F_μνF̃^μν + 2e^2/c_w s_wa/f_a g_Zγ F_μνZ̃^μν +
e^2/c^2_w s^2_wa/f_a g_ZZ Z_μνZ̃^μν
+e^2/s^2_wa/f_a g_WW W_μνW̃^μν.
In terms of C_ij (i,j ≡γ, Z, W), the couplings g_ij are given by
.[ g_γγ = C_WW+C_BB, g_Zγ = c_w^2 C_WW-s_w^2 C_BB,; ; g_ZZ = c_w^4 C_WW + s_w^4 C_BB, g_WW = C_WW. ]},
where c_w and s_w are the cosine and sine of the weak mixing angle, respectively. For all studies in this work the scale parameter is fixed to f_a = 1 TeV.
Using the interactions defined in <ref>, the relevant decay widths of ALP are given by
Γ(a → W^+W^-) ≡Γ_WW = e^4/8 π f_a^2 s_θ_W^4| g_WW|^2 m_a^3 (1-4m_W^2/m_a^2)^3/2,
Γ(a →γγ) ≡Γ_γγ = e^4/4 π f_a^2| g_γγ|^2 m_a^3,
Γ(a → Z Z) ≡Γ_ZZ = e^4/4 π f_a^2 c_θ_W^4 s_θ_W^4| g_ZZ|^2 m_a^3 (1-4m_Z^2/m_a^2)^3/2,
Γ(a → Zγ) ≡Γ_Zγ = e^4/2 π f_a^2 c_θ_W^2 s_θ_W^2| g_Zγ|^2 m_a^3 (1-m_Z^2/m_a^2)^3.
Since Γ_ij is a function of the corresponding coupling and of the ALP mass, in this study we use a variable decay width to find the limits on g_ij as a function of m_a. In Figure <ref>, the branching ratios for the decay modes a → W^+W^-, γγ, ZZ, and Zγ are plotted as a function of the mass of the ALP, m_a, assuming g_ij = 1.
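For illustration, the partial widths in Eqs. (<ref>)-(<ref>) and the resulting branching ratios can be evaluated with a few lines of Python. This sketch assumes s_w^2 ≈ 0.231 and e^2 = 4πα with α≈ 1/137 (numerical inputs chosen by us) and is meant only to reproduce the qualitative behaviour of Figure <ref>.

import numpy as np

ALPHA = 1 / 137.036
E2 = 4 * np.pi * ALPHA     # e^2
SW2 = 0.231                # sin^2(theta_W)
CW2 = 1 - SW2
F_A = 1000.0               # f_a = 1 TeV, in GeV
M_W, M_Z = 80.38, 91.19    # gauge boson masses in GeV

def widths(m_a, g=1.0):
    """Partial widths (GeV) of a -> WW, gamma gamma, ZZ, Z gamma for g_ij = g."""
    pref = E2**2 * g**2 * m_a**3 / F_A**2
    gam = {"gamma gamma": pref / (4 * np.pi)}
    gam["WW"] = (pref * (1 - 4 * M_W**2 / m_a**2)**1.5 / (8 * np.pi * SW2**2)
                 if m_a > 2 * M_W else 0.0)
    gam["ZZ"] = (pref * (1 - 4 * M_Z**2 / m_a**2)**1.5 / (4 * np.pi * CW2**2 * SW2**2)
                 if m_a > 2 * M_Z else 0.0)
    gam["Z gamma"] = (pref * (1 - M_Z**2 / m_a**2)**3 / (2 * np.pi * CW2 * SW2)
                      if m_a > M_Z else 0.0)
    return gam

gam = widths(300.0)  # m_a = 300 GeV
for mode, w in gam.items():
    print(f"BR(a -> {mode}) = {w / sum(gam.values()):.3f}")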
Further, we use the following formula to find local significance and discovery limits for a given number of signal (S) and background (B) events at a particular luminosity L, considering the total statistical and systematic uncertainties δ_s as
N_ SD = S/√(S + B + (δ_s· S)^2 +(δ_s· B)^2),
where, in terms of the signal cross section σ(g_ij) and the background cross section σ_ SM, S = σ(g_ij)· L and B = σ_ SM· L, respectively.
Also to constrain the ALP-gauge coupling g_ij, we use a χ^2-analysis both at total cross-section and most sensitive differential-distribution level, where the χ^2 definition is given by
χ^2 = ∑_k=1^n(N_k(g_ij) - N_k^ SM/Δ N_k)^2.
In this case, N_k(g_ij) represents the number of signal events in the k^th bin of a distribution with n bins in total, while N_k^ SM is the corresponding background and Δ N_k is defined as:
Δ N_k = √(N_k^ SM(1+δ_s^2 N_k^ SM)).
For our results we consider δ_s = 5% for a given luminosity L, and L = 1 ab^-1.
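Both statistical measures are straightforward to implement; a compact sketch (with the event counts treated as plain inputs, independent of our actual analysis code) reads:

import numpy as np

def significance(s, b, delta_s=0.05):
    """Local significance N_SD for S signal and B background events."""
    return s / np.sqrt(s + b + (delta_s * s)**2 + (delta_s * b)**2)

def chi2(n_sig, n_sm, delta_s=0.05):
    """Binned chi^2 between signal N_k(g_ij) and SM background N_k^SM."""
    n_sig, n_sm = np.asarray(n_sig, float), np.asarray(n_sm, float)
    dn = np.sqrt(n_sm * (1 + delta_s**2 * n_sm))  # Delta N_k per bin
    return float(np.sum(((n_sig - n_sm) / dn) ** 2))

# Example: L = 1 ab^-1 = 1000 fb^-1; cross sections in fb give event counts
lumi = 1000.0
print(significance(0.5 * lumi, 6.0 * lumi))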
§ ALP PRODUCTION IN E^-P COLLIDER
As mentioned in <ref>, we are interested in probing the ALP-gauge couplings via direct ALP production through VBF processes at an e^-p collider. In such an environment, using the interactions defined in <ref>, the direct production of an ALP can occur in the charged-current (CC) mode through W-boson fusion, as shown in <ref> [WW], and in the neutral-current (NC) mode through γγ (<ref>), ZZ (<ref> [ZZ]) and Zγ fusion (<ref> [Zγ]), where in particular we have considered the decay of the ALP, a→γγ, for a given m_a. For all results the branching ratio of the ALP decay to di-photon is taken as B_a→γγ = 1. Here, we also note that the Zγ-channel cannot be separated from the γγ-channel because B_a→γγ = 1 requires g_γγ≠ 0, though we can choose g_ZZ = 0. Therefore, the notation Zγ will refer to the effect of considering the channels shown in <ref>, <ref>, and their interference.
To explore the goals of this study, we first build a model file for the interactions defined in <ref> using the package FeynRules <cit.>. For the generation of events, we use the Monte Carlo event generator package <cit.>. Further
fragmentation and hadronization are done with a customized Pythia-PGS <cit.>, the detector-level simulation is performed with reasonably chosen parameters using Delphes <cit.>, and jets are clustered using FastJet <cit.> with the anti-k_T algorithm <cit.>, using the distance parameter R = 0.4, as explained in Ref. <cit.>. The factorization and normalization scales are set to be dynamic scales for both the signal and the potential backgrounds. For this study, the e^- polarization is assumed to be -80%. The initial requirements on the transverse momentum (p_T) and rapidity (η) of jets, leptons and photons are nominal: p_T^j, e^-, γ > 10 GeV, |η_j, e^-, γ| < 5, and no cuts on missing energy.
With these setups, the estimated cross-section of ALP production through (a) the CC process e^-p →ν_e a j and (b) the NC process e^-p → e^- a j, with the further decay a →γγ, in the mass range 5 ≤ m_a ≤ 300 GeV is shown in <ref> for a benchmark electron energy E_e = 60 GeV and proton energy E_p = 7 TeV at the LHeC. Also, for a fixed m_a = 50 GeV, the cross-section as a function of 60 ≤ E_e ≤ 300 GeV is shown in <ref>. Note that for the WW, γγ and ZZ-fusion the corresponding coupling value is taken as g_kk=1 (k = W, γ, Z) keeping the others 0, while for Zγ-fusion g_Zγ = g_γγ = 1 keeping g_WW = 0 = g_ZZ.
In <ref> below, we focus on background generation and the analysis procedures to estimate the bounds on the couplings g_ij using the methods described in <ref>. We construct observables that are sensitive to the presence of these couplings and use them to establish the limits.
§ ANALYSIS, OBSERVABLE AND RESULTS
To generate backgrounds, we adopt setups similar to those described earlier. This includes specifying the center-of-mass energy, beam polarization, and luminosity, as well as considering the relevant physics processes in the CC, NC and photo-production modes and their corresponding cross sections. For E_e = 60 GeV and E_p = 7 TeV, the estimated total cross-section of the background is approximately less than 6 fb. To optimise the signal events over the leading backgrounds, additional cuts on the leading and sub-leading jets, photons and leptons are applied depending on the channel in this study:
* for all channels: p_T^j, e^-, γ > 20 GeV,
* WW: |η_γ| < 3, |η_j| < 4,
* γγ: -2 < η_γ < 3, -2 < η_j < 5, -2.5 < η_e^- < 2,
* ZZ: |η_γ| < 3, |η_j| < 5, |η_e^-| < 5, and
* Zγ: |η_γ| < 3, |η_j| < 5, -2.5 < η_e^- < 1,
in addition to a cut on the invariant di-photon mass, m_γγ, within a window of ∼± 5 GeV around the corresponding signal mass m_a. This cut significantly reduces the backgrounds in comparison to the signal events; a minimal sketch of this selection is given below. Using these optimized events we then estimate the significance and evaluate the projected sensitivities with the formula in <ref>. In <ref> we show the discovery limit on the coupling g_ij as a function of m_a, fixing N_ SD = 5. These limits are a direct reflection of the dependence of the cross-section (and branching ratio) on m_a, as shown in <ref> (<ref>).
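For concreteness, the following Python sketch applies the γγ-channel selection listed above to a toy set of reconstructed events; the event-record layout and all numerical values are illustrative assumptions of ours, not output of the Delphes simulation.

def select_gamma_gamma(events, m_a, window=5.0):
    # Toy gamma-gamma-channel selection: p_T, eta and di-photon mass window cuts.
    selected = []
    for ev in events:
        if min(ev["pt_gam1"], ev["pt_gam2"], ev["pt_jet"], ev["pt_ele"]) <= 20.0:
            continue  # p_T > 20 GeV for all tagged objects
        if not (-2.0 < ev["eta_gam1"] < 3.0 and -2.0 < ev["eta_gam2"] < 3.0):
            continue  # photon eta window
        if not (-2.0 < ev["eta_jet"] < 5.0 and -2.5 < ev["eta_ele"] < 2.0):
            continue  # jet and electron eta windows
        if abs(ev["m_gamgam"] - m_a) > window:
            continue  # m_gamma_gamma within ~ +-5 GeV of the signal mass
        selected.append(ev)
    return selected

# A single toy event, for illustration only.
events = [{"pt_gam1": 42.0, "pt_gam2": 31.0, "pt_jet": 55.0, "pt_ele": 28.0,
           "eta_gam1": 0.7, "eta_gam2": -1.1, "eta_jet": 2.4, "eta_ele": 0.3,
           "m_gamgam": 49.2}]
print(len(select_gamma_gamma(events, m_a=50.0)))  # 1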
To improve these limits, we studied various possible observables by considering the differential distributions and combinations of the tagged final-state e^-, photons, and jets for the signal as well as the backgrounds. In <ref> we show the most sensitive normalized differential distributions. For the WW channel, ΔΦ_γ_1 j, the azimuthal angle between the planes of the final-state leading-p_T photon and the forward jet with respect to the beam direction, is shown in <ref>. The scattering angle θ_e of the final-state tagged e^- with respect to the beam direction is the most sensitive observable for the γγ, ZZ and Zγ channels, shown in <ref>, <ref> and <ref>, respectively. It is interesting to note that the signal events in the γγ (ZZ) channel lie towards higher (lower) θ_e due to the pure QED (V-A) structure of the photon-fermion (Z-fermion) couplings, while the behaviour is mixed in the Zγ channel; the shapes of the backgrounds follow from the selection of the different η-regions. We then perform a χ^2-analysis at both the cross-section (one-bin) and differential-distribution (multiple-bin) levels [A one-bin χ^2 analysis refers to the calculation of χ^2 for the total cross-section, where the entire distribution is treated as a single bin; all observed and expected values are combined and the χ^2 is computed from the overall yield. In a multiple-bin χ^2 analysis, the observed data are divided into bins according to the values of the kinematic observable, the expected theoretical distribution (the SM background) is divided into the corresponding bins, the χ^2 is calculated for each bin by comparing the observed and expected values while taking into account the uncertainties in the observed data, and the individual χ^2 values are summed to obtain the total. A multiple-bin analysis therefore captures the differential information in each bin separately, providing more detailed insight into the distribution across different kinematic regions, whereas a one-bin analysis provides an overall measure of the goodness-of-fit but does not account for variations within individual bins.] and apply <ref> to these observables to estimate the sensitivities of g_ij as a function of m_a.
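The practical difference between the one-bin and multiple-bin analyses can be sketched as follows in Python; the yields are placeholders and the χ^2 form is the same assumption as above.

import numpy as np

def delta_n(n_sm, delta_s=0.05):
    return np.sqrt(n_sm * (1.0 + delta_s**2 * n_sm))

def chi2(n_sig, n_sm, delta_s=0.05):
    return float(np.sum((np.asarray(n_sig) / delta_n(np.asarray(n_sm), delta_s)) ** 2))

# Placeholder binned theta_e distributions (events per bin).
background = np.array([300.0, 220.0, 150.0, 90.0, 40.0])
signal = np.array([2.0, 5.0, 12.0, 9.0, 3.0])

chi2_multiple_bin = chi2(signal, background)         # keeps the shape information
chi2_one_bin = chi2(signal.sum(), background.sum())  # total cross-section only
print(chi2_multiple_bin, chi2_one_bin)  # the shape information raises chi^2 here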
Note that the general structure of σ(g_ij) is given as
σ(g_ii) = g_ii^2 σ_ii×B_a→γγ;
σ (g_Zγ) = (g_γγ^2 σ_γγ + g_Zγ^2 σ_Zγ + g_γγg_Zγσ_ inf.) ×B_a→γγ.
<ref> provides the justification for the Mexican-hat shape of the χ^2 distribution, whose minimum value is χ^2_ min. = 0. In the case of a one-parameter analysis of g_ii (m_a), we set χ^2 ≡Δχ^2 = 4.0, corresponding to the 95% confidence level (C.L.). In <ref> and <ref>, we present the sensitivities of g_ii (m_a) using the one-bin and multiple-bin χ^2 analyses, respectively. From <ref>, it is evident that the limits on g_ii are significantly stronger than those obtained using <ref> (shown in <ref>). Moreover, the multiple-bin analysis of the differential distributions, shown in <ref>, outperforms the one-bin analysis (<ref>), specifically for the WW and γγ channels: considering multiple bins captures the shape information and thereby provides improved sensitivity in constraining g_ii(m_a) for these channels.
Since the multiple-bin analysis performs better in these scenarios, <ref> shows the limits for the Zγ channel in the g_γγ (m_a) - g_Zγ (m_a) plane for five selected values of m_a, using χ^2 ≡Δχ^2 = 6.18 (a two-parameter analysis at 95% C.L.) with this approach only. The shape of the limits is asymmetric with respect to g_γγ = g_Zγ≈ 0, where the limits also blow up. This asymmetry can be attributed to the presence of negative and positive interference effects. The region around g_γγ = g_Zγ≈ 0 can be understood from <ref>: there, all values of g_Zγ satisfy the χ^2 criterion below the 2σ level as g_γγ tends to zero. However, we exclude the region near g_γγ = 0 in order to fulfill the minimum requirement of an ALP signal for this study. The observed spikes in the contour are due to the negative contribution from the interference, which drives both couplings to large values; the presence of four spikes can be attributed to the even powers of the couplings in the cross-section <ref>. When g_γγ is non-zero, these spikes disappear and the contour becomes circular due to the negligible contribution from the interference.
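A two-parameter scan of this kind can be sketched as below. The fiducial cross-sections σ_γγ, σ_Zγ, σ_ inf., the efficiency and the background yield are placeholder values of ours, chosen only to illustrate extracting the Δχ^2 = 6.18 contour from the cross-section structure in <ref>.

import numpy as np

# Placeholder fiducial quantities: cross-sections in fb at unit coupling,
# overall efficiency, luminosity in fb^-1, and background after all cuts.
SIG_AA, SIG_ZA, SIG_INT = 2.0, 0.8, -0.6
EFF, LUMI, N_SM = 0.25, 1000.0, 40.0
DELTA_N = np.sqrt(N_SM * (1.0 + 0.05**2 * N_SM))  # delta_s = 5%

def n_signal(g_aa, g_za):
    # sigma = (g_aa^2 s_aa + g_za^2 s_za + g_aa g_za s_int) * B, with B_a->gg = 1.
    sigma = g_aa**2 * SIG_AA + g_za**2 * SIG_ZA + g_aa * g_za * SIG_INT
    return max(sigma, 0.0) * EFF * LUMI  # clip unphysical negative rates

grid = np.linspace(-1.0, 1.0, 201)
chi2 = np.array([[(n_signal(ga, gz) / DELTA_N) ** 2 for gz in grid] for ga in grid])
allowed = chi2 < 6.18  # 95% C.L. for a two-parameter analysis
print(allowed.sum(), "grid points inside the 95% C.L. region")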
§ COMPARISON OF G_IJ (M_A) TO EXISTING BOUNDS
In <ref>, a comparison of the coupling limits is presented in the |g_ij|-m_a plane at the 95% confidence level (C.L.), along with constraints from various experiments and theoretical predictions. It is important to note that a given measurement can depend on multiple ALP couplings; representing the corresponding bound in the 2D (|g_ij|, m_a) plane requires theoretical assumptions, which can vary significantly from constraint to constraint, and these differences should be kept in mind for a proper comparison. In <ref>, the bounds derived in the present work (shown as the brown line) represent the 95% C.L. limits, derived assuming full decay of the ALP to di-photons. In order to compare the limits on the Zγ channel, given in <ref> as a correlation between g_Zγ and g_γγ due to the interference, we show a standalone comparison of the constraints on g_Zγ with previous studies, keeping g_γγ = 1.
The limits on g_γγ and g_Zγ at higher ALP masses are obtained from collider studies, where the ALP decays resonantly either to hadrons or to photon pairs. The relevant processes are e^+ e^- →γ + hadrons, studied by the L3 experiment <cit.>, the leading bounds from photon pair production at the Large Hadron Collider (LHC) in proton-proton collisions <cit.> (labeled as “LHC” for measurements from ATLAS and CMS), and light-by-light scattering γγ→γγ measured in lead-lead (Pb-Pb) collisions <cit.> (labeled as “Light-by-light (LHC)”). The measurement of the total Z decay width at LEP provides constraints up to m_a ≲ m_Z <cit.>.
For ALP masses above 100 GeV, the dominant bounds come from resonant triboson searches <cit.>. Additionally, nonresonant searches in diboson production via gluon fusion at the LHC (labeled as “Nonresonant ggF") provide constraints on all four ALP interactions. Each nonresonant bound is extracted from a specific process gg → a^* → V_1 V_2 (V = γ, Z, W^±). The constraint on g_γγ is derived in ref. <cit.>, those on g_WW and g_γ Z in ref. <cit.>, and the constraint on g_ZZ in ref. <cit.>.
The bound obtained from the Z width measurement at LEP does not require additional assumptions. The bounds from nonresonant ggF, which include nonresonant gg → a^* → V_1 V_2 processes, scale with the inverse of the axion-gluon coupling (g_gg) and are completely lifted when the ALP coupling to gluons C_GG→ 0. In the figure, they are normalized to g_gg = 1 TeV^-1 (for details see ref. <cit.>).
Bounds labeled as “γ + had" and LHC (various) assume gluon dominance, i.e., g_gg≫ g_V_1 V_2, and in this limit, they are largely independent of C_GG (see ref. <cit.>). Among these, bounds on g_γγ labeled as “LHC" additionally assume negligible branching fractions to fermions and heavy electroweak bosons in the mass region where they are kinematically allowed. The limit from light-by-light scattering, shown in red, assumes B_a →γγ = 1, which corresponds to vanishing couplings to gluons and light fermions. The triboson constraints on g_WW and g_Z γ make use of the photophobic ALP scenario <cit.>.
Overall, the limits found in this work show better sensitivity for the three ALP couplings g_WW, g_ZZ and g_Zγ compared to the available studies in different collider scenarios, while the limits on g_γγ are competitive in a few cases.
§ SUMMARY AND DISCUSSIONS
In this article, we investigated the potential for the production of relatively high-mass Axion-Like Particles (ALPs) in an electron-proton (e^-p) environment. Specifically, we focused on the proposed energy of the Large Hadron-electron Collider (LHeC), with a center-of-mass energy of √(s)≈ 1.3 TeV and an integrated luminosity of L = 1 ab^-1. Although exploring masses beyond 200 GeV is less promising due to the limited cross-section achievable with the available energy and luminosity, we examined the limits on the coupling measurements as predictions for such masses based on our analysis procedure. These limits serve as approximate predictions that can be investigated further if the electron energy (E_e) is increased.
<ref> provides a comprehensive overview of the coupling limits in the |g_ij|-m_a plane, taking into account various experimental and theoretical constraints, and highlights the strengths and limitations of each measurement in constraining the ALP couplings. To capture the differential information in the distributions of kinematic observables, a multiple-bin χ^2 analysis is preferable to a one-bin analysis, and we observed correspondingly stronger limits. Moreover, for the considered range of m_a, the limits on g_WW, g_ZZ and g_Zγ at the LHeC are better than those from available studies in different collider scenarios, while the limits on g_γγ are competitive in a few scenarios.
By studying the possibilities of ALP production in the e^-p environment at the LHeC, we contribute to the understanding of ALP physics and provide insights into the potential for probing relatively higher masses and coupling strengths in future experiments.
PS would like to acknowledge the School of Physics at the University of the Witwatersrand, where the majority of this work was conducted during his visit; their support and resources were invaluable in carrying out this research. AG thanks SERB, G.O.I., under grant CRG/2018/004889.
Peccei:1977hh
R. D. Peccei and H. R. Quinn,
Phys. Rev. Lett. 38, 1440-1443 (1977)
Peccei:1977ur
R. D. Peccei and H. R. Quinn,
Phys. Rev. D 16, 1791-1797 (1977)
Weinberg:1977ma
S. Weinberg,
Phys. Rev. Lett. 40, 223-226 (1978)
Kivel:2022rzr
A. Kivel, J. Laux and F. Yu,
JHEP 03, 078 (2023)
[arXiv:2211.12155 [hep-ph]].
Kim:1984pt
J. E. Kim,
Phys. Rev. D 31, 1733 (1985)
Leder:1996py
G. Leder,
Nucl. Phys. B 497, 334-344 (1997)
[arXiv:hep-ph/9610552 [hep-ph]].
Feindt:1991rb
M. Feindt,
CERN-PPE-91-90.
Georgi:1981pu
H. M. Georgi, L. J. Hall and M. B. Wise,
Nucl. Phys. B 192, 409-416 (1981)
Rubakov:1997vp
V. A. Rubakov,
JETP Lett. 65, 621-624 (1997)
[arXiv:hep-ph/9703409 [hep-ph]].
Dienes:1999gw
K. R. Dienes, E. Dudas and T. Gherghetta,
Phys. Rev. D 62, 105023 (2000)
[arXiv:hep-ph/9912455 [hep-ph]].
Legero_2003
T. Legero, T. Wilk, A. Kuhn and G. Rempe,
Appl. Phys. B 77 (2003)
Lillard:2017cwx
B. Lillard and T. M. P. Tait,
JHEP 11, 005 (2017)
[arXiv:1707.04261 [hep-ph]].
Svrcek:2006yi
P. Svrcek and E. Witten,
JHEP 06, 051 (2006)
[arXiv:hep-th/0605206 [hep-th]].
PhysRevD.104.035038
K. Hirao and T. Moroi,
Phys. Rev. D 104, 035038 (2021)
PhysRevD.104.103521
D. V. Nguyen, D. Sarnaaik, K. K. Boddy, E. O. Nadler and V. Gluscevic,
Phys. Rev. D 104, 103521 (2021)
Green:2017ybv
D. Green and S. Rajendran,
JHEP 10, 013 (2017)
[arXiv:1701.08750 [hep-ph]].
Baumann:2016wac
D. Baumann, D. Green and B. Wallisch,
Phys. Rev. Lett. 117, no.17, 171301 (2016)
[arXiv:1604.08614 [astro-ph.CO]].
Krnjaic:2019dzc
G. Krnjaic and S. D. McDermott,
Phys. Rev. D 101, no.12, 123022 (2020)
[arXiv:1908.00007 [hep-ph]].
Raffelt:2006cw
G. G. Raffelt,
Lect. Notes Phys. 741, 51-71 (2008)
[arXiv:hep-ph/0611350 [hep-ph]].
Lee:2018lcj
J. S. Lee,
[arXiv:1808.10136 [hep-ph]].
Chang:2018rso
J. H. Chang, R. Essig and S. D. McDermott,
JHEP 09, 051 (2018)
[arXiv:1803.00993 [hep-ph]].
Darme:2020sjf
L. Darmé, F. Giacchino, E. Nardi and M. Raggi,
JHEP 06, 009 (2021)
[arXiv:2012.07894 [hep-ph]].
Agrawal:2021dbo
P. Agrawal, M. Bauer, J. Beacham, A. Berlin, A. Boyarsky, S. Cebrian, X. Cid-Vidal, D. d'Enterria, A. De Roeck and M. Drewes, et al.
Eur. Phys. J. C 81, no.11, 1015 (2021)
[arXiv:2102.12143 [hep-ph]].
Long:2015pua
B. Long,
Phys. Rev. D 94, no.1, 011503 (2016)
[arXiv:1508.06084 [hep-ph]].
BESIII:2020pxp
M. Ablikim et al. [BESIII],
Phys. Rev. Lett. 124, no.24, 241803 (2020)
[arXiv:2004.13910 [hep-ex]].
dEnterria:2021ljz
D. d'Enterria,
[arXiv:2102.08971 [hep-ex]].
Zhang:2021sio
H. Y. Zhang, C. X. Yue, Y. C. Guo and S. Yang,
Phys. Rev. D 104, no.9, 096008 (2021)
[arXiv:2103.05218 [hep-ph]].
TLEPDesignStudyWorkingGroup:2013myl
M. Bicer et al. [TLEP Design Study Working Group],
JHEP 01, 164 (2014)
[arXiv:1308.6176 [hep-ex]].
Abada:2019
A. Abada, M. Abbrescia, S. S. AbdusSalam, et al.,
Eur. Phys. J. Spec. Top. 228 (2019)
Bozovic-Jelisavcic:2018zhc
I. Bozovic-Jelisavcic, R. Manqi, Z. Hongbo, Z. Kai and H. Suen,
PoS ICHEP2018, 429 (2019)
CEPCStudyGroup:2018ghi
J. B. Guimarães da Costa et al. [CEPC Study Group],
[arXiv:1811.10545 [hep-ex]].
An:2018dwb
F. An, Y. Bai, C. Chen, X. Chen, Z. Chen, J. Guimaraes da Costa, Z. Cui, Y. Fang, C. Fu and J. Gao, et al.
Chin. Phys. C 43, no.4, 043002 (2019)
[arXiv:1810.09037 [hep-ex]].
MuonCollider:2022nsa
D. Stratakis et al. [Muon Collider],
[arXiv:2203.08033 [physics.acc-ph]].
LHeCStudyGroup:2012zhm
J. L. Abelleira Fernandez et al. [LHeC Study Group],
J. Phys. G 39, 075001 (2012)
[arXiv:1206.2913 [physics.acc-ph]].
LHeCStudyGroup:2012wne
J. L. Abelleira Fernandez et al. [LHeC Study Group],
[arXiv:1211.5102 [hep-ex]].
Han:2022mzp
T. Han, T. Li and X. Wang,
[arXiv:2203.05484 [hep-ph]].
Jaeckel:2012yz
J. Jaeckel, M. Jankowiak and M. Spannowsky,
Phys. Dark Univ. 2, 111-117 (2013)
[arXiv:1212.3620 [hep-ph]].
Baldenegro:2018hng
C. Baldenegro, S. Fichet, G. von Gersdorff and C. Royon,
JHEP 06, 131 (2018)
[arXiv:1803.10835 [hep-ph]].
Florez:2021zoo
A. Flórez, A. Gurrola, W. Johns, P. Sheldon, E. Sheridan, K. Sinha and B. Soubasis,
Phys. Rev. D 103, no.9, 095001 (2021)
[arXiv:2101.11119 [hep-ph]].
Marciano:2016yhf
W. J. Marciano, A. Masiero, P. Paradisi and M. Passera,
Phys. Rev. D 94, no.11, 115033 (2016)
[arXiv:1607.01022 [hep-ph]].
Inan:2020kif
S. C. İnan and A. V. Kisselev,
Chin. Phys. C 45, no.4, 043109 (2021)
[arXiv:2007.01693 [hep-ph]].
Buttazzo:2018qqp
D. Buttazzo, D. Redigolo, F. Sala and A. Tesi,
JHEP 11, 144 (2018)
[arXiv:1807.04743 [hep-ph]].
Gavela:2019cmq
M. B. Gavela, J. M. No, V. Sanz and J. F. de Trocóniz,
Phys. Rev. Lett. 124, no.5, 051802 (2020)
[arXiv:1905.12953 [hep-ph]].
Inan:2020aal
S. C. İnan and A. V. Kisselev,
JHEP 06, 183 (2020)
[arXiv:2003.01978 [hep-ph]].
Bruening:2013bga
O. Bruening and M. Klein,
Mod. Phys. Lett. A 28, no.16, 1330011 (2013)
[arXiv:1305.2090 [physics.acc-ph]].
Brivio:2017ije
I. Brivio, M. B. Gavela, L. Merlo, K. Mimasu, J. M. No, R. del Rey and V. Sanz,
Eur. Phys. J. C 77, no.8, 572 (2017)
[arXiv:1701.05379 [hep-ph]].
Yue:2021iiu
C. X. Yue, H. Y. Zhang and H. Wang,
Eur. Phys. J. C 82, no.1, 88 (2022)
[arXiv:2112.11604 [hep-ph]].
Alloul:2013bka
A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks,
Comput. Phys. Commun. 185, 2250 (2014).
Alwall:2011uj
J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer,
JHEP 1106, 128 (2011).
Sjostrand:2006za
T. Sjostrand, S. Mrenna and P. Z. Skands,
JHEP 05, 026 (2006)
[arXiv:hep-ph/0603175 [hep-ph]].
deFavereau:2013fsa
J. de Favereau et al. [DELPHES 3],
JHEP 02, 057 (2014)
[arXiv:1307.6346 [hep-ex]].
Cacciari:2011ma
M. Cacciari, G. P. Salam and G. Soyez,
Eur. Phys. J. C 72, 1896 (2012)
[arXiv:1111.6097 [hep-ph]].
Cacciari:2008gp
M. Cacciari, G. P. Salam and G. Soyez,
JHEP 04, 063 (2008)
[arXiv:0802.1189 [hep-ph]].
Kumar:2015kca
M. Kumar, X. Ruan, R. Islam, A. S. Cornell, M. Klein, U. Klein and B. Mellado,
Phys. Lett. B 764, 247-253 (2017)
[arXiv:1509.04016 [hep-ph]].
L3:1992kcg
O. Adriani et al. [L3],
Phys. Lett. B 292 (1992), 472-484
Mariotti:2017vtv
A. Mariotti, D. Redigolo, F. Sala and K. Tobioka,
Phys. Lett. B 783 (2018), 13-18
[arXiv:1710.01743 [hep-ph]].
CMS:2018erd
A. M. Sirunyan et al. [CMS],
Phys. Lett. B 797 (2019), 134826
[arXiv:1810.04602 [hep-ex]].
ATLAS:2020hii
G. Aad et al. [ATLAS],
JHEP 03 (2021), 243
[erratum: JHEP 11 (2021), 050]
[arXiv:2008.05355 [hep-ex]].
Craig:2018kne
N. Craig, A. Hook and S. Kasko,
JHEP 09 (2018), 028
[arXiv:1805.06538 [hep-ph]].
Carra:2021ycg
S. Carra, V. Goumarre, R. Gupta, S. Heim, B. Heinemann, J. Kuechler, F. Meloni, P. Quilez and Y. C. Yap,
Phys. Rev. D 104, no.9, 092005 (2021)
[arXiv:2106.10085 [hep-ex]].
CMS:2021xor
A. Tumasyan et al. [CMS],
JHEP 04, 087 (2022)
[arXiv:2111.13669 [hep-ex]].
Bonilla:2022pxu
J. Bonilla, I. Brivio, J. Machado-Rodríguez and J. F. de Trocóniz,
JHEP 06, 113 (2022)
[arXiv:2202.03450 [hep-ph]].
Alonso-Alvarez:2018irt
G. Alonso-Álvarez, M. B. Gavela and P. Quilez,
Eur. Phys. J. C 79 (2019) no.3, 223
[arXiv:1811.05466 [hep-ph]].
|
http://arxiv.org/abs/2307.01969v1
|
20230705004040
|
Multimodal Prompt Learning for Product Title Generation with Extremely Limited Labels
|
[
"Bang Yang",
"Fenglin Liu",
"Zheng Li",
"Qingyu Yin",
"Chenyu You",
"Bing Yin",
"Yuexian Zou"
] |
cs.CV
|
[
"cs.CV"
] |
Generating an informative and attractive title for a product is a crucial task for e-commerce. Most existing works follow standard multimodal natural language generation approaches, e.g., image captioning, and employ large-scale human-labelled datasets to train desirable models. However, for novel products, especially those in a different domain, little labelled data is available. In this paper, we propose a prompt-based approach, i.e., the Multimodal Prompt Learning framework, to accurately and efficiently generate titles for novel products with limited labels. We observe that the core challenges of novel product title generation are the understanding of novel product characteristics and the generation of titles in a novel writing style. To this end, we build a set of multimodal prompts from different modalities to preserve the corresponding characteristics and writing styles of novel products. As a result, with extremely limited labels for training, the proposed method can retrieve the multimodal prompts to generate desirable titles for novel products. The experiments and analyses are conducted on five novel product categories under both the in-domain and out-of-domain experimental settings. The results show that, with only 1% of downstream labelled data for training, our proposed approach achieves the best few-shot results and even achieves competitive results with fully-supervised methods trained on 100% of training data; with the full labelled data for training, our method achieves state-of-the-art results.
§ INTRODUCTION
Product title generation aims to comprehend the content of a given product provided by merchants, which may come in various forms such as an input product image and a set of attributes, and then automatically generate an appealing and informative title.
The generated title should contain essential product characteristics, along with the product details, e.g., brand name, category, style, size, material, and colour <cit.>.
Therefore, a desirable title can highlight the characteristics and advantages of the product, leading to time savings for consumers, enhancing their overall shopping experience, and ultimately increasing product sales.
Admittedly, in E-commerce, the ability to perform product title generation automatically offers the possibility of relieving merchants from the time-consuming analysis of complex product details and writing concise and appealing titles; and alerting merchants of important product characteristics and advantages <cit.>.
In general, the task of product title generation can be defined as a data-to-text problem.
Following existing efforts on data-to-text tasks <cit.>, Figure <ref>(a) shows the conventional product title generation approach: the encoder-decoder framework.
The image encoder and attribute encoder respectively transform the product image and product attributes into visual and attribute representations, which the text decoder subsequently decodes into a product title.
Such encoder-decoder-based methods have achieved great success in advancing the state-of-the-art of various data-to-text tasks, e.g., image captioning <cit.>, multimodal machine translation <cit.>, and video captioning <cit.>.
However, these methods rely on a large volume of annotated data, which is particularly time-consuming to collect.
This issue is especially severe in the E-commerce title generation scenario, where products from different categories always contain category-specific attributes. Therefore, the product title generation model trained on existing products cannot be directly used on novel products, such as with new categories or new designs. Nevertheless, it is difficult to collect and label sufficient training data in a timely manner, which prevents the rapid deployment of such encoder-decoder models online.
As shown in Figure <ref>(b), we propose the Multimodal Prompt Learning (MPL) framework, which deals with the situation where the training data is scarce.
In detail, we observe that novel product titles involve different domain product characteristics (e.g., category-specific attributes) and different writing styles; directly adopting a model, or transferring a model pre-trained on existing available product data, to novel product data will significantly degrade performance, especially when the labelled data (i.e., image-attribute-title pairs) is insufficient in quantity <cit.>.
To this end, we first construct a set of multimodal prompts from different modalities, i.e., visual prompts, attribute prompts, and language prompts.
During training, given the limited data of novel products (i.e., Image I - Attribute A - Title T), to make full use of it, MPL introduces the unimodal prompt training to enable the different prompts to preserve the corresponding domain characteristics and the writing styles of novel products from different modalities/perspectives.
In implementations,
(i) we introduce the visual prompts 𝒫_I to train the model by generating the title T in the I →𝒫_I → T pipeline;
(ii) we introduce the attributes prompts 𝒫_A to train the model in the A →𝒫_A → T pipeline;
(iii) we introduce the textual language prompts 𝒫_T to train the model by reconstructing the title T in the T →𝒫_T → T auto-encoding pipeline.
It is worth noting that the auto-encoding pipeline aims to reconstruct the input sentence itself; therefore, it is straightforward for the model to be trained <cit.> to learn the necessary domain characteristics and writing styles of novel products from a small amount of data.
Besides, the unsupervised auto-encoding process provides opportunities for our model to be further improved by incorporating more unlabelled text-only data <cit.>.
At last, MPL introduces multimodal prompt training to learn to generate accurate novel product titles with the help of learned multimodal prompts.
In the implementation, we first introduce a Cycle Alignment Network to highlight and capture the important characteristics from multiple modalities by cycle aligning three types of prompts; then take the input images I and attributes A of novel products as queries to retrieve the learned domain characteristics in the aligned prompts; and finally rely on the learned writing styles in the text decoder to generate the titles for the novel products.
In this way, the proposed MPL framework can accurately and efficiently generate novel product titles with limited training data by 1) introducing multimodal prompts to learn domain characteristics and writing styles of novel products;
2) learning to accurately highlight the product characteristics and advantages across multiple modalities.
It enables our approach to be rapidly well-adapted to the novel product domain, helping sellers save time in deploying new products, optimizing consumers' consumption experience, and thus boosting sales.
The experiments and analyses on a large-scale dataset, i.e., Amazon Product Dataset <cit.>, across five novel product categories prove the effectiveness of our approach.
Overall, the contributions are as follows:
* We propose the Multimodal Prompt Learning (MPL) framework to generate few-shot novel product titles, where the training data in the novel product domain is scarce.
* Our MPL framework first introduces multiple types of prompts to learn the domain characteristics and writing styles of novel products, and then learns to generate accurate final titles by highlighting and capturing the important characteristics from multiple modalities.
* Our experiments on five novel products prove the effectiveness of our approach, which generates desirable product titles for novel products with only 1% of the training data otherwise required by previous methods, and significantly outperforms state-of-the-art results with the full training data.
§ RELATED WORK
The related works are discussed from 1) Product Description and 2) Few-shot Learning.
§.§ Product Description
Generating the product titles to describe the given products is similar to the multimodal language generation tasks, e.g., image captioning <cit.> and multimodal machine translation <cit.>.
To perform multimodal language generation tasks, a large number of encoder-decoder-based models have been proposed <cit.>,
in which a CNN <cit.> and an LSTM/Transformer <cit.> are used as the image encoder and text encoder to encode the input images and texts, and an LSTM <cit.> or a Transformer <cit.> is used as the text decoder to generate the final sentences.
Inspired by the great success of an encoder-decoder framework in multimodal language generation tasks, existing efforts on product description have proposed a wide variety of encoder-decoder based frameworks <cit.> to describe given products.
However, these existing models are trained on large-scale datasets, while the data that can be collected for novel products, e.g., novel categories and novel designs, to train such models is typically very limited.
To this end, we propose multimodal prompt learning to relax the reliance on the training dataset for the few-shot novel product description - with the goal of quick deployment of new products.
§.§ Few-shot Learning
Recently, few-shot learning <cit.> has received growing research interest across many AI domains <cit.>. Inspired by the success of few-shot learning, several works <cit.> explored such an approach for the domain of E-commerce.
However, most focus on unimodal tasks, either on the graph data (e.g., node classification, recommendation) <cit.>, or on the text data (e.g., sentiment analysis and recommendation) <cit.>, or on the image data (e.g., image classification) <cit.>.
As a multimodal task incorporating disparities between the visual and the textual modalities <cit.>, few-shot product title generation is far more challenging.
To prove our hypothesis, we re-implement existing few-shot learning methods for novel product title generation, demonstrating with our experiments that our approach significantly outperforms existing methods.
§ APPROACH
In this section, we will introduce the proposed Multimodal Prompt Learning (MPL) method in detail.
§.§ Formulation
Given the basic product information, i.e., a product image I and product attributes A, the goal of product title generation is to generate an accurate and concise product title T={w_1,w_2,…,w_N} consisting of N words.
Current state-of-the-art methods usually consist of an image encoder and an attribute encoder to extract the image representations R_I and attribute representations R_A, and a text decoder to generate the target title T, which is formulated as:
Image Encoder: I → R_I;  Attribute Encoder: A → R_A;  Text Decoder: {R_I, R_A} → T.
Existing works rely on annotated data, i.e., image-attribute-title pairs, to train the model by minimizing a supervised training loss, e.g., cross-entropy loss.
However, for many novel products, only a small amount of data is available.
In this case, we have to collect sufficient data to train the model, while collecting and labelling data is particularly labour-intensive and expensive.
As a result, insufficient training data poses a great challenge for building models to describe novel products.
To this end, we propose the MPL generation framework to generate accurate and desirable titles when encountering a novel product.
MPL includes two components: Unimodal Prompt Training (UPT) and Multimodal Prompt Training (MPT), where the former introduces three types of prompts (visual prompts 𝒫_I, attribute prompts 𝒫_A, and textual language prompts 𝒫_T), and the latter includes a cycle alignment network. Our proposed framework can be formulated as:
UPT: Visual Prompts: I → 𝒫_I → T;  Attribute Prompts: A → 𝒫_A → T;  Language Prompts: T → 𝒫_T → T.
MPT: Cycle Alignment: {𝒫_I, 𝒫_A, 𝒫_T} → 𝒫̂;  Aligned Prompts: {I, A} → 𝒫̂ → T.
The prompts across different modalities are used in the UPT to learn the novel product domain characteristics from the limited available data; they are then used by the cycle alignment network to highlight and capture the important characteristics 𝒫̂, which are retrieved via the image and attributes to learn to generate the novel product titles T in the MPT.
We adopt the ViT <cit.> from CLIP <cit.> as the image encoder and the BERT <cit.> from CLIP <cit.> as the attribute/text encoder.
For the text decoder, we adopt the Transformer-BASE <cit.>.
In particular, CLIP and Transformer have shown great success in bridging/aligning multi-modalities <cit.> and image-based natural language generation <cit.>, respectively.
During inference, we directly follow the {I, A}→𝒫̂→ T pipeline to generate final novel product titles.
§.§ Multimodal Prompt Learning
When encountering a new product, the deep learning model usually suffers from significant performance degradation <cit.>, which is caused by the new domain characteristics and new writing styles of the novel product.
Therefore, to efficiently train and deploy the data-driven deep learning models on a few samples of novel products, we propose the Multimodal Prompt Learning framework, consisting of a Unimodal Prompt Training module and a Multimodal Prompt Training module.
§.§.§ Unimodal Prompt Training
The module introduces visual prompts, attribute prompts, and textual language prompts to learn the novel product domain characteristics and the writing styles.
We first acquire the representations of image R_I, attribute R_A, and title R_T.
Then, we build three sets of trainable soft prompts <cit.>:
visual prompts 𝒫_I, attribute prompts 𝒫_A, and textual language prompts 𝒫_T.
The dimensions of different prompts are all N_P× d, where N_P denotes the total number of soft prompts, which are used to learn and store the new characteristics of the novel product through our method, defined as follows:
𝒫̂_I = [𝒫_I;R_I],
𝒫̂_A = [𝒫_A;R_A],
𝒫̂_T = [𝒫_T;R_T]
[·;·] denotes the concatenation operation.
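As an illustration, a minimal PyTorch sketch of the trainable soft prompts and the prefix concatenation [𝒫;R] might look as follows; the module name, shapes and initialization are our own assumptions rather than the authors' released implementation.

import torch
import torch.nn as nn

class SoftPrompts(nn.Module):
    # Trainable soft prompts P of shape (N_P, d), prepended to encoder representations.
    def __init__(self, n_prompts=16, d_model=512):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)

    def forward(self, reps):
        # reps: (batch, seq_len, d) representations R from an encoder.
        batch = reps.size(0)
        p = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, reps], dim=1)  # [P; R] of shape (batch, N_P + seq_len, d)

visual_prompts = SoftPrompts()
r_i = torch.randn(4, 50, 512)   # e.g. ViT patch features
prefixed = visual_prompts(r_i)  # fed to the text decoder as a prefix
print(prefixed.shape)           # torch.Size([4, 66, 512])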
Then, the prompts of images, attributes, and titles are directly inputted to the decoder as prefixes to train the model by generating (i.e., reconstructing) the titles.
Given the ground truth T={w_1,w_2,…,w_N}, we train the model by minimizing the widely-used natural language generation loss, i.e., cross-entropy loss, defined as follows:
L^I_XE =-∑_t=1^Nlog(p(w_t| w_1: t-1; 𝒫̂_I, I))
L^A_XE =-∑_t=1^Nlog(p(w_t| w_1: t-1; 𝒫̂_A, A))
L^T_XE =-∑_t=1^Nlog(p(w_t| w_1: t-1; 𝒫̂_T, T))
Finally, by combining the L^I_XE, L^A_XE, and L^T_XE, the full training objective of the Unimodal Prompt Training process is:
L_full= λ_1 L^I_XE + λ_2 L^A_XE + λ_3 L^T_XE
where λ_1,2,3∈ [0,1] are hyperparameters that control the relative weighting.
We find that our approach achieves results competitive with the state-of-the-art models using only 1% of the training data when setting λ_1 = λ_2 = λ_3 = 1; we therefore do not explore other settings.
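Under the assumption that each pipeline is trained with standard teacher-forced cross-entropy, the combined objective can be sketched as follows; the helper names and tensor layout are hypothetical.

import torch.nn.functional as F

def xe_loss(logits, title_ids, pad_id=0):
    # Teacher-forced cross-entropy over title tokens w_1..w_N.
    # logits: (batch, N, vocab); title_ids: (batch, N).
    return F.cross_entropy(logits.transpose(1, 2), title_ids, ignore_index=pad_id)

def upt_full_loss(logits_i, logits_a, logits_t, title_ids, lambdas=(1.0, 1.0, 1.0)):
    # L_full = lambda_1 L^I_XE + lambda_2 L^A_XE + lambda_3 L^T_XE.
    l1, l2, l3 = lambdas
    return (l1 * xe_loss(logits_i, title_ids)
            + l2 * xe_loss(logits_a, title_ids)
            + l3 * xe_loss(logits_t, title_ids))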
Through the above operation, our Unimodal Prompt Training process can enable the model to learn the domain characteristics and the writing styles of novel products on a small amount of data.
It is worth noting that the auto-encoding process in L^T_XE, which reconstructs the input titles, is unsupervised.
It indicates that our method 1) can be further improved by using more large-scale unlabeled texts; 2) can control the style of the generated titles by adjusting the style of input titles; and 3) can continuously learn from newly added texts of novel products to boost the performance as novel products are developed.
§.§.§ Multimodal Prompt Training
After learning the novel domain characteristics and the new writing styles of novel products in the Unimodal Prompt Training process, we further propose the Multimodal Prompt Training process to train the framework, learning to capture the important characteristics in different prompts and describe the novel product based on the input image and attributes of the novel product.
In implementations, we first extract the representations of input image R_I and input attributes R_A.
Then, to boost performance, we propose to capture important characteristics and filter noisy characteristics from the visual prompts 𝒫_I, attribute prompts 𝒫_A, and language prompts 𝒫_T.
Considering that important characteristics will appear in the three prompts simultaneously, we introduce the Cycle Alignment Network to perform cycle alignment of different prompts.
As shown in Figure <ref>, we take the visual prompts 𝒫_I as a `query' to retrieve the related novel product characteristics preserved in visual prompts 𝒫_I, attribute prompts 𝒫_A, and language prompts 𝒫_T:
𝒫_I → I = α𝒫_I = ∑_k=1^N_Pα_k p_k , where α = softmax(𝒫_I 𝒫_I^⊤)
𝒫_I → A = β𝒫_A = ∑_k=1^N_Pβ_k p_k , where β = softmax(𝒫_I 𝒫_A^⊤)
𝒫_I → T = γ𝒫_T = ∑_k=1^N_Pγ_k p_k , where γ = softmax(𝒫_I 𝒫_T^⊤)
Similarly, we can take the attribute prompts 𝒫_A and language prompts 𝒫_T as a `query' to retrieve the related novel product characteristics across different modalities, acquiring 𝒫_A → A, 𝒫_A → I, 𝒫_A → T, 𝒫_T → T, 𝒫_T → I, 𝒫_T → A.
Then, we can obtain the aligned prompts 𝒫̂ by concatenating them.
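A minimal PyTorch sketch of this cycle alignment, as we read the equations above (not the authors' implementation):

import torch

def align(query, key):
    # P_{q -> k} = softmax(P_q P_k^T) P_k, attention over the N_P prompt vectors.
    attn = torch.softmax(query @ key.transpose(-2, -1), dim=-1)
    return attn @ key

def cycle_align(p_i, p_a, p_t):
    # Each prompt set queries all three sets; the nine results are concatenated into P_hat.
    aligned = [align(q, k) for q in (p_i, p_a, p_t) for k in (p_i, p_a, p_t)]
    return torch.cat(aligned, dim=-2)  # shape (9 * N_P, d)

p_i, p_a, p_t = (torch.randn(16, 512) for _ in range(3))
p_hat = cycle_align(p_i, p_a, p_t)
print(p_hat.shape)  # torch.Size([144, 512])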
Finally, given the ground truth titles T={w_1,w_2,…,w_N}, we again adopt the cross-entropy loss to train our framework to generate the final novel product titles based on 𝒫̂:
L_XE=-∑_t=1^Nlog(p(w_t| w_1: t-1; 𝒫̂, I, A)) .
During inference, we follow the {I,A}→𝒫̂→ T pipeline to generate titles of the test products.
In this way, our MPL framework can relax the reliance on large-scale annotated datasets and achieve competitive results with previous works with only 1% training data.
§ EXPERIMENTS
In this section, we first describe a large-scale dataset, the widely-used metrics, and the settings used for evaluation.
Then, we present the results of in-domain and out-of-domain experiments.
§.§ Datasets, Metrics, and Settings
Datasets
We evaluate our proposed framework on a publicly available dataset, i.e., Amazon Product Dataset <cit.>, which consists of around 15M products.
For data preparation, we first exclude entries without images/attributes/titles, which results in around 5.2M products across 15 categories.
The detailed statistics are summarized in the supplementary material. We randomly partition the dataset into 70%-20%-10% train-validation-test partitions according to products.
Therefore, there is no overlap of products between train, validation, and test sets.
Metrics
Following common practice in multimodal language generation tasks <cit.>, we adopt the widely-used generation metrics, i.e., BLEU-4 <cit.>, ROUGE-L <cit.>, and CIDEr <cit.>, which measure the match between the generated and ground truth sentences.
Implementations
We follow the state-of-the-art method CLIP <cit.>, which has shown great success on various multimodal tasks.
Therefore, we adopt CLIP as our base model.
In particular, the ViT <cit.> is used as the image encoder, the BERT <cit.> is used as the attribute/text encoder, and the Transformer-BASE <cit.> is used as the text decoder.
The model size d is set to 512.
Based on the average performance on the validation set, the number of prompts N_P is set to 16.
For optimization, we adopt the AdamW optimizer <cit.> with a batch size of 128 and a learning rate of 1e-4. We perform early stopping based on CIDEr. We apply a beam search of size 3 for inference.
Our framework is trained on 4 V100 GPUs using mixed-precision training <cit.>.
Settings
As shown in Table <ref>, we perform the out-of-domain and in-domain experiments.
* Out-of-Domain Experiments are conducted by directly transferring the CLIP model pre-trained on natural image and text datasets, such as MSCOCO <cit.>, WIT <cit.>, and Conceptual Captions <cit.>, to the novel products.
* In-Domain Experiments are conducted by pre-training the models on the top ten products in terms of quantity and then testing on the remaining five novel products. Therefore, there is no overlap of products between training and testing sets.
To strengthen the evaluation, we further re-implement five state-of-the-art fully-supervised multimodal language generation methods, i.e., KOBE <cit.>, CLIP-Captioning <cit.>, M2-Transformer <cit.>, X-Transformer <cit.>, and LVP-M^3 <cit.>, of which KOBE is specifically designed for E-commerce, and two previous few-shot learning methods, i.e., VL-BART <cit.> and VL-ADAPTER <cit.>, in our experiments.
§.§ Out-of-Domain Results
The results are reported in Table <ref>, which shows the superior performance of our approach.
As we can see, our framework outperforms previous few-shot learning methods by an average of 3.76% BLEU-4, 7.9% ROUGE-L, and 10.46% CIDEr scores.
Therefore, with only 1% of the training data, our MPL framework not only significantly outperforms previous few-shot learning methods, but also achieves results competitive with existing state-of-the-art fully-supervised methods trained on 100% of the training data.
This enables our framework to provide a solid basis for novel product title generation, helping sellers save time when deploying new products.
As a result, with full training data, our method achieves the best results across different novel products.
The performances prove the validity of our method in learning the domain characteristics and the writing styles of novel products, thus relaxing the dependency on the training data to generate accurate titles for novel products with lesser annotated data.
§.§ In-Domain Results
Table <ref> shows that under the in-domain setting, with only 1% training data, our MPL framework can surpass several state-of-the-art fully-supervised methods, e.g., X-Transformer <cit.> and M2-Transformer <cit.>, and significantly outperforms previous few-shot methods across all products on all metrics.
Meanwhile, with 100% of the training data, as in previous works, our approach achieves average absolute margins of 1.46%, 1.86%, and 2.72% over the current best results produced by CLIP <cit.> in terms of BLEU-4, ROUGE-L, and CIDEr, respectively.
The best results validate the effectiveness of our approach in producing higher-quality product titles, under both the few-shot and supervised experimental settings, verifying its generalization capabilities.
§ ANALYSIS
In this section, we conduct several analyses under the out-of-domain setting to better understand our proposed approach.
§.§ Ablation Study
We perform the ablation study of our MPL framework to show how our approach achieves competitive results with previous works with only 1% training data.
The results in Table <ref> show that both the unimodal prompt training and the multimodal prompt training contribute to improved performance.
It proves our arguments and the effectiveness of each proposed component.
In detail, by comparing (a-c) and Base, we can observe that the language prompts lead to the best improvements in the few-shot learning setting.
This may be explained by the fact that the language prompts 𝒫_T are used to reconstruct the original input sentence; it is therefore straightforward for the model to learn the necessary domain characteristics and writing styles through auto-encoding from a small amount of data in the few-shot setting.
Meanwhile, the visual prompts 𝒫_I lead to the best improvements in the supervised learning setting.
It means that when the training data is sufficient, it is important to further capture accurate and rich visual information from the product's image to generate a desirable and concise title.
We observe an overall improvement in setting (d) by combining the three unimodal prompts, which can improve performance from different perspectives.
Table <ref> (d) and MPL show that the MPT, which includes a cycle alignment network, can bring improvements on all metrics.
It proves the effectiveness of highlighting and capturing important characteristics by aligning prompts across multiple modalities to improve performances under both few-shot and supervised settings.
§.§ Qualitative Analysis
Figure <ref> gives an example to better understand our method.
As shown in the Blue-colored text, our method is significantly better aligned with ground truth than CLIP.
For example, our framework correctly describes the key characteristics, e.g., the brand name “Lenox” and the category “wedding cake”, and advantages, e.g., “tasty cake”.
However, CLIP generates several wrong words (Red-colored text) and cannot describe the products well.
More importantly, the visualization of the prompts shows that our approach can accurately learn the novel product domain characteristics to boost the generation of novel product titles.
For example, the visual prompts can accurately capture the “cake”, especially the attribute prompts can correctly capture the brand name “Lenox” and characteristics “bride and groom”, and the language prompts can capture the “tasty” and “wedding” according to the “cake” and “bride and groom”, respectively.
Overall, this qualitatively shows that our approach can capture the important domain characteristics of novel products via multimodal prompt learning, achieving results competitive with the previous supervised method CLIP using only 1% of the labelled training data and verifying the effectiveness of our approach for novel title generation with extremely limited labels.
§ CONCLUSION
In this paper, we present the Multimodal Prompt Learning (MPL) framework to accurately and efficiently generate titles of novel products with limited training data.
Our MPL introduces various prompts across different modalities to sufficiently learn novel domain characteristics and writing styles, which are aligned and exploited to generate desirable novel product titles.
The out-of-domain and in-domain experiments on a large-scale dataset across five novel product categories show that, with only 1% downstream labelled data for training, our approach achieves competitive results with fully-supervised methods.
Moreover, with the full training data used in previous works, our method sets a new state-of-the-art performance, which proves the effectiveness of our approach and shows its potential for deploying novel products online in time to boost product sales.
§ LIMITATIONS
This paper introduces the problem of few-shot novel product title generation to efficiently and accurately generate informative and appealing titles for novel products with limited labeled data.
However, the training of our proposed model relies on the paired image-attribute-title data, which may not be easily obtained simultaneously in the real world.
Therefore, our model may not work well when high-quality image data or textual profile is missing.
The limitations could be alleviated using techniques such as knowledge distillation or self-training.
Besides, the writing styles of the generated titles are highly correlated with the training data. Hence, it requires specific and appropriate treatment by experienced practitioners, when deploying new products online.
§ ETHICS STATEMENT
We conduct the experiments on the public dataset, which is exclusively about E-commerce and does not contain any information that names or uniquely identifies individual people or offensive content. Therefore, we ensure that our paper conforms to the ethics review guidelines.
§ ACKNOWLEDGEMENTS
This paper was partially supported by NSFC (No: 62176008) and Shenzhen Science & Technology Research Program (No: GXWD20201231165807007-20200814115301001).
|
http://arxiv.org/abs/2307.00941v1
|
20230703113039
|
Topological control for min-max free boundary minimal surfaces
|
[
"Giada Franz",
"Mario B. Schulz"
] |
math.DG
|
[
"math.DG"
] |
We establish general bounds on the topology of free boundary minimal surfaces obtained via min-max methods in compact, three-dimensional ambient manifolds with mean convex boundary.
We prove that the first Betti number is lower semicontinuous along min-max sequences converging in the sense of varifolds to free boundary minimal surfaces.
In the orientable case, we obtain an even stronger result which implies that if the number of boundary components increases in the varifold limit, then the genus decreases at least as much.
We also present several compelling applications, such as the variational construction of a free boundary minimal trinoid in the Euclidean unit ball.
§ INTRODUCTION
Free boundary minimal surfaces are critical points of the area functional among all surfaces whose boundaries are constrained to the boundary of an ambient Riemannian manifold M.
Equivalently, a free boundary minimal surface has vanishing mean curvature and meets ∂ M orthogonally along its own boundary.
The case where the ambient manifold is the Euclidean unit ball is particularly interesting due to the connection with the optimization problem for the first Steklov eigenvalue
(cf. <cit.>).
Constructing examples of embedded free boundary minimal surfaces is a challenging problem, especially in ambient manifolds (such as the Euclidean unit ball) which only allow unstable solutions.
Among the most successful approaches towards existence results are gluing techniques and min-max methods.
Gluing methods provide straightforward control on the topology and symmetry of solutions, but usually require the genus or the number of boundary components to be sufficiently large
(cf. <cit.>).
With min-max methods one can construct examples of embedded free boundary minimal surfaces with low genus and few boundary components, as demonstrated e. g. in <cit.>.
However, controlling the topology of the limit surface is notoriously difficult because min-max sequences converge, a priori, only in the sense of varifolds.
Within the context of the min-max theory established by Almgren–Pitts <cit.> and Marques–Neves <cit.>, direct control on the topology of the limit surface is out of reach so far.
In fact, when aiming to construct free boundary minimal surfaces with prescribed topology, it is advantageous to employ the min-max theory by Simon–Smith <cit.> and Colding–De Lellis <cit.> because this variant imposes stricter criteria on the regularity and convergence of the sweepouts (see <cit.>*§ 2.11).
In this setting, general genus bounds have been obtained in <cit.>*Theorem 0.6, <cit.>*Theorem 9.1 and <cit.>.
However, there appears to be no result that provides general control on the number of boundary components of a free boundary minimal surface obtained through min-max methods.
Li <cit.> has even noted the impossibility of obtaining such a bound – at least without any assumption on the ambient manifold.
In this article, we address this gap in the literature by presenting a result that holds in ambient manifolds with strictly mean convex boundary.
This setting has the advantage that embedded free boundary minimal surfaces are necessarily properly embedded.
We prove in Theorem <ref> that the first Betti number is lower semicontinuous along min-max sequences converging to free boundary minimal surfaces.
Moreover, we introduce the notions of genus complexity and boundary complexity (which are closely related to the genus and the number of boundary components respectively) and prove in Theorem <ref> that their sum is lower semicontinuous along min-max sequences consisting of orientable surfaces.
A key ingredient in the proof is Simon's lifting lemma, as well as a novel free boundary version of that result, Lemma <ref>.
We also present several applications.
In Section <ref>, we recover the topological control on the examples with arbitrary genus and connected boundary constructed in <cit.> as a special case of our main result.
Moreover, our estimates can be applied to recover the existence of free boundary minimal discs in convex bodies shown in <cit.>.
Besides, we use our result to prove that there exists a free boundary minimal surface with Morse index equal to 5 in the Euclidean unit ball, refining <cit.>*Theorem 1.4.
Last but not least, Section <ref> contains a variational construction of embedded free boundary minimal surfaces in the three-dimensional Euclidean unit ball with genus zero and an arbitrary number of pairwise isometric boundary components which are aligned along the equator (cf. <cit.>).
§.§ Notation and convention
Throughout the article, we consider M to be a compact, three-dimensional Riemannian manifold with strictly mean convex boundary ∂ M.
We refer to M as the ambient manifold.
In this context, G denotes a finite group of orientation-preserving isometries of M.
A subset Σ⊂ M is called G-equivariant if the action of G on M restricts to an action on Σ which commutes with the inclusion map of Σ into M.
A submanifold Σ⊂ M is called properly embedded, if it is embedded and
satisfies ∂Σ=Σ∩∂ M.
Throughout this article, we call two submanifolds of M isometric if there exists an ambient isometry mapping one onto the other.
(In the literature, the term “congruent” is occasionally used interchangeably.)
A compact manifold without boundary is called closed.
The k-dimensional Hausdorff measure on M is denoted by ℋ^k.
In particular, the area of a smooth surface Σ⊂ M is given by ℋ^2(Σ).
The kth Betti number of any topological space X, i. e. the rank of the kth homology group H_k(X) is denoted by β_k(X).
Our work relies on the theory of varifolds for which we refer the reader e. g. to Chapters 4 and 8 of <cit.> (see also <cit.> and <cit.> for a brief introduction).
We only consider 2-varifolds in M.
The support supp Γ of a varifold Γ is the smallest closed subset of M outside which the mass measure ‖Γ‖ vanishes identically.
The ε-neighbourhood around any subset N⊂ M is denoted by
U_ε N.
Given a varifold Γ in M we abuse notation slightly by defining
U_εΓ = {p∈ M : dist_M(p, supp Γ) < ε}.
More generally, f(Γ) shall be understood as short-hand notation for f(supp Γ) whenever the latter is well-defined, for example when Γ is induced by a smooth surface and f = genus.
In particular, this notation does not take multiplicity into account, as f(mΓ)=f(Γ) for any positive integer m.
§.§ Equivariant min-max theory
We briefly recall the min-max theory à la Simon–Smith <cit.> and Colding–De Lellis <cit.>, for the equivariant, free boundary setting.
We provide the basic definitions and summarize the known results in Theorem <ref>.
The theorem is based on several contributions, among which we emphasize <cit.>.
For more details and comments, we refer to <cit.>*Part II (where similar notation as in this paper is employed).
Given an ambient manifold M and a group G of isometries as above, we say that {Σ_t}_t∈[0,1]^n is an (n-parameter) G-sweepout of M if the following properties are satisfied:
* Σ_t is a G-equivariant subset of M for all t∈[0,1]^n;
* Σ_t is a smooth, properly embedded surface in M for all t∈(0,1)^n;
* Σ_t varies smoothly for t∈(0,1)^n and continuously, in the sense of varifolds, for t∈[0,1]^n.
One can relax Definition <ref> slightly by allowing finite sets of points in M and parameters in [0,1]^n where the smoothness in <ref> and <ref> is not satisfied.
With this more general definition, which is given explicitly in <cit.>*Definition 9.1.4, all the results in this paper remain true.
Given a G-sweepout {Σ_t}_t∈[0,1]^n of M, we define its G-saturation Π to be the set of all
{Φ(t,Σ_t)}_t∈[0,1]^n, where
Φ: [0,1]^n× M→ M is a smooth map such that Φ(t,·) is a diffeomorphism which commutes with the G-action for all t∈[0,1]^n and coincides with the identity for all t∈∂[0,1]^n.
The min-max width of Π is then defined as
W_Π = inf_{Λ_t}∈Π sup_t∈[0,1]^n ℋ^2(Λ_t).
If a sequence {{Λ_t^j}_t∈[0,1]^n}_j∈ℕ in Π is minimizing in the sense that
sup_t∈[0,1]^n ℋ^2(Λ_t^j)→ W_Π as j→∞ and if {t_j}_j∈ℕ is a sequence in [0,1]^n such that ℋ^2(Λ_t_j^j)→ W_Π as j→∞, then we call
{Λ_t_j^j}_j∈ℕ a min-max sequence.
Below, we recall the equivariant min-max theorem in the free boundary setting (cf. <cit.>*Theorem 9.2.1 for a reference with the same notation).
As aforementioned, the result builds upon several contributions, among which we would like to mention the foundational papers <cit.> and <cit.>, the results about the lower semicontinuity of the genus in <cit.>, the adaptations to the free boundary setting in <cit.> and to the equivariant setting in <cit.>, and the estimate on the equivariant index in <cit.>.
We would like to remark that the statement <cit.>*Lemma 3.5 attributed to <cit.> requires the additional assumption that M has mean convex boundary, which is satisfied in our setting.
In fact, under this assumption, the lemma follows from <cit.>*Theorem 6.1.
Let M be a three-dimensional Riemannian manifold with strictly mean convex boundary and let G be a finite group of orientation-preserving isometries of M.
Let {Σ_t}_t∈[0,1]^n be a G-sweepout of M.
If the min-max width W_Π of its G-saturation satisfies
W_Π > sup_t∈∂[0,1]^n ℋ^2(Σ_t),
then there exists a min-max sequence {Σ^j}_j∈ℕ of (smooth) G-equivariant surfaces converging in the sense of varifolds to
Γ = ∑_i=1^ℓ m_iΓ_i,
where the varifolds Γ_1,…,Γ_ℓ are induced by pairwise disjoint, connected free boundary minimal surfaces in M and where the multiplicities m_1,…,m_ℓ are positive integers.
Moreover, the G-equivariant index of the support of Γ is less than or equal to n.
Finally, if all surfaces in the sweepout are orientable, then the genus bound
∑_i∈𝒪 genus(Γ_i) + 1/2 ∑_i∈𝒩 (genus(Γ_i)−1)
≤ lim inf_j→∞ genus(Σ^j)
holds.
Here, 𝒪 respectively 𝒩 denote the set of indices i∈{1,…,K} such that Γ_i is orientable respectively nonorientable.
The analogous genus bound for min-max sequences in closed ambient manifolds has been improved by Ketover <cit.>, taking the multiplicity of the convergence into account on the left-hand side of (<ref>).
In this article, we focus on estimates without multiplicity because this is sufficient for all our current applications.
§.§ Main results
The goal of this paper is to improve the topological control (<ref>) in Theorem <ref>,
especially in terms of the number of boundary components of the resulting free boundary minimal surface, which has been obscure until now.
In order to state our main results Theorems <ref> and <ref>, we introduce a novel notion of topological complexity.
The genus of any compact, connected surface Σ is defined as the maximum number of disjoint simple closed curves which can be removed from Σ without disconnecting it.
Given any compact, possibly disconnected surface Σ, let 𝒪(Σ) respectively 𝒩(Σ) denote the set of its orientable respectively nonorientable connected components and let ℬ(Σ) be the set of all connected components of Σ with nonempty boundary.
We define the genus complexity 𝔤(Σ) and the boundary complexity 𝔟(Σ) of Σ by
𝔤(Σ) = ∑_Σ̂∈𝒪(Σ) genus(Σ̂) + 1/2 ∑_Σ̂∈𝒩(Σ) (genus(Σ̂)−1),
𝔟(Σ) = ∑_Σ̂∈ℬ(Σ) (β_0(∂Σ̂)−1),
where β_0(∂Σ̂) denotes the number of boundary components of Σ̂ (the 0th Betti number of ∂Σ̂).
We recall that whenever Γ is a varifold induced by a smooth surface then genus(Γ) is understood as the genus of the support of Γ; analogously 𝔤(Γ)=𝔤(spt Γ) and 𝔟(Γ)=𝔟(spt Γ).
The complexities 𝔤(Σ) and 𝔟(Σ) are nonnegative numbers for any surface Σ.
Moreover, 𝔤 and 𝔟 are additive with respect to taking unions of connected components and therefore nonincreasing for the operation of discarding connected components.
The definition of 𝔤(Σ) is consistent with the left-hand side in <cit.>, which is further elaborated and supported in <cit.>.
Corollary <ref> in the appendix states that the first Betti number β_1(Σ) coincides with the sum of 2𝔤(Σ)+𝔟(Σ) and the number of nonorientable connected components of Σ with nonempty boundary.
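For illustration, consider Σ=Σ_1⊔Σ_2, where Σ_1 is an orientable surface of genus 2 with three boundary components and Σ_2 is a Möbius band (nonorientable, of genus 1, with connected boundary). Then
𝔤(Σ) = genus(Σ_1) + 1/2(genus(Σ_2)−1) = 2, 𝔟(Σ) = (3−1)+(1−1) = 2,
while β_1(Σ_1)=2·2+3−1=6 (a compact orientable surface of genus g with b>0 boundary components has β_1=2g+b−1) and β_1(Σ_2)=1, hence β_1(Σ)=7.
This is consistent with Corollary <ref>, since 2𝔤(Σ)+𝔟(Σ)=6 and Σ has exactly one nonorientable connected component with nonempty boundary.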
In the setting of Theorem <ref>, where the min-max sequence {Σ^j}_j∈ℕ converges in the sense of varifolds to Γ,
the first Betti number and the genus complexity are lower semicontinuous along the min-max sequence in the sense that
β_1(Γ) ≤ lim inf_j→∞ β_1(Σ^j), 𝔤(Γ) ≤ lim inf_j→∞ 𝔤(Σ^j).
If all the surfaces in the sweepout are orientable, then the lower semicontinuity of the genus complexity stated in Theorem <ref> recovers estimate (<ref>) of Theorem <ref>.
The control on the first Betti number is the main novelty in Theorem <ref>.
In the case where all surfaces in the min-max sequence are orientable,
we obtain the following sharpened version of this estimate.
In the setting of Theorem <ref>, if all surfaces in the min-max sequence {Σ^j}_j∈ℕ are orientable, then the sum of the genus and boundary complexities is lower semicontinuous along the min-max sequence in the sense that
𝔤(Γ)+𝔟(Γ) ≤ lim inf_j→∞ (𝔤(Σ^j)+𝔟(Σ^j)).
Properly embedded surfaces in simply connected, orientable, three-dimensional Riemannian manifolds are necessarily orientable (see <cit.>*Lemma C.1).
Therefore, Theorem <ref> applies when the ambient manifold M is simply connected and orientable, hence, in particular, when M is the three-dimensional Euclidean unit ball.
It is clear that we cannot expect the boundary complexity alone to be lower semicontinuous when passing to the limit along a min-max sequence.
As an example one can imagine that a (hypothetical) sweepout of the Euclidean unit ball 𝔹³ consisting of annuli is given such that the corresponding min-max limit is a catenoid Γ.
Suppose we modify each surface in the sweepout by attaching a very thin “half-neck” ribbon which connects the pair of boundary components.
Any min-max sequence would then consist of surfaces Σ^j with genus one and connected boundary.
However, since the number of parameters has not been increased, we would still expect the min-max limit to be a catenoid, and in particular, 𝔤(Γ)=𝔤(Σ^j)−1 and 𝔟(Γ)=𝔟(Σ^j)+1 for all j.
In this sense we expect Theorem <ref> to be sharp.
A similar line of reasoning indicates that it is necessary to assume orientability of the min-max sequence in Theorem <ref>.
Suppose that the ambient manifold M allows us to attach a thin “twisted half-neck” to connect the two boundary components of an annulus, in such a way that the resulting surface is properly embedded with the topology of a punctured Klein bottle.
If we have a min-max sequence {Σ^j}_j∈ℕ consisting of such surfaces, which again converges to a free boundary minimal annulus Γ in M, then 𝔤(Σ^j)=1/2 and 𝔟(Σ^j)=0 but 𝔤(Γ)=0 and 𝔟(Γ)=1.
In the hypothetical scenarios above, we modified a given sweepout by attaching a dispensable half-neck.
In Section <ref>, we rigorously define the “inverse” of this operation.
The idea is to modify any given sweepout by cutting away necks and half-necks that disappear anyway when passing to the min-max limit, such that after this surgery procedure (cf. Definition <ref>) we do have lower semicontinuity of the genus complexity and the boundary complexity independently.
This statement is made precise in Theorem <ref> which serves as an intermediate step in the proof of Theorems <ref> and <ref>.
We emphasize that in the aforementioned Theorem <ref>, an estimate on the boundary complexity is more meaningful than an estimate on the number of boundary components.
Indeed, the surgery procedure described in Section <ref> could generate arbitrarily many spurious topological discs (with limit equal to zero in the sense of varifolds).
These discs contribute to the number of boundary components, making this quantity a weaker upper bound, but they do not contribute to the boundary complexity since 𝔟 vanishes for topological discs.
Acknowledgements.
The authors would like to thank Alessandro Carlotto for many helpful comments and discussions.
This project has received funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044 – 390685587, Mathematics Münster: Dynamics–Geometry–Structure, and the Collaborative Research Centre CRC 1442, Geometry: Deformations and Rigidity, and it has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 947923).
§ DIRECT APPLICATIONS
In this short section, we demonstrate the strength of our result by recovering the topological control on some of the known free boundary minimal surfaces constructed via min-max methods.
Discs.
As a first example, we choose a strictly convex subset of ℝ³ as ambient manifold M and the trivial group as G.
Grüter and Jost <cit.> proved that M contains an embedded, free boundary minimal disc and our estimates can be applied to reproduce the topological control established in <cit.>.
Indeed, since the sweepout defined in <cit.> consists of topological discs and satisfies the width estimate, Theorem <ref> implies the existence of some Γ=∑_i=1^K m_iΓ_i induced by embedded free boundary minimal surfaces in M.
Theorem <ref> then yields β_1(Γ)=0.
Since ℝ³ does not contain any closed minimal surfaces,
we directly obtain (e. g. by Proposition <ref>) that each connected component Γ_i has genus zero and connected boundary.
In other words, Γ is a union of embedded free boundary minimal discs.
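Spelling out the bookkeeping: by Corollary <ref>, β_1(Γ) is the sum of the nonnegative quantities 2𝔤(Γ), 𝔟(Γ) and the number of nonorientable connected components of Γ with nonempty boundary.
Hence β_1(Γ)=0 yields 𝔤(Γ)=0 and 𝔟(Γ)=0 and excludes nonorientable components with boundary; since no component can be closed, every component is an orientable surface of genus zero with connected, nonempty boundary, i. e. a topological disc.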
Our argument remains valid if the convexity assumption on ∂ M is relaxed to strict mean convexity,
provided that M still allows a sweepout consisting of topological discs and satisfying the width estimate.
Connected boundary and arbitrary genus.
Our second application concerns the free boundary minimal surfaces constructed in <cit.>.
The ambient manifold is the Euclidean unit ball 𝔹³ and G is the dihedral group 𝔻_g+1 (cf. Section <ref>).
Building upon the following facts, we present an independent proof of the topological control established in <cit.>.
* For any given g∈ℕ there is a sweepout of 𝔹³ consisting of connected, 𝔻_g+1-equivariant surfaces with genus g and connected boundary and its 𝔻_g+1-saturation satisfies the width estimate (cf. <cit.>).
* A corresponding min-max sequence converges in the sense of varifolds (with multiplicity one) to a connected, 𝔻_g+1-equivariant free boundary minimal surface M_g⊂𝔹³ which contains the origin but is not a topological disc (cf. <cit.>).
All surfaces in question are properly embedded in 𝔹³ and hence orientable by Remark <ref>.
In order to determine the topology of M_g we apply Theorems <ref> and <ref> and obtain
genus(M_g)=𝔤(M_g) ≤ g, 𝔤(M_g)+𝔟(M_g) ≤ g.
Since M_g contains the origin but is not a topological disc, we also know genus(M_g)≥1 by <cit.>.
From the 𝔻_g+1-equivariance, we then obtain genus(M_g)=g as stated in <cit.>.
The second estimate in (<ref>) now directly yields 𝔟(M_g)≤ 0.
Since M_g is connected with nonempty boundary, β_0(∂ M_g)=1 follows.
With the same arguments as above, one can prove that the free boundary minimal surface Σ_g from <cit.> has genus g and at most three boundary components for all g∈ℕ.
Only for sufficiently large g, it is known that Σ_g has exactly three boundary components (cf. <cit.>).
Index five.
In <cit.> Chun Pong Chu applied (nonequivariant) min-max theory to show that the Euclidean unit ball contains an embedded free boundary minimal surface Γ with genus 0 or 1 and Morse index 4 or 5, which is neither isometric to the equatorial disc nor the critical catenoid.
By gaining topological control on Γ using our results, we can show additionally that Γ has in fact Morse index equal to 5 as conjectured in <cit.>.
Indeed, since all surfaces in the sweepout constructed in <cit.> have genus at most one and connected boundary, Theorems <ref> and <ref> yield 𝔤(Γ)≤1 and 𝔤(Γ)+𝔟(Γ)≤1.
In particular, Γ has either genus one and connected boundary, or genus zero and two boundary components.
(Any solution with genus zero and connected boundary is isometric to the equatorial disc by <cit.>, and this case has already been excluded.)
In either case, it follows that the Morse index of Γ must be equal to 5, indeed:
* if Γ has genus one then its Morse index cannot be equal to 4 by <cit.>*Corollary 7.2 because by construction, Γ is orientable with embedded boundary;
* if Γ has genus zero and two boundary components, then its Morse index cannot be equal to 4 because otherwise it would be isometric to the critical catenoid by <cit.>*Corollary 7.3 and this case has already been excluded.
We emphasize that the argument above relies crucially on the sharpness of the estimate for 𝔤+𝔟 in Theorem <ref>.
For comparison, the bound β_1(Γ)≤2 on the first Betti number (as given by Theorem <ref>) would allow a surface Γ with genus zero and three boundary components, and the results in <cit.> do not exclude the possibility of such a surface having Morse index equal to 4.
It is unknown whether the surface Γ with index 5 and the surface M_1 with genus g=1 and connected boundary constructed in <cit.> are isometric.
Numerical simulations suggest that M_1 has indeed Morse index equal to 5 but even then the uniqueness question remains open.
§ SURGERY
In geometric topology, the concept of surgery originated from the work of Milnor <cit.> and refers to techniques that allow the controlled construction of a finite dimensional manifold from another.
More specifically, surgery involves removing parts of a manifold and replacing them with corresponding parts of another manifold such that the cut matches up.
Through surgery, we can modify the topology of a manifold while retaining certain desired properties.
Let Σ and Σ̃ be two smooth, compact surfaces which are properly embedded in the ambient manifold M.
*
We say that Σ̃ is obtained from Σ by cutting away a neck if
* Σ∖Σ̃ is homeomorphic to 𝕊^1×(0,1),
* Σ̃∖Σ is homeomorphic to the disjoint union of two open discs,
* the closure of Σ△Σ̃ is a topological sphere contained in the interior of M.
*
We say that Σ̃ is obtained from Σ by cutting away a half-neck if
* Σ∖Σ̃ is homeomorphic to [0,1]×(0,1),
* Σ̃∖Σ is homeomorphic to the disjoint union of two relatively open half-discs, i. e.
sets homeomorphic to [0,1)×(0,1),
* the closure of Σ△Σ̃ is a topological disc which is properly embedded in M.
*
We say that Σ̃ is obtained from Σ through surgery if there is a finite number of surfaces Σ=Σ_0,Σ_1,…,Σ_N=Σ̃ such that each Σ_k for k=1,…,N is
* either the union of some of the connected components of Σ_k-1,
* or obtained from Σ_k-1 by cutting away a neck, or a half-neck, as defined above.
Orientability is preserved under surgery.
By cutting away a neck or a half-neck,
* the number of connected components can increase at most by one;
* the number of orientable connected components can increase at most by one;
* the number of nonorientable connected components can increase at most by one.
Let Σ̃ be obtained from Σ through surgery.
If Σ is orientable, then Σ̃ is orientable since any orientation on Σ̃∩Σ can be extended to the union Σ̃∖Σ of discs respectively half-discs.
In Definition <ref> it is clear that the number of connected components can increase at most by one when cutting away a neck or a half-neck.
For the remaining claims, we may consider without loss of generality a connected surface Σ
which is split into the disjoint union of two connected surfaces Σ_1 and Σ_2 by cutting away a neck or a half-neck.
Suppose Σ_1 and Σ_2 are both orientable.
By Definition <ref>, any chosen orientation on Σ∩Σ_1 can be extended consistently to Σ∖Σ_2.
Similarly, any orientation on Σ∩Σ_2 can be extended to Σ∖Σ_1.
Since Σ∖(Σ_1∪Σ_2) is connected by Definition <ref>, these two orientations either agree or disagree at every point of Σ∖(Σ_1∪Σ_2).
If they disagree, it suffices to reverse the chosen orientation on Σ∩Σ_1 in order to obtain an orientation on all of Σ.
Therefore, Σ is orientable, which shows that the number of orientable connected components can increase at most by one.
Conversely, if Σ_1 and Σ_2 are both nonorientable, then Σ is also nonorientable because surgery preserves orientability.
This completes the proof.
The following two lemmata show that the first Betti number and the genus complexity are both nonincreasing under surgery.
This holds without making any assumptions on the connectivity or orientability of the surfaces in question.
Let Σ⊂ M be any smooth, compact, properly embedded surface and let Σ̃ be obtained from Σ through surgery.
Then, β_1(Σ̃)≤β_1(Σ).
Since the first Betti number is additive with respect to taking unions of connected components, it suffices to consider the case when Σ̃ is obtained from Σ by cutting away a neck or a half-neck.
The corresponding Euler characteristics then satisfy
χ(Σ̃) = χ(Σ)+2 if Σ̃ is obtained from Σ by cutting away a neck,
χ(Σ̃) = χ(Σ)+1 if Σ̃ is obtained from Σ by cutting away a half-neck,
which follows from the fact that χ(X)=χ(U)+χ(V)-χ(U∩ V) for any pair U,V of open subsets covering a topological space X.
We also recall (e. g. from <cit.>) that the Euler characteristic coincides with the alternating sum of the Betti numbers, that is
χ(Σ) =β_0(Σ)-β_1(Σ)+β_2(Σ), χ(Σ̃) =β_0(Σ̃)-β_1(Σ̃)+β_2(Σ̃).
In the case when Σ̃ is obtained from Σ by cutting away a neck,
equations (<ref>) and (<ref>) imply
β_1(Σ̃)-β_1(Σ)
=β_0(Σ̃)-β_0(Σ)
+β_2(Σ̃)-β_2(Σ)-2.
Lemma <ref> implies β_0(Σ̃)-β_0(Σ)≤1 and β_2(Σ̃)-β_2(Σ)≤1, because for the surfaces in question, the second Betti number coincides with the number of their orientable connected components without boundary.
Thus, (<ref>) yields β_1(Σ̃)-β_1(Σ)≤0.
In the case when Σ̃ is obtained from Σ by cutting away a half-neck, equations (<ref>) and (<ref>) imply
β_1(Σ̃)-β_1(Σ)
=β_0(Σ̃)-β_0(Σ)
+β_2(Σ̃)-β_2(Σ)-1.
Lemma <ref> again implies β_0(Σ̃)-β_0(Σ)≤1.
Moreover, β_2(Σ̃)-β_2(Σ)=0 because the connected components of Σ and Σ̃ which are affected by the half-neck surgery necessarily have nonempty boundary.
Thus, (<ref>) yields β_1(Σ̃)-β_1(Σ)≤0 and the claim follows.
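As a quick sanity check: cutting away a neck from a torus produces a sphere, consistent with χ jumping from 0 to 2 and β_1 dropping from 2 to 0; cutting away a half-neck from an annulus between its two boundary circles produces a disc, with χ jumping from 0 to 1 and β_1 dropping from 1 to 0.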
Let Σ⊂ M be any smooth, compact, properly embedded surface and let Σ̃ be obtained from Σ through surgery.
Then, 𝔤(Σ̃)≤𝔤(Σ).
Recalling Remark <ref>, it again suffices to consider the case when Σ̃ is obtained from Σ by cutting away a neck or a half-neck.
Let us define the short-hand notation 𝔤=𝔤(Σ),
b = number of boundary components of Σ,
c_𝒪 = number of orientable connected components of Σ,
c_𝒩 = number of nonorientable connected components of Σ,
and, analogously, 𝔤̃, b̃, c̃_𝒪, c̃_𝒩 for the surface Σ̃ after surgery.
By Corollary <ref>, the Euler characteristics of Σ and Σ̃ are given by
χ(Σ) = 2c_𝒪+c_𝒩-2𝔤-b, χ(Σ̃) = 2c̃_𝒪+c̃_𝒩-2𝔤̃-b̃.
Moreover, Lemma <ref> implies
(c̃_𝒪+c̃_𝒩) - (c_𝒪+c_𝒩) ∈{0,1}, c̃_𝒪-c_𝒪 ∈{0,1}.
In the case when Σ̃ is obtained from Σ by cutting away a neck, the boundary of Σ is not modified.
Hence we have b̃=b.
Recalling χ(Σ̃)=χ(Σ)+2 from (<ref>) and combining this identity with (<ref>) and (<ref>), we obtain
2𝔤̃-2𝔤 = 2(c̃_𝒪-c_𝒪)+(c̃_𝒩-c_𝒩)-2
≤ (c̃_𝒪-c_𝒪)-1 ≤ 0.
In the case when Σ̃ is obtained from Σ by cutting away a half-neck, we have χ(Σ̃)=χ(Σ)+1 by (<ref>) and (<ref>) yields
2𝔤̃-2𝔤 = 2(c̃_𝒪-c_𝒪)+(c̃_𝒩-c_𝒩)-(b̃-b)-1.
We distinguish the following cases.
* The two boundary segments which are affected by the surgery belong to the same boundary component of Σ, in which case we have one of the following:
* b̃=b.
Then necessarily c̃_𝒪+c̃_𝒩=c_𝒪+c_𝒩 and (<ref>) and (<ref>) imply 𝔤̃≤𝔤.
* b̃=b+1. Then (<ref>) and (<ref>) imply the same estimate as in (<ref>).
* The two boundary segments which are affected by the surgery belong to two different boundary components of Σ, in which case we have b̃=b-1 and (c̃_𝒪+c̃_𝒩)=(c_𝒪+c_𝒩).
Then (<ref>) implies 2𝔤̃-2𝔤=c̃_𝒪-c_𝒪. Below we prove that in this case, we indeed have c̃_𝒪-c_𝒪=0 which completes the proof.
Let Σ and Σ̃ be as in case <ref>.
Then Σ has at least two boundary components γ_1 and γ_2 forming a half-neck N=Σ∖Σ̃ between them.
We claim that if Σ is nonorientable, then Σ̃ is nonorientable, too.
Without loss of generality we may assume that Σ and Σ̃ are both connected because cutting away a half-neck between two different boundary components does not disconnect a surface.
Any neighbourhood U of γ_1 in Σ which is homeomorphic to γ_1×0,1 is orientable.
Moreover, the intersection U∩ N is connected since γ_1 and γ_2 are disjoint.
In particular, any chosen orientation O on U can be extended to a neighbourhood of the half-neck N.
This implies that any orientation-reversing path in Σ passing through N can be modified such that it instead passes through U∖ N and is still orientation-reversing.
Consequently Σ̃ is nonorientable as claimed.
Case <ref> in the proof of Lemma <ref> occurs for example during half-neck surgery on a connected component which is homeomorphic to a Möbius band, in which case we have (c̃_𝒪,c̃_𝒩)=(c_𝒪+1,c_𝒩-1) and 𝔤̃=𝔤=0.
Let Σ⊂ M be any orientable, smooth, compact, properly embedded surface and let Σ̃ be obtained from Σ through surgery.
Then, Σ̃ is orientable and
𝔤(Σ̃)+𝔟(Σ̃) ≤ 𝔤(Σ)+𝔟(Σ).
As stated in Lemma <ref>, orientability is preserved under surgery.
By Remark <ref>, it suffices to consider the case when Σ̃ is obtained from Σ by cutting away a neck or a half-neck.
In analogy with the proof of Lemma <ref> we employ the short-hand notation 𝔤=𝔤(Σ), 𝔟=𝔟(Σ),
b = number of boundary components of Σ,
c = number of connected components of Σ,
and, similarly, 𝔤̃, 𝔟̃, b̃, c̃ for the surface Σ̃ after surgery.
In the case when Σ̃ is obtained from Σ by cutting away a neck, we have b̃=b because the boundary is not modified.
If cutting away the neck adds a connected component with nonempty boundary, 𝔟̃=𝔟-1 and otherwise 𝔟̃=𝔟.
In any case, 𝔟̃≤𝔟 and the claim follows directly recalling Lemma <ref>.
In the case when Σ̃ is obtained from Σ by cutting away a half-neck, we have χ(Σ̃)=χ(Σ)+1 by (<ref>).
In the orientable case, Corollary <ref> implies χ(Σ)=2c-2𝔤-b and we obtain
2𝔤̃-2𝔤 = 2(c̃-c)-(b̃-b)-1.
By Definition <ref>, we have 𝔟̃-𝔟 = (b̃-b)-(c̃-c).
Multiplying equation (<ref>) by two and adding it to (<ref>) yields
2(𝔤̃+𝔟̃)-2(𝔤+𝔟) = (b̃-b)-1.
Since the number of boundary components can increase at most by one when cutting away a half-neck, the claim follows.
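As an illustration: for a half-neck surgery turning an annulus into a disc we have b̃=b-1, so that 2(𝔤̃+𝔟̃)-2(𝔤+𝔟)=-2 and 𝔤+𝔟 drops from 0+1 to 0+0, whereas a half-neck surgery splitting a disc into two discs has b̃=b+1 and preserves 𝔤+𝔟=0; in particular, the estimate can be attained with equality.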
The following proposition for surfaces with boundary is analogous to <cit.> in the closed case.
Roughly speaking, it states that, after suitable surgery, a sequence converging in the sense of varifolds to a limit Γ is contained in the tubular neighbourhood U_2εΓ, as defined in equation (<ref>).
The proof is similar to the arguments for <cit.> and for <cit.>.
The main difference here is the presence of the group G and the possibility for half-neck surgeries.
Let {Σ^j}_j∈ℕ be a sequence of smooth, compact,
G-equivariant
surfaces which are properly embedded in M and converge in the sense of varifolds to Γ = ∑_i=1^K m_iΓ_i, where the varifolds Γ_1,…,Γ_K are induced by smooth, connected, pairwise disjoint surfaces.
Then, for every sufficiently small ε>0, there exists J_ε∈ℕ such that for all j≥ J_ε there is a
G-equivariant
surface Σ̃^j obtained from Σ^j through surgery and satisfying
Σ̃^j ⊂ U_2εΓ, Σ̃^j∩ U_εΓ =Σ^j∩ U_εΓ.
Let ε>0 be so small that there exists a smooth retraction of U_2εΓ onto Γ.
Convergence in the sense of varifolds implies that given any η>0 there exists J_ε,η∈ℕ such that ^2(Σ^j∖ U_εΓ)<η for every j≥ J_ε,η.
For s∈(0,2ε) the (relative) boundaries
V_sΓ=∂(U_sΓ)∖∂ M
foliate U_2εΓ smoothly.
Hence, by the coarea formula there is a finite constant C>0 depending only on the ambient manifold M such that
∫^2ε_ε^1(Σ^j∩ V_sΓ) ds
≤ C^2(Σ^j∖ U_εΓ)<Cη
for all j≥ J_ε,η.
Thus, there exists an open subset E⊂(ε,2ε) with measure at least ε/2 such that for all s∈ E
^1(Σ^j∩ V_sΓ)<2Cη/ε.
Since both surfaces are compact, the set of all s∈ E for which V_sΓ intersects Σ^j transversally is open and dense in E by Sard's theorem.
Hence, there exist s_0∈(ε,2ε) and δ>0 such that [s_0-δ,s_0+δ]⊂ E and such that
for all s∈[s_0-δ,s_0+δ] the intersection between Σ^j and V_sΓ is transverse.
In particular, any connected component of Σ^j∩ V_sΓ is either a simple closed curve or a segment connecting two points on ∂Σ^j.
There exists λ>0 (depending on Γ and ε) such that for any s∈(ε,2ε) any simple closed curve in V_sΓ with length less than λ bounds an embedded disc in V_sΓ and the closure of any segment in V_sΓ connecting two points on ∂ M with length less than λ bounds an embedded half-disc in V_sΓ.
Choosing first η>0 such that 2Cη<λε and then j≥ J_ε,η and
s∈[s_0-δ,s_0+δ]⊂ E⊂(ε,2ε) as above, we ensure that
each connected component v of Σ^j∩ V_sΓ has length less than λ and thus bounds either a disc or a half-disc in V_sΓ which we denote by D_s^v.
In particular,
Σ^j∩ (U_s_0+δΓ∖U_s_0-δΓ)
is a finite collection of embedded, pairwise disjoint necks and half-necks in the sense of Definition <ref>.
In principle these necks could be “nested” in the sense that
D_s_0^v⊊ D_s_0^w for different connected components v,w of Σ^j∩ V_s_0Γ.
Note that in this case D_s^v⊊ D_s^w for all s∈[s_0-δ,s_0+δ] since Σ^j is embedded.
The image of the (possibly noninjective) map v↦^2(D_s_0^v) is a discrete set of values a_1<… <a_m.
Let v be a connected component of Σ^j∩ V_s_0Γ such that D_s_0^v has area a_k.
Removing the corresponding connected component of Σ^j∩ (U_s_0+δ/kΓ∖U_s_0-δ/kΓ) and replacing it with D_s_0±δ/k^v is admissible surgery in the sense of Definition <ref> provided it is done for all such v and all k∈{1,…,m} in increasing order.
This procedure preserves the G-equivariance, since at each step the union of all surfaces involved is G-equivariant.
Indeed, since Σ^j and V_sΓ are G-equivariant, the union of all discs D_s_0^v having area a_k is also G-equivariant.
Let Σ̂^j be the new surface obtained from Σ^j through the procedure described in the previous paragraph.
By construction, Σ̂^j is disjoint from V_s_0Γ.
We may regularize Σ̂^j such that Σ̂^j is smooth and still has the properties of being G-equivariant, disjoint from V_s_0Γ and obtained from Σ^j through surgery.
Then, Σ̃^j=Σ̂^j∩ U_s_0Γ is a surface obtained from Σ̂^j by dropping a finite number of connected components and it satisfies
Σ̃^j⊂ U_2εΓ and Σ̃^j∩ U_εΓ=Σ^j∩ U_εΓ.
§ TOPOLOGICAL LOWER SEMICONTINUITY
The goal of this section is to prove Theorems <ref> and <ref>.
We start by introducing further notation and terminology.
As before, M denotes the compact, three-dimensional ambient manifold with strictly mean convex boundary ∂ M and G is a finite group of orientation-preserving isometries of M.
An isotopy of M is a smooth map Ψ : [0,1]× M→ M such that Ψ(s,·) is a diffeomorphism for all s∈[0,1] which coincides with the identity for s=0.
We say that an open set U⊂ M is G-compatible if φ(U) is either disjoint from U or equal to U for every φ∈ G.
Given a G-equivariant surface Σ⊂ M and a G-compatible set U⊂ M, we denote by Is_G^δ(U,Σ) the set of isotopies Ψ : [0,1]× M→ M satisfying the following properties:
* Ψ is supported in U in the sense that Ψ(s,x)=x for all x∈ M∖ U and s∈[0,1];
* ^2(Ψ(s,Σ))≤^2(Σ)+δ for all s∈[0,1];
* Ψ(s,·) commutes with the action of all φ∈ G satisfying φ(U)=U.
Given δ,ϵ>0, a G-compatible open set U⊂ M, and a G-equivariant surface Σ⊂ M, we say that Σ is (G,δ,ϵ)-almost minimizing in U if for every isotopy Ψ∈ Is_G^δ(U,Σ)
^2(Ψ(1,Σ))≥^2(Σ)-ϵ.
A sequence {Σ^j}_j∈ℕ of G-equivariant surfaces is called G-almost minimizing in U if there exist δ_j,ϵ_j>0 with ϵ_j→ 0 as j→∞ such that Σ^j is (G,δ_j,ϵ_j)-almost minimizing in U for all j∈ℕ.
The open metric ball B_r(x)⊂ M of radius r>0 around any x∈ M is G-compatible provided that r is sufficiently small (cf. <cit.>*Remark 10.1.3).
While min-max sequences are not even locally area minimizing in general, they are almost minimizing in sufficiently small annuli, as stated in the subsequent lemma, where 𝒜𝒩_r(x) denotes the collection of all annuli B_r_2(x)∖B_r_1(x)⊂ M for some 0<r_1<r_2<r.
There exists a G-invariant function r : M →(0,∞) such that the min-max sequence {Σ^j}_j∈ℕ from Theorem <ref> is G-almost minimizing in every set in 𝒜𝒩_r(x)(x) for all x∈ M.
The regularity theorem for the limit of an almost minimizing sequence is stated below.
For the proof and the literature related to this result, we refer to <cit.>*Proposition 13.5.3.
Let M be a three-dimensional Riemannian manifold with strictly mean convex boundary and let G be a finite group of orientation-preserving isometries of M.
Let {Σ^j}_j∈ℕ be a sequence of smooth surfaces which are properly embedded in M and G-almost minimizing in every set in 𝒜𝒩_r(x)(x) for all x∈ M, where r : M →(0,∞) is a G-invariant function.
Then (up to subsequence) {Σ^j}_j∈ℕ converges in the sense of varifolds to Γ=∑_i=1^K m_iΓ_i,
where the varifolds Γ_1,…,Γ_K are induced by pairwise disjoint, connected free boundary minimal surfaces in M and where the multiplicities m_1,…,m_K are positive integers.
§.§ Lifting lemmata
Simon's lifting lemma is key for the control on the topology of the limit surface Γ obtained in Proposition <ref>.
We recall the notation U_εΓ for the ε-neighbourhood around the support of Γ in M from equation (<ref>).
As in the proof of Proposition <ref>, there exists some ε_0>0 such that there is a smooth retraction of U_2ε_0Γ onto the support of Γ=∑_i=1^K m_iΓ_i.
In the setting of Proposition <ref>, let γ be a simple closed curve in the support of Γ_i for some i∈{1,…,K} and let 0<ε≤ε_0.
Then, for all sufficiently large j∈ℕ, there is a positive integer m≤ m_i and a closed curve γ^j in Σ^j∩ U_εΓ_i which is homotopic to mγ in U_εΓ_i.
Simon's lifting lemma does not require the surfaces Σ^j to be orientable.
Indeed, most of the proof relies on local arguments (which do not detect the orientability of the surface).
The only step where the argument is of global nature can be found in Section 4.3 of <cit.>, but one can easily check that this does not rely on orientability either (even though orientability is part of <cit.>).
Moreover, Lemma <ref> is robust against relaxing the assumptions on the smoothness of the sweepout according to Remark <ref> because curves can always be chosen to avoid finitely many points.
We aim at proving a free boundary version of Simon's lifting lemma in the sense that it applies to “unclosed” curves, or – more generally – a loopfree network of curves equipped with a tree structure.
Let U⊆ M be any submanifold.
A tree in U consists of finite sets of vertices {v_0,…,v_n}⊂ U and edges γ_1,…,γ_n⊂ U such that
* each edge is a smooth, embedded curve in U connecting two distinct vertices,
* the intersection of two different edges consists of at most one vertex,
* the union of all edges is connected and does not contain any closed, embedded curves.
Vertices contained in exactly one edge are called leaves.
We call a tree in U properly embedded in M if the set of leaves coincides with (⋃_i=1^nγ_i)∩(∂ M) and we call it rooted, if the vertex v_0 is an interior point in M.
We call two properly embedded trees in U properly homotopic in U if the two trees can be continuously deformed into each other while preserving the local structure around each vertex and each edge, and while constraining the leaves to ∂ M.
More precisely, if the first tree has vertices {v_0,…,v_n}⊂ U and edges γ_1,…,γ_n⊂ U then the second tree has the same number of vertices {v_0',…,v_n'}⊂ U and edges γ_1',…,γ_n'⊂ U, and there exists a continuous function H : [0,1]×⋃_i=1^nγ_i→ U such that
* H(0,·) is the identity, H(1,v_i)=v_i' for all i=0,…,n and H(1,γ_i) = γ_i' for all i=1,…,n;
* for every t∈[0,1] the two sets {H(t,v_0),…,H(t,v_n)} and
{H(t,γ_1),…,H(t,γ_n)} form a properly embedded tree in U.
In the setting of Proposition <ref>, let T be a properly embedded, rooted tree in Γ_i for some i∈{1,…,K}.
Given 0<ε≤ε_0 and any sufficiently large j∈ℕ, there is
a properly embedded tree T^j in Σ^j∩ U_εΓ_i which is properly homotopic to T in U_εΓ_i.
The proof of Lemma <ref> relies on the local behaviour of almost minimizing sequences.
This approach is facilitated by the following lemma.
In the setting of Proposition <ref>, let B⊂ M be a closed, G-compatible ball with sufficiently small radius such that there exist δ_j,ϵ_j>0 with ϵ_j→0 as j→∞ such that Σ^j is (G,δ_j,ϵ_j)-almost minimizing in B.
For every j∈ℕ, let {Φ_k^j}_k∈ℕ be a sequence of isotopies in Is_G^δ_j(B,Σ^j) such that
lim_k→∞ ^2(Φ^j_k(1,Σ^j))
= inf_Ψ∈ Is_G^δ_j(B,Σ^j) ^2(Ψ(1,Σ^j)).
Then, the following statements hold (cf. Figure <ref>).
*
For all j∈ℕ a subsequence of {Φ_k^j(1,Σ^j)}_k∈ℕ converges in the sense of varifolds in B to a smooth, properly embedded, G-stable minimal surface V^j⊂ B, which satisfies
∂ V^j∖∂ M=Σ^j∩∂ B∖∂ M
and the free boundary condition on ∂ M∩ B (if the latter is nonempty).
*
The sequence {V^j}_j∈ℕ converges in the sense of varifolds in B to the same limit
Γ=∑_i=1^K m_iΓ_i as the original sequence {Σ^j}_j∈ℕ.
Furthermore, within any ball that is concentric with B but has smaller radius, the convergence (up to subsequence) is smooth away from finitely many points in the singular locus of the action of G.
*
If Σ̂^j⊂Σ^j is the union of some of the connected components of Σ^j∩ B, then a subsequence of {Φ_k^j(1,Σ̂^j)}_k∈ℕ converges (with multiplicity one) in the sense of varifolds in B to V̂^j⊂ B, which is the union of some of the connected components of V^j and satisfies ∂V̂^j∖∂ M=∂Σ̂^j∖∂ M.
Moreover, if V̂^j intersects ∂ M, then Σ̂^j also has a nonempty intersection with ∂ M.
*
Let Σ̂^j⊂Σ^j and V̂^j be as in statement <ref>.
Then a subsequence of {V̂^j}_j∈ℕ converges in the sense of varifolds in B to ∑_i=1^K m̂_iΓ_i, where m̂_i∈{0,…,m_i} for i=1,…,K.
Moreover, within any ball that is concentric with B but has smaller radius, the convergence is smooth away from finitely many points in the singular locus of the action of G.
In statement <ref> we do not make any claims about the convergence of {Σ̂^j}_j∈.
It is unclear whether the sequence {Σ̂^j}_j∈ is still G-almost minimizing in B, hence we cannot apply the same arguments as for {Σ^j}_j∈.
By smooth convergence away from a finite set 𝒫 we mean that around every point in M∖𝒫 there exists a ball where (with respect to some suitable coordinate chart) the convergence is smooth graphical, possibility with multiplicity if specified (as e. g. in <ref>).
<ref>
Standard compactness arguments yield that, given any j∈ℕ, a subsequence of {Φ^j_k(1,Σ^j)}_k∈ℕ converges in the sense of varifolds to some V^j.
In B, the limit V^j is induced by a smooth, G-stable minimal surface with multiplicity one
which satisfies the desired boundary conditions by <cit.>*Theorem 13.4.3.
Moreover, by the same theorem, its genus and area are bounded uniformly with respect to j.
Note that V^j is properly embedded in B because it satisfies the boundary condition ∂ V^j∖∂ M=Σ^j∩∂ B∖∂ M and cannot have interior touching points on ∂ M because ∂ M is assumed to be strictly mean convex.
<ref> We may choose a sequence of indices k_j∈ such that the distance between Φ^j_k_j(1,Σ^j) and V^j is less than 1/j (with respect to a metric inducing the varifold convergence, see <cit.>*pp. 66 or <cit.>*pp. 703).
Since the initial sequence {Σ^j}_j∈ℕ converges to Γ=∑_i=1^K m_iΓ_i in the sense of varifolds, <cit.>*Lemma 3.7 implies that {Φ^j_k_j(1,Σ^j)}_j∈ℕ, and thus {V^j}_j∈ℕ, also converge in the sense of varifolds to Γ.
As observed in the proof of part <ref>, genus and area of V^j are bounded uniformly with respect to j.
Ilmanen's localized Gauss-Bonnet estimate (see <cit.>) then implies uniform integral bounds on the squared norm of the second fundamental form of V^j in the ball B'⊂ B with the same center as B but only half the radius.
Consequently, a subsequence of {V^j}_j∈ converges smoothly away from finitely many points where curvature may concentrate.
These points lie on the singular locus of the action of G, because G-stability of V^j implies curvature estimates in B' away from the singular locus.
<ref>
Following the proof of <cit.>*Lemma 9.1,
the idea is that the sequence {Φ^j_k(1,Σ^j)}_k∈ℕ converges in the sense of varifolds with multiplicity one, and this has further implications.
In particular, the convergence is consistent with the convergence in the sense of currents, wherein the behaviour of the boundary is well-defined.
By standard varifold and currents compactness, a subsequence of {Φ^j_k(1,Σ̂^j)}_k∈ℕ converges in B to a varifold V̂^j=V in the sense of varifolds and to an integer rectifiable current T in the sense of currents.
Observe that the boundary ∂ T of T is the limit in the sense of currents as k→∞ of the current induced by ∂Φ^j_k(1,Σ̂^j)=∂Σ̂^j (with positive orientation).
Denoting by ‖V‖ and ‖T‖ the measures on B induced by V and T, we have that
‖T‖ ≤ ‖V‖ ≤ ‖V^j‖ = ∑_i=1^L ^2⌞Δ_i,
where Δ_i are the connected components of the surface V^j. Since ∂ T and ∂Δ_i lie on ∂ B, there exist integers h_1,…,h_L (as in <cit.>*Lemma 9.1) such that
T=∑_i=1^L h_i [[ Δ_i ]],
where [[ Δ_i ]] is the current induced by Δ_i.
In particular, by (<ref>), we have that h_i∈{-1,0,1}.
Note that each Δ_i has nontrivial intersection with ∂ B∖∂ M, provided that B has sufficiently small radius such that it does not contain any closed minimal surfaces.
As a result, since ∂ T∖∂ M=[[∂Σ̂^j∖∂ M]], we have that h_i∈{0,1}, and h_i=1 if and only if ∂Δ_i∖∂ M⊂∂Σ̂^j.
Let V' and T' be the limits in the sense of varifolds and in the sense of currents, respectively, of {Φ^j_k(1,Σ^j∖Σ̂^j)}_k∈ℕ.
Arguing the same as before we have that T'=∑_i=1^L h_i'
[[Δ_i]] for some h_i'∈{0,1}.
By construction, {Φ^j_k(1,Σ^j)}_k∈ℕ converges in the sense of currents to T+T'=∑_i=1^L(h_i+h_i')[[Δ_i]]. Moreover, since ∂ V^j∖∂ M =Σ^j∩∂ B∖∂ M by part <ref>, we have
∑_i=1^L[[∂Δ_i∖∂ M]]
=[[Σ^j∩∂ B∖∂ M]]
=∂(T+T')∖∂ M
=∑_i=1^L(h_i+h_i')[[∂Δ_i∖∂ M]],
which implies that h_i+h_i'=1 for all i=1,…,L, and therefore V=T and V'=T'.
This proves that V̂^j=V is the varifold induced by the union of some of the connected components of V^j.
Moreover, since ∂ T is the limit in the sense of currents of ∂Σ̂^j and their intersection on ∂ B∖∂ M coincides, we have that V̂^j∩∂ M≠∅ implies Σ̂^j∩∂ M≠∅.
<ref>
By <ref>, V̂^j is the union of some of the connected components of V^j and {V^j}_j∈ converges to Γ as stated in <ref>.
On the one hand, by standard varifold compactness, a subsequence of {V̂^j}_j∈ℕ converges in the sense of varifolds in B to a varifold Γ̂.
On the other hand, given any ball B' that is concentric with B but has smaller radius, statement <ref> implies that a further subsequence of {V̂^j}_j converges smoothly in B' away from finitely many points to ∑_i=1^K m̂_iΓ_i for some m̂_i∈{0,…, m_i}.
In particular, Γ̂=∑_i=1^K m̂_iΓ_i in B'.
Therefore, the multiplicities m̂_i do not depend on the choice of B' and the claim follows.
Below, we prove the tree lifting lemma following the arguments in <cit.>: the general idea is to lift the edges locally in small balls and to show that this can be done “consistently” in different balls.
A new difficulty occurs along edges with leaves on ∂ M.
Claim 1 in our proof deals with the transition from an interior point to a point possibly on the boundary and is analogous to <cit.> about the “continuation of the leaves” (not to be confused with the leaves of a tree).
Let r : M →(0,∞) be the G-invariant function from Proposition <ref>.
By compactness of M, there exists a finite set C⊂ M such that {B_r(z)/2(z) : z∈ C} is a finite covering of M.
Then, given any ball B⊂ M∖ C with radius r<min_z∈ C r(z)/2, our assumptions imply that {Σ^j}_j∈ℕ is G-almost minimizing in B.
Let T be a properly embedded, rooted tree in Γ_i, with vertices
v_0,…,v_n∈Γ_i, edges γ_1,…,γ_n⊂Γ_i and leaves
{ℓ_1,…,ℓ_q}={v_1,…,v_n}∩∂Γ_i.
In particular, every edge has at most one leaf.
Up to a slight perturbation, we may assume that the edges of T do not contain any points in C.
After perturbing the tree slightly near its leaves, we may choose 0<ρ≤ε_0 such that
for every edge γ containing a leaf ℓ the segment γ∩ B_ρ(ℓ) is a geodesic meeting ∂ M orthogonally, and
dist_M(
(⋃_k=1^nγ_k)
∖(⋃_k=1^qB_ρ(ℓ_k)), ∂ M)≥ρ,
and such that for every edge γ there exists a finite set of points x_0,…,x_N on γ (labeled consecutively, where N may depend on γ) with the following properties (cf. Figure <ref>).
* x_0 and x_N are vertices, i. e. the endpoints of the edge γ.
* Denoting the geodesic segment in M between x_α and x_α+1 by [x_α,x_α+1], there is a homotopy between the curves γ and ∑_α=0^N-1[x_α,x_α+1] in U_εΓ_i fixing their endpoints.
* The balls B^α=B_ρ(x_α) are pairwise disjoint and B^α∩∂ M=∅ for all α∈{0,…,N-1}.
* B^α∪ B^α+1 is contained in a ball B^α,α+1 of radius 3ρ
which satisfies B^α,α+1∩ C=∅ so that Σ^j is (G,δ_j,ϵ_j)-almost minimizing in B^α,α+1 for some δ_j,ϵ_j with ϵ_j→0.
* ∂ B^α intersects Σ^j transversely for all α∈{0,…,N} and all j∈.
We choose an interior vertex v=x_0 as root which has an edge γ containing a leaf ℓ=x_N and focus our analysis on this edge for now.
Given j∈ℕ and α∈{0,…,N}, let {Φ_k^j,α}_k∈ℕ⊂ Is_G^δ_j(B^α,Σ^j) be a minimizing sequence of isotopies in the sense of equation (<ref>).
In particular, Φ_k^j,α coincides with the identity on M∖ B^α.
In what follows, we assume α≤ N-1, so that B^α is in the interior of M.
Since B^α∩ B^α+1=∅, the map Φ_k^j,α,α+1 : [0,1]× M→ M given by
Φ_k^j,α,α+1(t,x) ={Φ_k^j,α(min{2t,1},x) if x∈ B^α,
Φ_k^j,α+1(max{2t-1,0},x) if x∈ B^α+1,
x otherwise.
is in Is_G^δ_j(B^α,α+1, Σ^j).
By Lemma <ref> <ref> applied in B^α, a subsequence of {Φ_k^j,α,α+1(1,Σ^j)}_k∈ℕ converges in the sense of varifolds in B^α to a smooth, G-stable minimal surface V^j,α in B^α.
By the same argument, a further subsequence converges in the sense of varifolds in B^α+1 to a smooth, G-stable minimal surface V^j,α+1 in B^α+1
which satisfies the free boundary condition on B^α+1∩∂ M if the latter is nonempty, i. e. in the case α=N-1.
By standard varifold compactness, an even further subsequence of {Φ_k^j,α,α+1(1,Σ^j)}_k converges in the sense of varifolds in B^α,α+1 to some V^j,α,α+1 which coincides with V^j,α in B^α and with V^j,α+1 in B^α+1.
Given any connected component Σ̂^j,α of Σ^j∩ B^α, Lemma <ref> <ref> implies that a further subsequence of {Φ_k^j,α,α+1(1,Σ̂^j,α)}_k converges in the sense of varifolds to the union V̂^j,α of some of the connected components of V^j,α.
By Lemma <ref> <ref>, {V^j,α}_j∈ converges in the sense of varifolds in B^α to m_iΓ_i and in B_ρ/2(x_α) the convergence in smooth away from finitely many points in the singular locus of the group action.
In particular, the surface V̂^j,α∩ B_ρ/2(x_α) is arbitrarily close, smoothly away from finitely many points, to m̂^j,α∈{0,…,m_i} copies of Γ_i∩ B_ρ/2(x_α) provided that j is sufficiently large.
Claim 0. In the notation of the previous paragraph, it is possible to choose the connected component Σ̂^j,α such that m̂^j,α≥1 provided that j is sufficiently large.
Since V^j,α∩ B_ρ/2(x_α) is arbitrarily close (for j large) to m_iΓ_i∩ B_ρ/2(x_α) smoothly away from a finite set 𝒫, each of its connected components is either arbitrarily close to a positive number of copies of Γ_i∩ B_ρ/2(x_α) (smoothly away from 𝒫) or it is contained in a small neighbourhood of 𝒫.
Hence, given Σ̂^j,α and the corresponding V̂^j,α⊂ V^j,α as above, we have that V̂^j,α∩ B_ρ/2(x_α) is either close to m̂^j,α≥ 1 copies of Γ_i∩ B_ρ/2(x_α) (smoothly away from 𝒫) or it is contained in a small neighbourhood of 𝒫.
Assume by contradiction that the latter case holds for every choice of Σ̂^j,α.
Then V^j,α∩ B_ρ/2(x_α) is also contained in a small neighbourhood of 𝒫, being the union of V̂^j,α∩ B_ρ/2(x_α) over all the possible choices of Σ̂^j,α.
This contradicts the fact that V^j,α∩ B_ρ/2(x_α) is close to m_iΓ_i∩ B_ρ/2(x_α), where m_i≥1 and the claim is proved.
Let Σ̂^j,α be as in Claim 0 and let Σ̂^j,α,α+1 be the connected component of Σ^j∩ B^α,α+1 containing Σ̂^j,α.
By Lemma <ref> <ref>, a further subsequence of {Φ_k^j,α,α+1(1,Σ̂^j,α,α+1)}_k converges in the sense of varifolds in B^α+1 to the union V̂^j,α+1 of some of the connected components of V^j,α+1 and an even further subsequence has a varifold limit V̂^j,α,α+1 in B^α,α+1 which coincides with V̂^j,α in B^α and with V̂^j,α+1 in B^α+1.
Claim 1. In B_ρ/2(x_α+1) the surface V̂^j,α+1 is arbitrarily close (smoothly away from finitely many points on the singular locus of the group action) to a positive number of copies of Γ_i∩ B_ρ/2(x_α+1) provided that j is sufficiently large.
By Lemma <ref> <ref>, a subsequence of {V̂^j,α+1}_j∈ℕ converges in the sense of varifolds in B^α+1 to an integer multiple m̂^α+1Γ_i of Γ_i, and in B_ρ/2(x_α+1), away from a finite set 𝒫 of concentration points on the singular locus, the convergence is smooth.
We want to prove that m̂^α+1≥ 1.
Towards a contradiction, assume that m̂^α+1=0.
Then, V̂^j,α+1∩ B_ρ/2(x_α+1) is contained in an arbitrarily small neighbourhood of 𝒫 provided that j is sufficiently large.
Up to a further subsequence we may extract a varifold limit W of {V̂^j,α,α+1}_j introduced in the previous paragraph.
On the one hand,
‖W‖(B_ρ/2(x_α+1)) =0,
because V̂^j,α,α+1 coincides with V̂^j,α+1 in B^α+1.
On the other hand, Claim 0 implies ‖W‖(B_ρ/2(x_α)) ≥^2(Γ_i∩ B_ρ/2(x_α)).
For all j in the subsequence, let k_j∈ℕ be sufficiently large such that
{Φ_k_j^j,α,α+1(1,Σ̂^j,α,α+1)}_j
also converges to W.
By <cit.>*Lemma 3.7, {Φ_k_j^j,α,α+1(1,Σ^j)}_j∈ℕ converges in the sense of varifolds to m_iΓ_i in B^α,α+1.
Hence, W≤ m_iΓ_i∩ B^α,α+1 as varifolds.
Following <cit.>, we obtain ‖W‖(∂ B_τ(z)) = 0 for any ball B_τ(z)⊂ B^α,α+1, which implies that the function
z↦‖W‖(B_ρ/2(z))
is continuous for z varying in the geodesic segment [x_α,x_α+1].
In view of (<ref>) and (<ref>), there exists some z_α∈[x_α,x_α+1] such that
‖W‖(B_ρ/2(z_α))=1/2 ^2(Γ_i∩ B_ρ/2(z_α)).
Moreover, B_ρ/2(z_α)∩∂ M=∅ because ‖W‖(B_ρ/2(z))=0 for all z∈[x_α,x_α+1] at distance less than ρ/2 from ∂ M, since W vanishes in B^α+1.
Here, we also rely on property (<ref>) and on the fact that γ∩ B_ρ(ℓ) is a geodesic meeting ∂ M orthogonally.
In particular,
lim_j→∞^2(Φ_k_j^j,α,α+1(1,Σ̂^j,α,α+1)∩ B_ρ/2(z_α))
=1/2^2(Γ_i∩ B_ρ/2(z_α)),
lim_j→∞^2(Φ_k_j^j,α,α+1(1,Σ^j∖Σ̂^j,α,α+1)∩ B_ρ/2(z_α))
= (m_i-1/2)^2(Γ_i∩ B_ρ/2(z_α)).
We may now conclude as in <cit.> to reach a contradiction.
Indeed, equations (<ref>) and (<ref>) correspond exactly to <cit.>*(4.6–7) and B_ρ/2(z_α) is contained in the interior of M.
The idea is that (after performing a further minimization process as in Lemma <ref>) the two surfaces Φ_k_j^j,α,α+1(1,Σ̂^j,α,α+1)∩ B_ρ/2(z_α) and Φ_k_j^j,α,α+1(1,Σ^j∖Σ̂^j,α,α+1)∩ B_ρ/2(z_α) become close to an integer multiple of Γ_i and this contradicts equations (<ref>) and (<ref>).
The technical details are exactly the same as in <cit.> and therefore not repeated here.
Claim 2. In the case α=N-1, the intersection Σ̂^j,N∩∂ M is nonempty provided that j is sufficiently large.
The free boundary of Γ_i intersects B_ρ/2(x_N) because by assumption x_N is a leaf of the properly embedded tree in question.
In fact, since ρ is small, we may assume that ∂Γ_i∩ B_ρ/2(x_N) consists of a single boundary segment connecting two points on ∂ B_ρ/2(x_N)∖∂ M.
By Claim 1, V̂^j,N∩ B_ρ/2(x_N) is arbitrarily close (smoothly away from finitely many points) to a positive number of copies of Γ_i∩ B_ρ/2(x_N) provided that j is sufficiently large along a subsequence.
In particular, some boundary segment of V̂^j,N∩ B_ρ/2(x_N) is close to the boundary of Γ_i∩ B_ρ/2(x_N).
Thus, being properly embedded, V̂^j,N intersects ∂ M.
Consequently, Σ̂^j,N intersects ∂ M as well by Lemma <ref> <ref>, and the claim follows.
Given the previous three claims, we now lift the edge γ, connecting an interior point v=x_0 to a leaf ℓ=x_N as follows.
We apply Claim 0 for α=0 and obtain (for all sufficiently large j in a subsequence) a connected component Σ̂^j,0 of Σ^j∩ B^0 such that a subsequence of {Φ_k^j,0(1,Σ̂^j,0)}_k∈ℕ has a nonzero varifold limit in B_ρ/2(x_0). We fix a point y_0 in Σ̂^j,0⊂Σ^j.
Then we apply Claim 1 iteratively at every subsequent point x_α+1 with α=0,…,N-1 along γ.
In every step, we obtain a connected component Σ̂^j,α+1 belonging to the same connected component of Σ^j∩ B^α,α+1 as the previously selected Σ̂^j,α such that a subsequence of {Φ_k^j,α+1(1,Σ̂^j,α+1)}_k∈ℕ again has a nonzero varifold limit in B_ρ/2(x_α+1).
In particular, we may choose a point y_α+1∈Σ̂^j,α+1 and a curve γ^j_α+1⊂Σ^j∩ B^α,α+1 connecting the previously selected y_α with y_α+1.
Note that, thanks to Claim 2, we can choose y_N∈Σ̂^j,N∩∂ M.
Hence, defining γ^j as the concatenation of the curves γ^j_0,…,γ^j_N, it is straightforward to check that γ^j is the desired lift of the edge γ (cf. <cit.>*pp. 64).
We now repeat the same process iteratively on all the edges of the tree (e. g. using breadth-first search) except that at each new starting vertex, the connected component of Σ^j and the first point y_0 have already been selected when lifting the preceding edge.
Thereby, the lift of every new edge connects to the previously lifted part of the tree.
This yields a properly embedded tree in Σ^j which is properly homotopic to the initial tree in U_εΓ_i, as desired.
§.§ Proof of the main results
The proof of our main theorems is based on the following result, for which we recall the notions of genus complexity and boundary complexity from Definition <ref> as well as the notation
β_1 for the first Betti number (cf. Appendix <ref>).
Let us assume to be in the setting of Proposition <ref>.
Then, for every sufficiently small ε>0, there exists J_ε∈ℕ such that for all j≥ J_ε there is a G-equivariant surface
Σ̃^j⊂ U_2εΓ
obtained from Σ^j through surgery such that the sequence {Σ̃^j}_j≥ J_ε also converges in the sense of varifolds to Γ and we have the following bounds on the topology:
β_1(Γ) ≤ lim inf_j→∞ β_1(Σ̃^j), 𝔤(Γ) ≤ lim inf_j→∞ 𝔤(Σ̃^j), 𝔟(Γ) ≤ lim inf_j→∞ 𝔟(Σ̃^j).
For the purpose of this proof, we do not distinguish between Γ and its support, i. e. we may regard Γ as a smooth, compact, properly embedded, possibly disconnected surface without taking multiplicity into account.
Given any ε>0, we apply Proposition <ref> to {Σ^j}_j∈ℕ obtaining Σ̃^j from Σ^j for all j≥ J_ε through (equivariant) surgery such that Σ̃^j⊂ U_2εΓ and Σ̃^j∩ U_εΓ=Σ^j∩ U_εΓ.
The new sequence {Σ̃^j}_j≥ J_ε also converges to Γ in the sense of varifolds.
Given any x∈ M, let An∈𝒜𝒩_r(x)(x).
If An⊂ U_εΓ then {Σ̃^j}_j≥ J_ε is clearly G-almost minimizing in An because Σ̃^j coincides with Σ^j in U_εΓ.
In M∖ U_εΓ the sequence {Σ̃^j}_j≥ J_ε converges to zero in the sense of varifolds; hence we can achieve that {Σ̃^j}_j≥ J_ε is G-almost minimizing in every An∈𝒜𝒩_r(x)(x), by possibly reducing r(x)>0 depending on ε.
By <cit.>*Theorem 2A.1, every element of the first homology group can be represented by a closed curve.
Since the first Betti number is the rank of the first homology group,
there exist n=β_1(Γ) closed curves γ_1,…,γ_n in Γ which are linearly independent in the ℤ-module H_1(Γ).
Assuming 2ε≤ε_0, Simon's lifting lemma, Lemma <ref>, yields closed curves γ_1^j,…,γ_n^j⊂Σ̃^j∩ U_2εΓ for any sufficiently large j∈ℕ, such that γ_ℓ^j is homotopic to a multiple of γ_ℓ in U_2εΓ for all ℓ∈{1,…,n}.
In particular, γ_1^j,…,γ_n^j are linearly independent in the ℤ-module H_1(U_2εΓ)≅ H_1(Γ).
Suppose that α^j=k_1γ_1^j+…+k_nγ_n^j vanishes in the ℤ-module H_1(Σ̃^j) for some k_1,…, k_n∈ℤ.
Then α^j also vanishes in the ℤ-module H_1(U_2εΓ) since Σ̃^j⊂ U_2εΓ and we obtain k_1=…=k_n=0.
Consequently, H_1(Σ̃^j) has at least rank n, which concludes the proof of the inequality for the first Betti number β_1.
By Corollary <ref>, the ℤ-module H_1(Γ)/ι(H_1(∂Γ)) has rank m=2𝔤(Γ).
Hence there exist closed curves γ_1,…,γ_m in Γ which are linearly independent in the ℤ-module H_1(Γ)/ι(H_1(∂Γ)) and as before, Simon's lifting lemma yields closed curves γ_1^j,…,γ_m^j⊂Σ̃^j∩ U_2εΓ for any sufficiently large j∈ℕ, such that γ_ℓ^j is homotopic to a multiple of γ_ℓ in U_2εΓ for all ℓ∈{1,…,m}.
Suppose that there exist k_1,…,k_m∈ℤ such that
α^j=k_1γ_1^j+…+k_mγ_m^j vanishes in the ℤ-module H_1(Σ̃^j)/ι(H_1(∂Σ̃^j)).
Then α^j can be represented by an integer linear combination of some boundary components of Σ̃^j.
Since ∂Σ̃^j⊂ U_2εΓ∩∂ M and since U_2εΓ∩∂ M is a union of pairwise disjoint annuli, every boundary component of Σ̃^j is homotopic to an integer multiple of some boundary component of Γ.
Therefore, k_1γ_1+…+k_mγ_m vanishes in H_1(Γ)/ι(H_1(∂Γ)) which implies k_1=…=k_m=0.
Consequently, H_1(Σ̃^j)/ι(H_1(∂Σ̃^j)) has at least rank m, which (again by Corollary <ref>) concludes the proof of the inequality for the genus complexity 𝔤.
It remains to prove the estimate for the boundary complexity 𝔟.
By assumption, ε>0 is sufficiently small such that there is a smooth retraction of U_2εΓ onto the support of Γ=∑_i=1^K m_iΓ_i,
where we recall that Γ_1,…,Γ_K are induced by pairwise disjoint, connected free boundary minimal surfaces in M.
In particular, the sets U_2εΓ_i and U_2εΓ_l are disjoint for i≠ l.
Since Σ̃^j⊂ U_2εΓ, and since restriction respects varifold convergence, it is clear that the surfaces Σ̃^j∩ U_2εΓ_i converge for j→∞ to m_iΓ_i for every i∈{1,…,K}.
By Remark <ref>, 𝔟 is additive with respect to taking unions of connected components.
Therefore, we may assume without loss of generality that Γ is connected.
Otherwise, we prove the inequality 𝔟(Γ_i)≤𝔟(Σ̃^j∩ U_2εΓ_i) for each connected component Γ_i separately and conclude by summation.
(Note that even with the assumption that Γ is connected, it could happen that the surface Σ̃^j is disconnected.)
Let b denote the number of boundary components of Γ.
For each k∈{1,…,b}, let v_k∈∂Γ be a point on the kth boundary component and let T be a properly embedded, rooted tree in Γ whose leaves are exactly given by the set {v_1,…,v_b}.
Such a tree can be constructed e. g. by connecting each v_k to some interior root v_0∈Γ via a smooth embedded curve γ_k⊂Γ.
If j∈ℕ is sufficiently large, then the tree lifting lemma, Lemma <ref>, implies that there exists a properly embedded tree T^j in Σ̃^j which is properly homotopic to T in U_2εΓ.
Let Σ̂^j denote the connected component of Σ̃^j that contains T^j.
By construction, Σ̂^j is contained in U_2εΓ and the 2ε-tubular neighbourhoods of different boundary components of Γ are disjoint.
Since the leaves of all the trees in the homotopy are restricted to these 2ε-tubular neighbourhoods, Σ̂^j must have at least b boundary components for all sufficiently large j.
Thus, recalling Definition <ref>,
𝔟(Γ)=b-1
≤ lim inf_j→∞ β_0(∂Σ̂^j)-1
= lim inf_j→∞ 𝔟(Σ̂^j)
≤ lim inf_j→∞ 𝔟(Σ̃^j).
Let {Σ^j}_j∈ℕ be the min-max sequence from Theorem <ref> and Γ its varifold limit.
By Lemma <ref>, we may apply Theorem <ref> to obtain a sequence {Σ̃^j}_j≥ J_ε such that Σ̃^j is obtained from Σ^j through surgery and such that
β_1(Γ) ≤ lim inf_j→∞ β_1(Σ̃^j), 𝔤(Γ) ≤ lim inf_j→∞ 𝔤(Σ̃^j), 𝔟(Γ) ≤ lim inf_j→∞ 𝔟(Σ̃^j).
Moreover, for all j≥ J_ε, Lemmata <ref> and <ref> imply
β_1(Σ̃^j) ≤ β_1(Σ^j), 𝔤(Σ̃^j) ≤ 𝔤(Σ^j).
Combining (<ref>) with the first and second estimate in (<ref>) completes the proof of Theorem <ref>.
If Σ^j is orientable for all j≥ J_ε then Lemma <ref> implies
𝔤(Σ̃^j)+𝔟(Σ̃^j)≤𝔤(Σ^j)+𝔟(Σ^j),
which combined with the second and third estimate in (<ref>) completes the proof of Theorem <ref>.
§ FREE BOUNDARY MINIMAL SURFACES IN THE UNIT BALL WITH GENUS ZERO
In this section, we choose the Euclidean unit ball 𝔹³={(x,y,z)∈ℝ³ : x^2+y^2+z^2≤1} as ambient manifold M.
Given an oriented line ℓ⊂ℝ³ and an angle α∈ℝ, let R_ℓ^α
denote the rotation of angle α around ℓ.
Given 2≤ n∈ℕ, let 𝔻_n denote the subgroup of Euclidean isometries acting on 𝔹³ generated by the two rotations R_x-axis^π and R_z-axis^2π/n.
We call 𝔻_n the dihedral group of order 2n.
Clearly, 𝔻_n is orientation-preserving, being generated by rotations, and it contains the cyclic group ℤ_n generated by R_z-axis^2π/n as a subgroup.
The goal of this section is to provide a self-contained proof of the following existence result.
For each 2≤ n∈ℕ there exists an embedded, 𝔻_n-equivariant free boundary minimal surface Ξ_n in the Euclidean unit ball 𝔹³ with the following properties.
* Ξ_n has genus zero and exactly n pairwise isometric boundary components.
* The area of Ξ_n is strictly between π and 2π.
* The sequence {Ξ_n}_n≥2 converges in the sense of varifolds to the flat, equatorial disc with multiplicity two as n→∞.
Conjecturally the full symmetry group of Ξ_n is the prismatic group of order 4n (cf. <cit.>).
We prefer to work with the subgroup 𝔻_n because the prismatic group is not orientation-preserving, being generated by the reflection across the plane {y=0} in addition to the two rotations R_x-axis^π and R_z-axis^2π/n.
We also conjecture that Ξ_n coincides for all n with the surface Γ_n^FPZ described in <cit.> which, in turn, coincides with the genus zero free boundary minimal surfaces constructed by Folha–Pacard–Zolotareva <cit.> for all sufficiently large n∈ℕ.
For n=3, Theorem <ref> states the existence of a free boundary minimal trinoid as visualized in Figure <ref>, left image.
For n=2, Theorem <ref> provides a variational construction of an embedded, 𝔻_2-equivariant free boundary minimal annulus in 𝔹³.
We are not aware of any result stating that such an object must be isometric to the critical catenoid even though the uniqueness of the latter has been conjectured and proved under various other symmetry assumptions <cit.>.
The variational proof of Theorem <ref> involves the following steps.
* Design a sweepout of 𝔹³ consisting of 𝔻_n-equivariant surfaces with genus zero, n boundary components and area strictly less than 2π and define its 𝔻_n-equivariant saturation.
* Verify the width estimate and apply equivariant min-max theory to extract a min-max sequence converging in the sense of varifolds to a free boundary minimal surface Γ.
* Control the topology: Show that Γ has genus zero and exactly n boundary components.
* Determine the asymptotics for n→∞.
In <cit.>, Ketover provides the details for step <ref>.
For the sake of completeness, we repeat the construction of the sweepout using similar notation as in <cit.>.
The min-max theory used in step <ref> has also been developed in <cit.>.
Regarding step <ref>, we recall that varifold convergence is too weak to preserve the topology in general, and in this specific case, boundary components could be lost or gained in the limit.
The idea presented in <cit.> is to classify all possible topological changes along the min-max sequence in question and to conclude that they would always result in two equatorial discs contradicting a strict upper area bound.
We provide an alternative, more general approach based on a combination of Theorems <ref> and <ref> and the following structural lemma about arbitrary (not necessarily minimal) equivariant surfaces of genus zero.
Given 3≤ n∈ℕ, let M⊂ℝ³ be any convex, bounded domain with piecewise smooth boundary such that the cyclic group ℤ_n acts on M by isometries.
Let Σ⊂ M be any compact, connected, ℤ_n-equivariant surface of genus zero which is properly embedded and has b∈{2,3,…,n} boundary components.
Then b∈{2,n}.
Moreover, if Σ intersects the singular locus of the ℤ_n-action, then b=n and the group ℤ_n acts simply transitively on the collection of boundary components.
We claim that Σ can intersect the axis of rotation at most twice.
As described e. g. in the proof of <cit.>, every boundary component of Σ can be closed up by gluing in a topological disc while preserving embeddedness and _n-equivariance.
The resulting surface Σ̃ is closed and embedded, thus two-sided, allowing a global unit normal vector field ν which inherits the _n-equivariance.
Let ξ_0⊂^3 denote the singular locus of the _n-action, i. e. the axis of rotation.
If Σ̃ intersects ξ_0 in some point p_0, then ν(p_0)∈ξ_0 since p_0 is fixed by the group action.
Therefore, any intersection of Σ̃ with the axis ξ_0 must be orthogonal and the number of such intersections is well-defined and finite.
Moreover, the number of such intersections is even, i. e. equal to 2j for some integer j≥0, because Σ̃ is compact without boundary.
The quotient Σ̃'=Σ̃/_n is an orientable topological surface without boundary and therefore has Euler characteristic χ(Σ̃')=2-2g' for some nonnegative integer g'.
A variant of the Riemann–Hurwitz formula (see e. g. <cit.>) implies
2=χ(Σ̃) =nχ(Σ̃')-2j(n-1)
=2n-2ng'-2j(n-1)
or equivalently 0=n-ng'-j(n-1)-1 where all variables are nonnegative integers.
Consequently, g'=0 and j=1 which means that Σ̃ intersects ξ_0 exactly twice and Σ intersects ξ_0 at most twice.
Let i∈{0,1,2} be the number of intersections of the original surface Σ with the axis of rotation.
The quotient Σ'=Σ/_n is again an orientable topological surface with Euler characteristic χ(Σ')=2-b' for some nonnegative integer b'.
The Riemann–Hurwitz formula now implies
2-b=χ(Σ) =nχ(Σ')-i(n-1)
=2n-b'n-in+i
or equivalently, b=(b'-(2-i))n+(2-i).
Since n≥3 and (2-i)∈{0,1,2} as shown above, we may complete the proof by distinguishing the following cases.
* b'<(2-i)⇒ b<0, which is absurd.
* b'>(3-i)⇒ b>n, which contradicts our assumption b∈{2,3,…,n}.
* b'=(2-i)⇒ b=(2-i).
In this case the assumption b≥ 2 yields i=0 and b=2.
* b'=(3-i)⇒ b=n+(2-i).
Then the assumption b≤ n yields i=2 and b=n.
In the last case, we also have b'=1 which implies that the quotient Σ/_n has exactly one boundary component and its orbit is ∂Σ.
Consequently, _n acts simply transitively on the connected components of ∂Σ.
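For concreteness, here is the arithmetic of the identity b=(b'-(2-i))n+(2-i) in the two admissible cases, written out for the trinoid value n=3 (a worked check, not part of the proof):

```latex
% n = 3 in b = (b' - (2-i))\,n + (2-i):
\text{case } b' = 2-i:\quad (i,b') = (0,2)
  \;\Rightarrow\; b = (2-2)\cdot 3 + 2 = 2, \\
\text{case } b' = 3-i:\quad (i,b') = (2,1)
  \;\Rightarrow\; b = (1-0)\cdot 3 + 0 = 3 = n.
```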
<ref> Sweepout construction.
For every k∈{1,…,n} let B_ε(p_k)⊂^3 denote the ball of radius ε>0 around the equatorial point
p_k=(cos(2π k/n), sin(2π k/n), 0)
and consider the subsets
D_ε =^3∩{z=0}∖⋃^n_k=1B_ε(p_k),
D_t,ε =(√(1-t^2)D_ε)+(0,0,t).
Given any t∈(-1,1) and any 0<ε<sin(π/n), the set D_t,ε is a topological disc inside ^3
and we may _n-equivariantly connect the two sets D_± t,ε by means of n ribbons.
To be precise, with 0<ε_0≤ t_0≪1 to be chosen, we let ε:[t_0,1)→(0,ε_0] be a continuous function of t such that ε(t)→0 as t→1, and define
Ω_t =⋃_τ∈[-t,t]D_τ ,ε(t), Σ_t =∂Ω_t∖∂^3
for all t∈[t_0,1), where ∂Ω_t refers to the topological boundary of the set Ω_t⊂^3.
Naturally, we define Σ_1 to be the union of great circles on ∂^3 connecting the equatorial points p_1,…,p_n defined in (<ref>) with the north and the south pole
(see Figure <ref>, images 1–2).
The area of each ribbon connecting the two subsets Σ_t∩{z=± t} is bounded from above by 2πε_0t (cf. <cit.>).
Hence, for all t∈[t_0,1) the area of Σ_t satisfies
^2(Σ_t)≤2π(1-t^2+nε_0t).
If we choose ε_0=t_0/(2n) then
^2(Σ_t)≤ (2-t_0^2)π <2π
for all t∈[t_0,1).
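The elementary estimate behind the last display can be spelled out as follows (a short arithmetic check, using only ε_0=t_0/(2n) and t≥ t_0):

```latex
% With \varepsilon_0 = t_0/(2n) and t \ge t_0:
1 - t^2 + n\varepsilon_0 t
  = 1 - t^2 + \tfrac{t_0 t}{2}
  \le 1 - t^2 + \tfrac{t^2}{2}
  = 1 - \tfrac{t^2}{2}
  \le 1 - \tfrac{t_0^2}{2},
% hence \mathcal{H}^2(\Sigma_t) \le 2\pi(1 - t_0^2/2) = (2 - t_0^2)\pi < 2\pi.
```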
Setting p_0=p_n, let
Σ_0=⋃_k=1^n{r(p_k-1+p_k) : r>0}∩^3.
As one decreases t further from t_0 to 0, the idea is to deform Σ_t continuously into Σ_0 without violating the strict upper area bound (<ref>).
This can be achieved by appealing to the catenoid estimate, <cit.>*Proposition 2.1 and Theorem 2.4,
which in our specific case can be carried out explicitly as follows.
Given 0<r<sin(π/n) and 0<h<e^-8n we consider the surfaces
C_s^r,h={(x,y,z)∈^3 : √(x^2+y^2)=rcosh(sz)/cosh(sh), |z|≤ h}
as in <cit.>
which interpolate between the cylinder C_0^r,h with radius r and height 2h and the union C_∞^r,h of two parallel discs with radius r connected by a vertical line segment of length 2h.
For each k∈{1,…,n} and each 0≤ s≤∞ let C_s,k^r,h=C_s^r,h+p_k be the horizontally translated surface centered at the equatorial point p_k.
By <cit.> the slice of maximal area in the family {C_s,k^r,h}_s>0 of surfaces is an unstable catenoid.
In particular, this implies that, if r, h and r/h are sufficiently small, then (cf. <cit.> and <cit.>*Proposition 2.1)
sup_s≥0^2(C_s,k^r,h∩^3)
≤^2(C_∞,k^r,h∩^3)+4π h^2/(-log h).
We are now free to choose t_0=h<e^-8n and s=s_0 such that r/cosh(s_0t_0)=ε(t_0).
For each k∈{1,…,n} we deform Σ_t_0∩{(x,y,z)∈^3 : dist((x,y,0),p_k)<r}
into a copy of C_s_0,k^r,t_0∩^3.
In fact, the two surfaces in question are arbitrarily close provided that both ε(t_0)∈(0,ε_0] (and thus 1/s_0>0) are sufficiently small, such that we can continuously deform one into the other without significantly increasing the area.
This deformation (performed simultaneously and equivariantly for all k∈{1,…,n}) defines Σ_t for t∈[2t_0/3,t_0] (see Figure <ref>, images 3–4).
As one decreases t further from 2t_0/3 to t_0/3, we decrease s from s_0 to 0 and define Σ_t accordingly such that (cf. <cit.>)
^2(Σ_t) ≤^2(Σ_t_0)+4π t_0^2 n/(-log t_0)≤(2- t_0^2/2)π
for all t∈[t_0/3,2t_0/3].
While t decreases further from t_0/3 to 0, it is now straightforward to deform Σ_t into Σ_0 with area decreasing monotonically to zero (see Figure <ref>, images 5–6).
To conclude the construction of the sweepout {Σ_t}_t∈[0,1] we regularize the slices equivariantly without violating the strict 2π upper area bound such that Σ_t is a smooth surface for 0<t<1.
<ref> Width estimate.
Given the sweepout {Σ_t}_t∈[0,1] constructed in <ref>, we
consider the width W_Π of the _n-saturation Π, as introduced in Definition <ref>, and claim that
π<W_Π<2π.
The upper bound in (<ref>) follows directly from the fact that ^2(Σ_t)<2π for all t∈[0,1].
For the lower bound we define a family {F_t^Σ}_t∈[0,1] of _n-equivariant subsets of ^3 with finite perimeter such that Σ_t is the relative boundary of F_t^Σ in ^3 and such that F_t^Σ does not contain any of the points p_1,…,p_n defined in (<ref>) for all t∈(0,1).
(In fact, F_t^Σ roughly corresponds to the set Ω_t defined explicitly in (<ref>), at least for all t∈[t_0,1).)
We recall that given any {Λ_t}_t∈[0,1]∈Π there exists a smooth map Φ[0,1]×^3→^3, where Φ(t,·) is a diffeomorphism which commutes with the _n-action for all t∈[0,1] and coincides with the identity for all t∈{0,1}, such that Λ_t=Φ(t,Σ_t) for all t∈[0,1].
Then, F_t=Φ(t,F_t^Σ) is _n-equivariant such that ^3(F_0)=0 and ^3(F_1)=^3(^3), and such that for every t∈(0,1)
* Λ_t∖∂^3=∂ F_t∖∂^3, i. e. Λ_t is the relative boundary of F_t in ^3;
* (s→ t)⇒^3(F_s△ F_t)→0, where we recall that F_s△ F_t=(F_s∖ F_t)∪(F_t∖ F_s);
* {p_1,…,p_n}⊂^3∖ F_t.
In particular, the function [0,1]∋ t↦^3(F_t) is continuous and there exists t_*∈(0,1) such that ^3(F_t_*)=(1/2)^3(^3).
By the isoperimetric inequality (cf. <cit.>, <cit.>), we obtain ^2(Λ_t_*)≥π and hence W_Π≥π.
To prove that the lower bound on the width is strict, we recall the stability of the isoperimetric inequality
(e. g. from <cit.>),
which implies that there exists δ_0>0 such that if ^2(Λ_t_*)<π+δ_0,
then there exists a half-ball Ω={p∈^3 : p· v≥0} given by some v∈^2 satisfying ϕ(v)=± v for all ϕ∈_n, such that ^3(F_t_*△Ω)≤π/6.
If n≥3 then necessarily v=±(0,0,1).
In the case n=2 we could additionally have v=±(1,0,0) or v=±(0,1,0).
In any case, we can find some φ∈_n such that φ(v)=-v or equivalently, φ(Ω)=^3∖Ω.
Recalling that the set {p_1,…,p_n}⊂^3∖ F_t_* is fixed under the action of _n, we necessarily have φ(F_t_*)=F_t_*.
Since φ^3→^3 is injective,
φ(F_t_*△Ω)
=(φ(F_t_*)∖φ(Ω))∪(φ(Ω)∖φ(F_t_*))
=(F_t_*∖Ω^∁)∪(Ω^∁∖ F_t_*)
=(F_t_*∩Ω)∪(Ω^∁∩ F_t_*^∁)
=(F_t_*∪Ω^∁)∩(Ω∪ F_t_*^∁)
=(Ω∖ F_t_*)^∁∩(F_t_*∖Ω)^∁
=(F_t_*△Ω)^∁,
where A^∁=^3∖ A for any A⊂^3.
Since φ^3→^3 is also an isometry, we conclude
^3(F_t_*△Ω)=^3((F_t_*△Ω)^∁),
which leads to the contradiction
(4/3)π=^3(^3)=^3(F_t_*△Ω)+^3((F_t_*△Ω)^∁)=2^3(F_t_*△Ω)≤(1/3)π.
Consequently, we must have ^2(Λ_t_*)≥π+δ_0 and since {Λ_t}_t∈[0,1]∈Π is arbitrary, the claim W_Π≥π+δ_0 follows.
The stability of the isoperimetric inequality has also been used in <cit.> and in <cit.> to prove a strict width estimate.
However, in both cases every slice Λ_t of the sweepouts in question divides ^3 into two sets F_t and F_t^∁ of equal volume and – unlike in our case – there exists φ∈_n such that φ(F_t)=F_t^∁.
Therefore, the details in <ref> differ from those in the aforementioned references.
By <cit.> the lower bound W_Π>0=max{^2(Σ_0),^2(Σ_1)} suffices to extract a min-max sequence {Σ^j}_j∈ consisting of _n-equivariant surfaces converging in the sense of varifolds to m_n, where _n is a smooth, properly embedded, compact, connected free boundary minimal surface in ^3 and where the multiplicity m is a positive integer.
Moreover, m^2(_n)=W_Π.
<ref> Topological control.
In step <ref> we proved the strict inequalities π<m^2(_n)<2π.
Since any free boundary minimal surface in ^3 has at least area π by <cit.>, we directly obtain m=1.
Moreover, the strict lower bound π<^2(_n) implies that _n is not isometric to the flat, equatorial disc.
By <cit.>, the equatorial disc is the only free boundary minimal disc in ^3 up to ambient isometries; hence _n is not a topological disc.
Being properly embedded in ^3, the surface _n is orientable and we have (_n)=(_n)≤ 0 by Theorem <ref>, recalling that every surface in our sweepout has genus zero.
Thus, _n has genus zero and since it is not a topological disc, _n must have at least two boundary components.
By Theorem <ref> we have
(_n)≤ n-1
where we used that (_n)=0=(Σ^j) and (Σ^j)=β_0(∂Σ^j)-1=n-1 for all j∈.
Consequently, _n (being connected) has at most n boundary components.
Lemma <ref> then implies that the number of boundary components for _n is either 2 or n.
Ruling out the annulus.
Towards a contradiction, suppose that _n has only two boundary components.
In this case, Lemma <ref> implies that _n is disjoint from the singular locus ξ_0=^3∩{x=y=0} of the _n-action.
Let ε>0 be such that U_3ε_n is still disjoint from ξ_0.
For each j≥ J_ε, let Σ̃^j⊂ U_ε_n be the surface obtained from Σ^j through surgery as constructed in Theorem <ref>.
Note that {Σ̃^j}_j≥ J_ε inherits the property of being _n-almost minimizing in sufficiently small annuli from {Σ^j}_j∈, as shown in the beginning of the proof of Theorem <ref>.
Let M be the smooth ambient manifold which is obtained by regularizing the quotient ^3/_n in an ε-neighbourhood of the singular locus ξ_0.
The regularization at the poles can be done such that ∂ M is strictly mean convex.
Since the surfaces _n and Σ̃^j are _n-equivariant and disjoint from U_εξ_0,
their quotients _n=_n/_n and Σ̂^j=Σ̃^j/_n are smooth, properly embedded surfaces in M.
We claim that the sequence {Σ̂^j}_j≥ J_ε converges in the sense of varifolds to _n as j→∞ and is _2-almost minimizing, where _2 is the action of the symmetry group _n reduced to the regularization M of the quotient B^3/_n.
Indeed, since Σ̃^j does not intersect the singular locus of _n, for any annulus An' with sufficiently small radius in M, there exists an annulus An in ^3 such that Σ̃^j∩An is isometric to Σ̂^j∩An'.
Theorem <ref> then yields the contradiction
1=(Ẑ_n)≤lim inf_j→∞(Σ̂^j)=0.
Consequently, _n has exactly n pairwise isometric boundary components and by Lemma <ref>, the cyclic group _n acts simply transitively on their collection.
<ref> Asymptotic behaviour.
Let d(n) denote the maximal distance a point in _n can have from the flat, equatorial disc.
In fact, <cit.> implies that d(n) is attained by a point q_0∈∂_n.
The dihedral symmetry of _n implies that its n pairwise isometric boundary components must intersect the equator of ^3.
The area estimate from above and <cit.> imply ^1(∂_n)=2^2(_n)<4π for all 2≤ n∈.
Hence, every single boundary component of ∂_n has length less than 4π/n.
In particular, q_0 is contained in a curve of length less than 4π/n intersecting the equatorial disc.
As a result, we obtain that d(n)<4π/n→0 as n→∞.
Consequently, any subsequence of {_n}_n converges in the sense of varifolds to the equatorial disc with multiplicity m∈.
It remains to prove that m=2.
Since π<^2(_n)<2π for all n, we necessarily have m∈{1,2}.
Suppose a subsequence of {_n}_n converges to the equatorial disc with multiplicity one.
Then for any ε>0 there exists n_ε∈ such that ^2(_n_ε)<π+ε.
Choosing ε>0 as given by <cit.>, we obtain that _n_ε is isometric to the equatorial disc, contradicting the fact that _n_ε has n_ε boundary components.
Therefore, any subsequence of {_n}_n converges to the equatorial disc with multiplicity m=2.
§ RELATING TOPOLOGICAL INVARIANTS
Given any topological space X, the kth Betti number β_k(X) is defined as the rank of the kth homology group H_k(X) and the Euler characteristic χ(X) coincides with the alternating sum of the Betti numbers (cf. <cit.>).
In this appendix, we focus on the topological space formed by a compact surface Σ and determine how the Euler characteristic χ, the first Betti number β_1 and the topological complexities and defined in Definition <ref> are related.
Let Σ be a compact, connected surface with genus g≥0 and b≥0 boundary components.
Then, its first homology group, Betti number and Euler characteristic depend as follows on g, b and the orientability.
    Σ                            H_1(Σ)       β_1(Σ)     χ(Σ)
    orientable and closed        ^2g          2g         2-2g
    nonorientable and closed     ^g-1×_2      g-1        2-g
    orientable with boundary     ^2g+b-1      2g+b-1     2-2g-b
    nonorientable with boundary  ^g+b-1       g+b-1      2-g-b
We have β_0(Σ)=1 because Σ is connected and β_k(Σ)=0 for all k≥3 because Σ is two-dimensional.
Moreover, β_2(Σ)=1 if Σ is orientable without boundary and β_2(Σ)=0 otherwise. (Note that if Σ has nonempty boundary, then it does not enclose a volume.)
Hence,
χ(Σ) =
2-β_1(Σ) if Σ is orientable without boundary,
1-β_1(Σ) otherwise.
The computation of β_1(Σ) and H_1(Σ) in the cases where Σ is closed can be found e. g. in <cit.>*Chapter 4 Proposition 5.1.
Given a surface Σ with nonempty boundary, let Σ̂ be the closed surface which we obtain from Σ by closing up each of its b boundary components by gluing in a topological disc.
Since the Euler characteristic of a surface is the alternating sum of the number of vertices, edges and faces in any suitable triangulation, we have χ(Σ)=χ(Σ̂)-b and by (<ref>)
β_1(Σ)=1-χ(Σ)=1-χ(Σ̂)+b
=
2g+b-1 if Σ is orientable,
g+b-1 if Σ is nonorientable.
This completes the computation of the Euler characteristic and the rank of the first homology group of a surface with boundary.
The homology group itself can be determined as follows.
An orientable surface Σ with genus g∈ and 1≤ b∈ boundary components is homeomorphic to a 4g-gon, where the edges are identified according to the cyclic labeling
a_1,b_1,a_1^-1,b_1^-1,…,a_g,b_g,a_g^-1,b_g^-1
(see <cit.>*Chapter 4 Section 5.3) and where b pairwise disjoint topological discs are removed from the interior.
After the identification, the curves a_1,b_1,a_2,b_2,…,a_g,b_g are closed and have a point x_0∈Σ in common.
Let c_1,…,c_b-1 be closed curves based at x_0 and going around b-1 out of the b boundary components.
Then, the union of the 2g+b-1 curves a_1,b_1,…,a_g,b_g,c_1,…,c_b-1 is a deformation retract of Σ.
This implies that H_1(Σ) is the free -module of rank 2g+b-1, namely H_1(Σ)≅^2g+b-1.
In the nonorientable case, Σ is homeomorphic to a 2g-gon, where the edges are identified as in <cit.>*Chapter 4 Section 5.4 and where b pairwise disjoint topological discs are removed.
The conclusion that H_1(Σ)≅^g+b-1 then follows similarly as in the orientable case.
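As a quick consistency check (a worked instance of the proposition, with the genus and boundary data of the surfaces from Theorem <ref>):

```latex
% Orientable, genus g = 0, with b = n boundary components:
\chi(\Sigma) = 2 - 2g - b = 2 - n, \qquad
\beta_1(\Sigma) = 2g + b - 1 = n - 1, \qquad
H_1(\Sigma) \cong \mathbb{Z}^{\,n-1}.
```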
Let Σ be any compact, possibly disconnected surface and let c_𝒪 respectively c_𝒩 be the number of its orientable respectively nonorientable connected components.
Recalling the notation introduced in Definition <ref>, the Euler characteristic of Σ is equal to
χ(Σ)=2c_𝒪+c_𝒩-2(Σ)-β_0(∂Σ).
For every orientable connected component Σ̂ of Σ, Proposition <ref> implies
χ(Σ̂)
=2-2(Σ̂)-β_0(∂Σ̂)
=2-2(Σ̂)-β_0(∂Σ̂),
and for every nonorientable connected component Σ̂ of Σ, we have χ(Σ̂)
=2-(Σ̂)-β_0(∂Σ̂)
=1-2(Σ̂)-β_0(∂Σ̂).
Since the Euler characteristic χ, the genus complexity and the number of boundary components are all additive with respect to taking the union of different connected components, the claim follows from (<ref>) and (<ref>) by summation.
Let Σ be any compact, possibly disconnected surface.
Then its first Betti number β_1(Σ) coincides with the sum of 2(Σ)+(Σ) and the number of its nonorientable connected components with nonempty boundary.
Given any connected component Σ̂ of Σ, Proposition <ref> and Definition <ref> imply:
* If Σ̂ is orientable and closed then
β_1(Σ̂)=2(Σ̂)=2(Σ̂)=2(Σ̂)+(Σ̂).
* If Σ̂ is orientable with boundary then
β_1(Σ̂)=2(Σ̂)+β_0(∂Σ̂)-1=2(Σ̂)+(Σ̂).
* If Σ̂ is nonorientable and closed then
β_1(Σ̂)=(Σ̂)-1=2(Σ̂)=2(Σ̂)+(Σ̂).
* If Σ̂ is nonorientable with boundary then
β_1(Σ̂)=(Σ̂)+β_0(∂Σ̂)-1=2(Σ̂)+(Σ̂)+1.
As in the proof of Corollary <ref>, the claim follows by additivity with respect to taking the union of different connected components.
Given any compact surface Σ with nonempty boundary, let ι H_1(∂Σ)→ H_1(Σ) be the map induced by the inclusion ∂Σ↪Σ.
Then the quotient H_1(Σ)/ι(H_1(∂Σ)) has rank 2(Σ).
Without loss of generality we may assume that Σ is connected.
Let g≥0 and b≥1 be the genus and the number of boundary components of Σ respectively.
By <cit.>, the relative homology groups H_k(Σ,∂Σ) fit into the long exact sequence
H_2(Σ) → H_2(Σ,∂Σ) ↪ H_1(∂Σ) →^ι H_1(Σ) → H_1(Σ,∂Σ) → H_0(∂Σ) → H_0(Σ).
By assumption, H_2(Σ)≅0 and H_1(∂Σ)≅^b.
If Σ is orientable, then H_2(Σ,∂Σ)≅.
Since (<ref>) is exact,
the kernel of ι has rank 1 and the image of ι has rank b-1.
Recalling H_1(Σ)≅^2g+b-1 from Proposition <ref>, we obtain
that H_1(Σ)/ι(H_1(∂Σ)) has rank 2g.
If Σ is nonorientable, then H_2(Σ,∂Σ)≅0 and ι is injective.
In this case, H_1(Σ)≅^g+b-1 by Proposition <ref>, and we obtain that H_1(Σ)/ι(H_1(∂Σ)) has rank g-1.
In either case, the rank of the quotient coincides with 2(Σ) by Definition <ref>.
Nonparametric Estimation of Large Spot Volatility Matrices for High-Frequency Financial Data

Ruijun Bu (Management School, University of Liverpool, UK), Degui Li (Department of Mathematics, University of York, UK), Oliver Linton (Faculty of Economics, University of Cambridge, UK; corresponding author, <obl20@cam.ac.uk>), Hanchao Wang (Zhongtai Securities Institute for Financial Studies, Shandong University, China)

This version: August 1, 2023
Abstract
In this paper, we consider estimating spot/instantaneous volatility matrices
of high-frequency data collected for a large number of assets. We first
combine classic nonparametric kernel-based smoothing with a generalised
shrinkage technique in the matrix estimation for noise-free data under a
uniform sparsity assumption, a natural extension of the approximate sparsity
commonly used in the literature. The uniform consistency property is derived
for the proposed spot volatility matrix estimator with convergence rates
comparable to the optimal minimax one. For the high-frequency data
contaminated by microstructure noise, we introduce a localised
pre-averaging estimation method that reduces the effective magnitude of the noise.
We then use the estimation tool developed in the noise-free scenario, and derive the uniform
convergence rates for the developed spot volatility matrix estimator. We further combine the kernel smoothing with the shrinkage technique to estimate the time-varying volatility matrix of the high-dimensional noise
vector. In addition, we consider large spot volatility matrix estimation in time-varying factor models with observable risk factors and derive the uniform convergence property. We provide numerical studies including simulation and empirical application to examine the performance of the proposed estimation methods in finite samples.
Keywords: Brownian semi-martingale, Factor model, Kernel smoothing,
Microstructure noise, Sparsity, Spot volatility matrix, Uniform consistency.
§ INTRODUCTION
Modelling high-frequency financial data is one of the most important topics
in financial economics and has received increasing attention in recent
decades. Continuous-time econometric models such as the Itô
semimartingale are often employed in the high-frequency data analysis. One
of the main components in these models is the volatility function or matrix. In the low-dimensional setting
(with a single or a small number of assets), the realised volatility is often used to estimate the integrated volatility over
a fixed time period <cit.>. In practice,
it is not uncommon that the high-frequency financial data are contaminated
by the market microstructure noise, which leads to biased realised
volatility if the noise is ignored. Hence,
various modification techniques such as the two-scale, pre-averaging and
realised kernel have been introduced to account for the microstructure noise
and produce consistent volatility estimation <cit.>. <cit.>, <cit.> and <cit.> provide
comprehensive reviews for estimating volatility with high-frequency
financial data under various settings.
In practical applications, financial economists often have to deal with large amounts of high-frequency financial data collected for a large number of assets. A key issue is to estimate the large volatility structure of these assets, which has applications in various areas such as optimal portfolio choice and risk management. Partly motivated by
developments in large covariance matrix estimation for low-frequency data in
the statistical literature, <cit.>, <cit.> and <cit.>
estimate the large volatility matrix under an approximate sparsity
assumption <cit.>; <cit.> and <cit.> study large
volatility matrix estimation using the large-dimensional random matrix
theory <cit.>; and <cit.> propose a nonparametric
eigenvalue-regularised integrated covariance matrix for high-dimensional
asset returns. Given that there often exists co-movement between a large
number of assets and the co-movement is driven by some risk factors which
can be either observable or latent, <cit.>, <cit.>, <cit.>
extend the methodologies developed by <cit.> to estimate the large volatility matrix by imposing a
continuous-time factor model structure on the high-dimensional and
high-frequency financial data, and <cit.> study the principal component
analysis of high-frequency data and derive the asymptotic distribution for
the realised eigenvalues, eigenvectors and principal
components.
The estimation methodologies in the aforementioned literature often rely on the
realised volatility (or covariance) matrices, measuring the integrated
volatility structure over a fixed time interval. In practice, it is often
interesting to further explore the actual spot/instantaneous volatility
structure and its dynamic change over a certain time interval, which is a
particularly important measurement for the financial assets when the market
is in a volatile period (say, the global financial crisis or COVID-19
outbreak). For a single financial asset, <cit.> and <cit.>
introduce a kernel-based nonparametric method to estimate the spot
volatility function and establish its asymptotic properties including the
point-wise and global asymptotic distribution theory and uniform
consistency. For the noise-contaminated high-frequency data, <cit.>
combine the two-scale realised volatility with the kernel-weighted technique
to estimate the spot volatility, whereas <cit.> propose a
kernel-weighted pre-averaging spot volatility estimation method. Other
nonparametric spot volatility estimation methods can be found in <cit.>
and <cit.>. It seems straightforward to extend this local
nonparametric method to estimate the spot volatility matrix for a small
number of assets. However, a further extension to the setting with vast
financial assets is non-trivial. There is virtually no work on estimating
the large spot volatility matrix except <cit.>, which considers estimating
large spot volatility matrices and their integrated versions under the
continuous-time factor model structure for noise-free high-frequency data.
The main methodological and theoretical contributions of this paper are summarised as follows.
* Large spot volatility matrix estimation with noise-free high-frequency data. We use the nonparametric kernel-based smoothing method to estimate the volatility and co-volatility
functions as in <cit.> and <cit.>, and then apply a generalised
shrinkage to off-diagonal estimated entries. With small off-diagonal entries
forced to be zeros, the resulting large spot volatility matrix estimate
would be non-degenerate with stable performance in finite samples. We derive the consistency property for the proposed spot volatility matrix estimator uniformly over the entire time interval under a uniform sparsity
assumption, which is also adopted by <cit.>, <cit.> and <cit.> in the low-frequency data setting. In particular, the derived
uniform convergence rates are comparable to the optimal minimax rate in
large covariance matrix estimation <cit.>. The number of
assets is allowed to be ultra large in the sense that it can grow at an
exponential rate of 1/Δ with Δ being the sampling frequency.
* Large spot volatility matrix estimation with noise-contaminated
high-frequency data and time-varying noise volatility matrix estimation. When the high-frequency data are contaminated by the microstructure noise, we extend <cit.>'s localised pre-averaging estimation method to the high-dimensional data setting. Specifically, we first pre-average the log price data via a kernel
filter and then apply the same estimation method to the kernel fitted
high-frequency data (at pseudo-sampling time points) as in the noise-free
scenario. The microstructure noise vector is assumed to be heteroskedastic, with the time-varying covariance matrix satisfying the uniform sparsity assumption. We show that the existence of microstructure noise slows down the uniform convergence rates; see Theorem <ref>. Furthermore, we combine the kernel
smoothing with generalised shrinkage to estimate the time-varying noise
volatility matrix and derive its uniform convergence property. To the best of our knowledge, there is virtually no work on large time-varying noise volatility matrix estimation for high-frequency data.
* Large spot volatility matrix estimation with risk factors. Since the uniform sparsity assumption is often too restrictive, we relax this restriction in Section <ref> and consider large spot volatility matrix estimation in the time-varying factor model at high frequency, i.e., a large number of asset prices are driven by a small number of observable common factors. By imposing the sparsity restriction on the spot idiosyncratic volatility matrix, we obtain the so-called “low-rank plus sparse" spot volatility structure. A similar structure (with constant betas) is adopted by <cit.> and <cit.> in estimation of large integrated volatility matrices. We use the kernel smoothing method to estimate the spot volatility and covariance of the observed asset prices and factors as well as the time-varying betas, and apply the shrinkage technique to the estimated spot idiosyncratic volatility matrix. We derive the uniform convergence property of the developed matrix estimates, partly extending the point-wise convergence property in <cit.>. The developed methodology and theory can be further modified to tackle the noise-contaminated high-frequency data.
We argue that all three of the scenarios we consider above may be practically relevant. Microstructure noise is considered important at the very highest sampling frequencies, whereas researchers working with five-minute data, say, often ignore the noise. At lower frequencies there is a lot of comovement in returns, and the factor model is designed to capture that comovement, whereas at ultra-high frequencies comovement is less of an issue; indeed, under the so-called Epps effect this comovement shrinks to zero with the sampling frequency.
The rest of the paper is organised as follows. In Section <ref>, we
estimate the large spot volatility matrix in the noise-free high-frequency
data setting and give the uniform consistency property. In Section <ref>
, we extend the methodology and theory to the noise-contaminated data
setting and further estimate the time-varying noise volatility matrix. Section <ref> considers the large spot volatility matrix with systematic factors. Section <ref> reports the
simulation studies and Section <ref> provides an empirical application. Section <ref> concludes the paper. Proofs of the main theoretical results are available in Appendix A. The supplementary document contains proofs of some technical lemmas and propositions and discussions on the spot precision matrix estimation and the asynchronicity issue. Throughout the
paper, we let ‖·‖_2 be the Euclidean norm of a vector; and for
a d× d matrix 𝐀=(A_ij)_d× d, we let ‖𝐀‖ and ‖𝐀‖_F be the matrix spectral norm and Frobenius norm, |𝐀|_1=∑_i=1^d∑_j=1^d |A_ij|, ‖𝐀‖_1=max_1≤ j≤ d∑_i=1^d |A_ij|, ‖𝐀‖_∞,q=max_1≤ i≤ d∑_j=1^d |A_ij|^q and ‖𝐀‖_max=max_1≤ i≤ dmax_1≤ j≤ d |A_ij|.
§ ESTIMATION WITH NOISE-FREE DATA
Suppose that 𝐗_t=(X_1,t,⋯,X_p,t)^^⊺ is a p-variate Brownian semi-martingale solving the following stochastic
differential equation:
d𝐗_t=μ_tdt+σ_td𝐖
_t,
where 𝐖_t=(W_1,t,⋯,W_p,t)^^⊺ is a
p-dimensional standard Brownian motion, μ
_t=(μ_1,t,⋯,μ_p,t)^^⊺ is a p-dimensional drift
vector, and σ_t=(σ_ij,t)_p× p
is a p× p matrix. The spot volatility matrix of 𝐗_t is
defined as
Σ_t=(Σ_ij,t)_p× p=σ_tσ_t^^⊺.
Our main interest lies in estimating Σ_t when the size
p is large. As in <cit.> and <cit.>, we assume that the true
spot volatility matrix satisfies the following uniform sparsity condition: {Σ_t: 0≤ t≤ T}∈𝒮
(q,ϖ(p), T), where
𝒮(q,ϖ(p), T)={Σ_t=[Σ_ij,t]_p× p, t∈[
0, T] | sup_0≤ t ≤ T‖Σ
_t‖_∞,q≤Λϖ(p)},
where 0≤ q<1, ϖ(p) is larger than a positive constant, T is a fixed positive number and Λ is a positive
random variable satisfying 𝖤[Λ]≤ C_Λ<∞. This
is a natural extension of the approximate sparsity assumption
<cit.>. Section <ref> below will relax this assumption and consider estimating large spot volatility matrices with systematic factors. The asset prices are assumed to be collected
over a fixed time interval [0,T] at 0,Δ,2Δ,⋯,nΔ,
where Δ is the sampling frequency and n=⌊ T/Δ⌋
with ⌊·⌋ denoting the floor function. In the main text, we
focus on the case of equidistant time points in the high-frequency data
collection. The asynchronicity issue will be discussed in Appendix C.2 of the supplement.
For each 1≤ i,j≤ p, we estimate the spot co-volatility Σ_ij,t by
Σ_ij,t=∑_k=1^n K_h^∗(t_k-t)Δ X_i,kΔ X_j,k
with
K_h^∗(t_k-t)=K_h(t_k-t)/[Δ∑_l=1^nK_h(t_l-t)],
where t_k=kΔ, K_h(u)=h^-1K(u/h), K(·) is a kernel function,
h is a bandwidth shrinking to zero and Δ
X_i,k=X_i,t_k-X_i,t_k-1. The use of K_h^∗(t_k-t) rather than K_h(t_k-t) in the estimation (<ref>) is to correct a constant bias when t is close to the boundary points 0 and T. A naive method of estimating the spot
volatility matrix Σ_t is to directly use Σ_ij,t to form an estimated matrix. However, this
estimate often performs poorly in practice when the number of assets is very
large (say, p>n). To address this issue, a commonly-used technique is to
apply a shrinkage function to Σ_ij,t when i≠ j,
forcing very small estimated off-diagonal entries to be zeros. Let s_ρ(·) denote a shrinkage function satisfying the following three
conditions: (i) | s_ρ(u)|≤| u| for u∈ℛ; (ii) s_ρ(u)=0 if | u|≤ρ; and (iii) |
s_ρ(u)-u|≤ρ, where ρ is a user-specified tuning
parameter. With the shrinkage function, we construct the following
nonparametric estimator of Σ_t:
Σ_t=(Σ_ij,t^s)_p
× p with Σ_ij,t^s=s_ρ_1(t)(Σ_ij,t)I(i≠
j)+Σ_ii,t I(i=j),
where ρ_1(t) is a tuning parameter which is allowed to change over t
and I(·) denotes the indicator function. Section <ref> discusses the choice of ρ_1(t), ensuring that Σ_t is positive definite in finite samples. Our estimation method of the spot volatility matrix can be seen as a natural extension of the kernel-based large sparse covariance matrix estimation <cit.> from the low-frequency data setting to the high-frequency one. We next give some technical assumptions which are
needed to derive the uniform convergence property of Σ_t.
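Before stating these assumptions, we give a minimal Python sketch of the kernel estimator (<ref>) combined with the generalised shrinkage (<ref>). It is an illustration under simplifying assumptions of our own — an equidistant grid, an Epanechnikov kernel, soft thresholding as s_ρ, and the correlation-based threshold ρ(Σ_ii,tΣ_jj,t)^1/2 used later in the numerical studies — not a prescribed implementation:

```python
import numpy as np

def kernel_weights(times, t, h):
    """Boundary-corrected weights K_h^*(t_k - t): K_h(t_k - t) divided by
    Delta * sum_l K_h(t_l - t); assumes an equidistant grid t_k = k*Delta,
    so the h^{-1} factor of K_h cancels in the ratio."""
    u = (times - t) / h
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)  # Epanechnikov
    delta = times[1] - times[0]
    return K / (delta * K.sum())

def shrink(S, rho):
    """Soft thresholding s_rho applied entry-wise at rho * (S_ii S_jj)^{1/2};
    diagonal entries are kept unshrunk, as in the definition of Sigma_t^s."""
    d = np.sqrt(np.diag(S))
    thr = rho * np.outer(d, d)
    out = np.sign(S) * np.maximum(np.abs(S) - thr, 0.0)
    np.fill_diagonal(out, np.diag(S))
    return out

def spot_vol_matrix(dX, times, t, h, rho=None):
    """Spot volatility matrix at time t from the (n x p) increment matrix dX:
    sum_k K_h^*(t_k - t) dX_k dX_k', optionally followed by shrinkage."""
    w = kernel_weights(times, t, h)
    S = (dX * w[:, None]).T @ dX
    return S if rho is None else shrink(S, rho)
```

Note that soft thresholding satisfies the three requirements on s_ρ: |s_ρ(u)|≤|u|, s_ρ(u)=0 for |u|≤ρ, and |s_ρ(u)-u|≤ρ.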
(i) {μ_i,t} and {σ_ij,t} are adapted locally bounded processes with continuous
sample path.
(ii) With probability one,
min_1≤ i≤ pinf_0≤ s≤ TΣ_ii, s>0, min_1≤
i≠ j≤ pinf_0≤ s≤ TΣ_ij,s^∗>0,
where Σ_ij, s^∗=Σ_ii, s+Σ_jj, s+2Σ_ij, s.
For the spot covariance process {Σ_ij,t}, there exist γ∈(0,1) and B(t,ϵ), a positive random function slowly
varying at ϵ=0 and continuous with respect to t, such that
max_1≤ i,j≤ p|Σ_ij,t+ϵ-Σ_ij,t|≤
B(t,ϵ)|ϵ|^γ+o(|ϵ|^γ), ϵ→0.
(i) The kernel K(·) is a bounded
and Lipschitz continuous function with a compact support [-1,1]. In
addition, ∫_-1^1 K(u)du=1.
(ii) The bandwidth h satisfies that h→0 and h/Δlog (p∨Δ^-1)→∞.
(iii) Let the time-varying tuning parameter ρ_1(t) in the
generalised shrinkage be chosen as
ρ_1(t)=M(t)ζ_Δ,p, ζ_Δ,p=h^γ+[Δlog (p∨Δ^-1)/h]^1/2,
where γ is defined in (<ref>) and M(t) is a positive function satisfying that
0<C̲_M≤inf_0≤ t≤ T M(t)≤sup_0≤ t≤ T M(t)≤C̄_M<∞.
Assumption <ref> imposes some mild restrictions on
the drift and volatility processes. By a typical localisation procedure as
in Section 4.4.1 of <cit.>, the local boundedness condition in
Assumption <ref>(i) can be strengthened to a boundedness condition over the entire
time interval, i.e., with probability one,
max_1≤ i≤ psup_0≤ s≤ T |μ_i,s|≤ C_μ<∞, max_1≤ i≤ psup_0≤ s≤ TΣ_ii,s≤ C_Σ<∞,
which are the same as Assumption A2 in <cit.> and Assumptions (A.ii)
and (A.iii) in <cit.>. It may be possible to relax the uniform boundedness restriction (when T is allowed to diverge) at the cost of more lengthy proofs <cit.>. Assumption <ref>(ii) gives the smoothness condition on the spot covariance process, crucial to derive the uniform
asymptotic order for the kernel estimation bias. When the spot covariance
is driven by continuous semimartingales, (<ref>) holds with γ<1/2 <cit.>. Assumption <ref>(i)
contains some commonly-used conditions for the kernel function. Assumption <ref>(ii)(iii) imposes some mild conditions on the bandwidth and time-varying shrinkage parameter. In particular, when p
diverges at a polynomial rate of 1/Δ, Assumption <ref>(ii) reduces to the
conventional bandwidth restriction. Assumption <ref>(iii) is comparable to that
assumed by <cit.> and <cit.>. It is worthwhile to point out that the developed methodology and theory still hold when the time-varying tuning parameter in Assumption <ref>(iii) is allowed to vary over entries in the spot volatility matrix estimation, which is expected to perform well in finite samples. For example, we set ρ_ij(t)=ρ(t)(Σ_ii,tΣ_jj,t)^1/2 in the numerical studies and shrink the (i,j)-entry to zero if |Σ_ij,t|≤ρ(t)(Σ_ii,tΣ_jj,t)^1/2.
The following theorem gives the uniform convergence property (in the matrix spectral norm) for the
spot volatility matrix estimator Σ_t under the uniform sparsity assumption.
Suppose that Assumptions <ref> and <ref> are
satisfied, and {Σ_t: 0≤ t≤ T}∈𝒮(q,ϖ(p), T). Then we have
sup_0≤ t≤ T‖Σ_t-Σ_t‖=O_P(ϖ(p)
ζ_Δ,p^1-q),
where ϖ(p) is defined in (<ref>) and ζ_Δ,p is
defined in Assumption <ref>(iii).
(i) The first term of ζ_Δ,p is h^γ, which is the bias rate due to application of the local
smoothing technique. It is slower than the conventional h^2-rate since we do not assume existence of smooth derivatives of Σ_ij,t (with respect to t). The second
term of ζ_Δ,p is the square root of Δ h^-1log (p∨Δ^-1), a typical uniform asymptotic rate for the kernel estimation
variance component. The uniform convergence rate in (<ref>) is
also similar to those obtained by <cit.> and <cit.> in the
low-frequency data setting (disregarding the bias order). Note that the dimension p affects the uniform convergence rate via ϖ(p) and log (p∨Δ^-1) and the
estimation consistency may be achieved in the ultra-high dimensional setting
when p diverges at an exponential rate of n=⌊ T/Δ⌋. Treating (nh) as
the “effective" sample size in the local estimation procedure and disregarding the bias rate h^γ, the rate in (<ref>) is comparable to the optimal minimax rate in large covariance
matrix estimation <cit.>.
(ii) If we further assume that Σ_ij,t is deterministic with continuous second-order derivative with respect to t, and K(·) is symmetric, we may improve the kernel estimation bias order. In fact, following the proof of Theorem <ref>, we may show that
sup_h≤ t≤ T-h‖Σ_t-Σ_t‖=O_P(ϖ(p)
ζ_Δ,p,⋆^1-q),
where ζ_Δ,p,⋆=h^2+[Δlog (p∨Δ^-1)/h]^1/2. The above uniform consistency property only holds over the trimmed time interval [h, T-h] due to the kernel boundary effect. In practice, however, it is often important to investigate the spot volatility structure near the boundary points. For example, when we consider one trading day as a
time interval, it is particularly interesting to estimate the spot
volatility matrix near the opening and closing times which are peak times in
stock market trading. To address this issue, we may replace K_h^∗(t_k-t) in (<ref>) by a boundary kernel weight defined by
K_h,t^⋆(t_k-t)=K_t(t_k-t/h)/[Δ∑_l=1^nK_t(t_l-t/h)],
where K_t(·) is a boundary kernel satisfying ∫_-t/h^(T-t)/huK_t(u)du=0 (a key condition to improve the bias order near the boundary points). Examples of boundary kernels can be found in <cit.> and <cit.>. With this adjustment in the kernel estimation, we can extend the uniform consistency result (<ref>) to the entire interval [0,T].
§ ESTIMATION WITH CONTAMINATED HIGH-FREQUENCY DATA
In practice, it is not uncommon that high-frequency financial data are
contaminated by the market microstructure noise. The kernel estimation
method proposed in Section <ref> would be biased if the noise is
ignored in the estimation procedure. Consider the
following additive noise structure:
𝐙_t_k=𝐗_t_k+ξ_k=𝐗
_t_k+ω(t_k)ξ_k^∗,
where t_k=kΔ, k=1,⋯,n, 𝐙_t=(Z_1,t,
⋯,Z_p,t)^^⊺ is a vector of observed asset prices at
time t, and ξ_k=(ξ_1,k,⋯,ξ_p,k)^^⊺ is a p-dimensional vector of noises with nonlinear heteroskedasticity, ω(·)=[ω_ij(·)]_p× p is
a p× p matrix of deterministic functions, and ξ
_k^∗=(ξ_1,k^∗,⋯,ξ_p,k^∗)^^⊺
independently follows a p-variate identical distribution. The noise
structure defined in (<ref>) is similar to the setting considered in
<cit.> which also contains a nonlinear mean function and allows the
existence of endogeneity for a single asset. Throughout this section, we
assume that {ξ_k^∗} is independent of the Brownian
semimartingale {𝐗_t}.
§.§ Estimation of the spot volatility matrix
To account for the microstructure noise and produce consistent volatility
matrix estimation, we apply the pre-averaging technique, as the realised kernel estimate <cit.> can be seen as a member of the pre-averaging estimation class, whereas the two-scale estimate
<cit.> can be re-written as the realised kernel estimate with the
Bartlett-type kernel (up to the first-order approximation). The
pre-averaging method has been studied by <cit.>, <cit.> and
<cit.> in estimating the integrated volatility for a single asset and
is further extended by <cit.> and <cit.> to the large
high-frequency data setting. <cit.> use a
localised pre-averaging technique to estimate the spot volatility function
for a single asset and derive the uniform convergence rate for the developed
estimate. A similar technique is also used by <cit.> to improve
convergence of the nonparametric spectral density estimator for time series
with general autocorrelation for low-frequency data.
We first pre-average the observed high-frequency data via a kernel filter,
i.e.,
𝐗_τ=T/n∑_k=1^n L_b^†(t_k-τ)𝐙
_t_k
with L_b^†(t_k-τ)=L_b(t_k-τ)/∫_0^TL_b(s-τ)ds, where L_b(u)=b^-1L(u/b), L(·) is a kernel function and b is a bandwidth. Let ΔX_i,l=X_i,τ_l-X_i,τ_l-1,
where X_i,τ_l is the i-th component of 𝐗_τ_l and τ_0,τ_1,⋯,τ_N are the
pseudo-sampling time points in the fixed interval [0,T] with equal
distance Δ_∗=T/N. Replacing Δ X_i,k by ΔX
_i,l in (<ref>), we estimate the spot
co-volatility Σ_ij,t by
Σ_ij,t=∑_l=1^N K_h^†(τ_l-t)ΔX
_i,lΔX_j,l,
where
K_h^†(τ_l-t)=K_h(τ_l-t)/[Δ_∗∑_k=1^NK_h(τ_k-t)].
Furthermore, to obtain a stable spot volatility matrix estimate in
finite samples when the dimension p is large, as in (<ref>), we
apply shrinkage to Σ_ij,t, 1≤ i≠ j≤ p, and
subsequently construct
Σ_t=(Σ
_ij,t^s)_p× p, Σ_ij,t^s=s_ρ_2(t)(Σ_ij,t)I(i≠ j)+Σ
_ii,tI(i=j),
where ρ_2(t) is another time-varying shrinkage parameter. We next give
some conditions needed to derive the uniform consistency property of Σ_t.
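Before stating those conditions, here is how the first (pre-averaging) stage might look in code, continuing the illustrative sketch of Section <ref> (spot_vol_matrix); the Riemann-sum approximation of the normalising integral and the kernel choice are again our simplifying assumptions:

```python
def preaveraged_prices(Z, times, tau, b):
    """Kernel filter X_bar(tau) = (T/n) * sum_k L_b^+(t_k - tau) Z_{t_k}.
    Approximating the normalising integral of L_b by a Riemann sum over
    the t_k reduces the weights to a simple normalised kernel average."""
    X_bar = np.empty((len(tau), Z.shape[1]))
    for l, s in enumerate(tau):
        u = (times - s) / b
        L = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
        X_bar[l] = (L / L.sum()) @ Z
    return X_bar

# Then proceed exactly as in the noise-free case, on the pseudo grid:
# X_bar = preaveraged_prices(Z, times, tau, b)
# Sigma_hat_t = spot_vol_matrix(np.diff(X_bar, axis=0), tau[1:], t, h, rho)
```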
(i) Let {ξ_k^∗}
be an independent and identically distributed (i.i.d.) sequence of
p-dimensional random vectors. Assume that 𝖤(ξ_i,k^∗)=0 and
𝖤[exp(s|𝐮^^⊺ξ
_k^∗|)]≤ C_ξ<∞, 0<s≤ s_0,
for any p-dimensional vector 𝐮 satisfying ‖𝐮‖_2=1.
(ii) The deterministic functions ω_ij(·) are bounded
uniformly over i,j∈{1,⋯,p}, and satisfy that
max_1≤ i≤ psup_0≤ t≤ T∑_j=1^pω_ij^2(t)≤
C_ω<∞.
(i) The kernel function L(·) is
Lipschitz continuous and has a compact support [-1,1]. In addition, ∫_-1^1 L(u)du=1.
(ii) The bandwidth b and the dimension p satisfy that
b→0, Δ^2ι-1b/log (p∨Δ^-1)→∞, pΔexp{-sΔ^-ι}→0,
where 0<ι<1/2 and 0<s≤ s_0.
(iii) Let ν_Δ,p,N=√(Nlog(p∨Δ^-1))[b^1/2+(Δ^-1b)^-1/2]→0 and the time-varying
tuning parameter ρ_2(t) be chosen as ρ_2(t)=M(t)(ζ_N,p^
∗+ν_Δ,p,N), where M(t) is defined as in Assumption <ref>(iii) and ζ_N,p^∗ is defined as ζ_Δ,p with N
replacing Δ^-1.
We allow nonlinear heteroskedasticity on the
microstructure noise. The i.i.d. restriction on ξ_k^∗
may be weakened to some weak dependence conditions <cit.> at the cost of more lengthy proofs. The moment condition in
Assumption <ref>(i) is weaker than the sub-Gaussian condition
<cit.> which is commonly used in large covariance
matrix estimation when the dimension p is ultra large. The boundedness
condition on ω_ij(·) in Assumption <ref>(ii) is similar to the
local boundedness restriction in Assumption <ref>(i). Assumption <ref>(ii) imposes
some mild restrictions on b and p, which imply that there is a
trade-off between them. When ι is larger, p diverges at a faster
exponential rate of 1/Δ but the bandwidth condition becomes more
restrictive. If p is divergent at a polynomial rate of 1/Δ, we may
let ι be sufficiently close to zero, and then the bandwidth condition
reduces to the conventional one as in Assumption <ref>(ii). The condition ν_Δ,p,N→0 in Assumption <ref>(iii) is crucial to show that
the error of the kernel filter X_τ tends to
zero asymptotically, whereas the form of the time-varying shrinkage
parameter ρ_2(t) is relevant to the uniform convergence rate of Σ_ij,t (see Proposition <ref>).
Suppose that Assumptions <ref>(i)(ii), <ref>(i),
<ref> and <ref> are satisfied, and Assumption <ref>(ii) holds with
Δ^-1 replaced by N. When {Σ_t: 0≤
t≤ T}∈𝒮(q,ϖ(p), T), we have
sup_0≤ t≤ T‖Σ_t-Σ_t‖=O_P(ϖ(p) [
ζ_N,p^∗+ν_Δ,p,N]^1-q),
where ζ_N,p^∗ and ν_Δ,p,N are defined in Assumption <ref>(iii).
The uniform convergence rate in (<ref>)
relies on ϖ(p), ζ_N,p^∗ and ν_Δ,p,N. With the
high-frequency data collected at pseudo time points with sampling frequency Δ_∗=T/N, the rate ζ_N,p^∗ is comparable to ζ_Δ,p for the noise-free kernel estimator in Section <ref>.
The rate ν_Δ,p,N is due to the error of the kernel filter X_τ in the first step of the local
pre-averaging estimation procedure. In particular, when q=0, ϖ(p)
is bounded, b=Δ^1/4 and h=N^-1/(2γ+1) with N=Δ^-(2γ+1)/(2(4γ+1)), the uniform convergence rate in (<ref>) becomes Δ^γ/(2(4γ+1))√(log(p∨Δ^-1)). Furthermore, if γ=1/2, the rate is simplified to Δ^1/12√(log(p∨Δ^-1)), comparable to those derived by <cit.> and <cit.>
in the univariate high-frequency data setting.
§.§ Estimation of the time-varying noise volatility matrix
It is often interesting to further explore the volatility
structure of microstructure noise. <cit.> estimate
the constant covariance matrix for high-dimensional noise and derive the
optimal convergence rates for the developed estimate. In the present paper,
we consider the time-varying noise covariance matrix defined by
Ω(t)=ω(t)ω
^^⊺(t)=[Ω_ij(t)]_p× p, 0≤ t≤ T.
It is sensible to assume that {Ω(t): 0≤ t≤
T} satisfies the uniform sparsity condition as in (<ref>). For
each 1≤ i,j≤ p, we estimate Ω_ij(t) by the kernel smoothing
method:
Ω_ij(t)=Δ/2∑_k=1^nK_h_1^∗(t_k-t)
Δ Z_i,t_kΔ Z_j,t_k,
where h_1 is a bandwidth, Δ Z_i,t_k=Z_i,t_k-Z_i,t_k-1 and K_h_1^∗(t_k-t) is defined similarly to K_h^∗(t_k-t) in (<ref>) but with h_1 replacing h.
As in (<ref>) and (<ref>), we again apply shrinkage to Ω_ij(t), 1≤ i≠ j≤ p, and construct
Ω(t)=[Ω_ij^s(t)]
_p× p, Ω_ij^s(t)=s_ρ_3(t)(Ω_ij(t))I(i≠ j)+Ω_ii(t)I(i=j),
where ρ_3(t) is a time-varying shrinkage parameter. To derive the uniform consistency property of Ω(t), we need to impose a stronger moment condition on ξ_k^∗ and a smoothness restriction on Ω_ij(·).
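Before doing so, we note that the estimator (<ref>) is just a (Δ/2)-scaled kernel average of raw-increment outer products, so the earlier sketch can be reused; since the relative threshold ρ(Ω_iiΩ_jj)^1/2 is invariant to a common rescaling, shrinking before or after the Δ/2 factor is equivalent (a sketch under the same illustrative assumptions as before):

```python
def noise_vol_matrix(Z, times, t, h1, rho=None):
    """Time-varying noise covariance Omega(t): (Delta/2) times the
    kernel-weighted average of dZ_k dZ_k' over the observation grid."""
    dZ = np.diff(Z, axis=0)          # raw increments of the observed prices
    delta = times[1] - times[0]
    return 0.5 * delta * spot_vol_matrix(dZ, times[1:], t, h1, rho=rho)
```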
(i) For any p-dimensional vector 𝐮 satisfying ‖𝐮‖_2=1,
𝖤[exp(s(𝐮^^⊺ξ
_k^∗)^2)]≤ C_ξ^⋆<∞, 0<s≤ s_0.
(ii) The time-varying function Ω_ij(t) satisfies that
max_1≤ i,j≤ p|Ω_ij(t)-Ω_ij(s)|≤
C_Ω|t-s|^γ_1,
where C_Ω is a positive constant and 0<γ_1<1.
(iii) The bandwidth h_1 and the dimension p satisfy that
h_1→0, Δ^2ι_⋆-1h_1/log (p∨Δ^-1)→∞, pΔ^-1exp{-sΔ^-ι_⋆/C_ω}→0,
where 0<ι_⋆<1/2, 0<s≤ s_0 and C_ω is defined in
Assumption <ref>(ii).
Assumption <ref>(i) strengthens the moment condition in Assumption <ref>(i) and is equivalent to the sub-Gaussian condition, see Assumption A1 in <cit.>. The smoothness condition in
Assumption <ref>(ii) is similar to (<ref>), crucial to derive the
asymptotic order of the kernel estimation bias. The restrictions on h_1
and p in Assumption <ref>(iii) are similar to those in Assumption <ref>(ii),
allowing p to be divergent to infinity at an exponential
rate of 1/Δ.
In the following theorem, we state the uniform consistency result for Ω(t) with convergence rate comparable to that
in Theorem <ref>.
Suppose that Assumptions <ref>, <ref>(i), <ref>
and <ref> are satisfied, and Assumption <ref>(ii)(iii) holds when ρ_1(t), ζ_Δ,p and h are replaced by ρ_3(t), δ_Δ,p and h_1, respectively, where δ_Δ,p=h_1^γ_1+[Δlog (p∨Δ^-1)/h_1]^1/2. If
{Ω(t): 0≤ t≤ T}∈𝒮
(q,ϖ(p), T), we have
sup_0≤ t≤ T‖Ω(t)-Ω(t)‖=O_P(ϖ(p)δ_Δ,p^1-q).
If the bandwidth parameter h_1 in (<ref>) is the same as h in (<ref>), we may find that the uniform convergence rate O_P(ϖ(p)δ_Δ,p^1-q) would be
the same as that in Theorem 1. Treating (nh_1) as the “effective" sample
size and disregarding the bias order, we may show that the uniform
convergence rate in (<ref>) is comparable to the optimal minimax rate
derived by <cit.> for the constant noise covariance matrix
estimation. Meanwhile, the kernel estimation bias order h_1^γ_1 may be improved by strengthening the smoothness condition on Ω_ij(·) and adopting the boundary kernel weight as suggested in Remark <ref>(ii).
§ ESTIMATION WITH OBSERVED FACTORS
The large spot volatility matrix estimation with the shrinkage technique developed in Sections <ref> and <ref> heavily relies on the uniform sparsity assumption (<ref>). However, the latter may be too restrictive in practice since the price processes of a large number of assets are often driven by some common factors such as the market factors, resulting in strong correlation among assets and failure of the sparsity condition. To address this problem, we next consider the nonparametric time-varying regression at high frequency:
d𝐘_t=β(t) d 𝐅_t+d𝐗_t,
where β(t)=[β_1(t),⋯,β_p(t)]^^⊺ is a p× k matrix of time-varying betas (or factor loadings), 𝐅_t and 𝐗_t are k-variate and p-variate continuous semi-martingales defined by
d𝐅_t=μ_t^Fdt+σ_t^Fd𝐖_t^F and d𝐗_t=μ_t^Xdt+σ_t^Xd𝐖_t^X,
respectively, μ_t^F and μ_t^X are drift
vectors, σ_t^F=(σ_ij,t^F)_k× k, σ_t^X=(σ_ij,t^X)_p× p, 𝐖_t^F and 𝐖_t^X are
k-dimensional and p-dimensional standard Brownian motions. For the time being, we assume that 𝐘_t and 𝐅_t are observable and noise free but 𝐗_t is latent. Extension of the methodology and theory to the noise-contaminated high-frequency data will be considered later in this section.
Estimation of the constant betas via the ratio of realised covariance to realised variance is proposed by <cit.>, and extension to time-varying beta estimation has been studied by <cit.>, <cit.> and <cit.>, some of which allow jumps in the semi-martingale processes. The main interest of this section lies in estimating the large spot volatility structure Σ_t^Y of 𝐘_t. Letting Σ_t^F=σ_t^F(σ_t^F)^^⊺ and Σ_t^X=σ_t^X(σ_t^X)^^⊺, and assuming orthogonality between 𝐗_t and 𝐅_t, see Assumption <ref>(iii) below, it follows from (<ref>) that
Σ_t^Y=β(t)Σ_t^Fβ(t)^^⊺+Σ_t^X.
As in <cit.>, we impose the uniform sparsity restriction on Σ_t^X instead of Σ_t^Y, i.e., {Σ_t^X: 0≤ t≤ T}∈𝒮(q,ϖ(p), T). This is a reasonable assumption in practical applications as the asset prices, after removing the influence of systematic factors, are expected to be weakly correlated. <cit.> and <cit.> use a similar framework with constant betas to estimate large integrated volatility matrices.
Suppose that we observe 𝐘_t and 𝐅_t at regular points: t_k=kΔ, k=1,⋯,n, as in Sections <ref> and <ref>. Let Σ_t^YF be the spot covariance between 𝐘_t and 𝐅_t. We may use the kernel smoothing method as in (<ref>) to estimate Σ_t^Y, Σ_t^F and Σ_t^YF, i.e.,
Σ_t^Y=∑_k=1^nK_h^∗(t_k-t)Δ𝐘_kΔ𝐘_k^^⊺,
Σ_t^F=∑_k=1^nK_h^∗(t_k-t)Δ𝐅_kΔ𝐅_k^^⊺,
Σ_t^YF=∑_k=1^nK_h^∗(t_k-t)Δ𝐘_kΔ𝐅_k^^⊺,
where Δ𝐘_k=𝐘_t_k-𝐘_t_k-1, Δ𝐅_k=𝐅_t_k-𝐅_t_k-1, and K_h^∗(t_k-t) is defined as in (<ref>). Consequently, the time-varying betas β(t) and the spot idiosyncratic volatility matrix Σ_t^X are estimated by
β(t)=[β_1(t),⋯,β_p(t)]^^⊺=Σ_t^YF(Σ_t^F)^-1,
and
Σ_t^X=(Σ_ij,t^X)_p× p=Σ_t^Y-Σ_t^YF(Σ_t^F)^-1(Σ_t^YF)^^⊺.
With the uniform sparsity condition, it is sensible to further apply shrinkage to Σ_ij,t^X, i.e.,
Σ_t^X,s=(Σ_ij,t^X,s)_p
× p with Σ_ij,t^X,s=s_ρ_4(t)(Σ_ij,t^X)I(i≠
j)+Σ_ii,t^X I(i=j),
where ρ_4(t) is a time-varying shrinkage parameter. We finally estimate Σ_t^Y as
Σ_t^Y, s=β(t)Σ_t^Fβ(t)^^⊺+Σ_t^X,s=Σ_t^YF(Σ_t^F)^-1(Σ_t^YF)^^⊺+Σ_t^X,s.
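A minimal sketch of this "low-rank plus sparse" construction, reusing kernel_weights and shrink from the earlier sketch (the increment matrices dY and dF and the single relative threshold rho are illustrative assumptions):

```python
def factor_spot_vol(dY, dF, times, t, h, rho):
    """Sigma_t^{Y,s} = beta(t) Sigma_t^F beta(t)' + shrunk idiosyncratic part."""
    w = kernel_weights(times, t, h)
    S_Y  = (dY * w[:, None]).T @ dY         # spot volatility of Y
    S_F  = (dF * w[:, None]).T @ dF         # spot volatility of the factors
    S_YF = (dY * w[:, None]).T @ dF         # spot covariance between Y and F
    beta = S_YF @ np.linalg.inv(S_F)        # time-varying betas
    S_X  = S_Y - beta @ S_YF.T              # spot idiosyncratic matrix
    return beta @ S_F @ beta.T + shrink(S_X, rho)
```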
We need the following assumption to derive the uniform convergence property for Σ_t^X,s and Σ_t^Y,s.
(i) Assumption <ref> is satisfied for {X_t} defined in (<ref>) (with minor notational changes).
(ii) Let {μ_t^F}, {σ_t^F} and {Σ_t^F} satisfy the boundedness and smoothing conditions as in Assumption <ref>.
(iii) For any 1≤ i≤ p and 1≤ j≤ k, [X_it, F_jt]=0 for any t∈[0,T], where X_i,t is the i-th element of 𝐗_t, F_j,t is the j-th element of 𝐅_t, and [·,·] denotes the quadratic covariation.
(iv) The time-varying beta function β_i(·) satisfies that
max_1≤ i≤ psup_0≤ t≤ T‖β_i(t)‖_2≤ C_β<∞, max_1≤ i≤ p‖β_i(t)-β_i(s)‖_2≤ C_β|t-s|^γ,
where γ is the same as that in Assumption <ref>(ii). In addition, there exists a positive definite matrix Σ_β(t) (with uniformly bounded eigenvalues) such that
sup_0≤ t≤ T‖1/pβ(t)^^⊺β(t)-Σ_β(t)‖=o(1).
The uniform boundedness and smoothness conditions imposed on the drift and spot volatility functions of 𝐗_t and 𝐅_t in Assumption <ref>(i)(ii) are the same as those in Assumption <ref>. This is crucial to ensure that the uniform convergence rates of Σ_t^Y, Σ_t^F and Σ_t^YF (in the max norm) derived in Proposition <ref> are the same as that in Proposition <ref>. The orthogonality condition in Assumption <ref>(iii) is commonly used to consistently estimate the time-varying factor model <cit.>. Assumption <ref>(iv) is a rather mild restriction on time-varying betas and may be strengthened to improve the estimation bias order, see the discussion in Remark <ref>(ii). The condition (<ref>) indicates that all the factors are pervasive.
We next present the convergence property of Σ_t^X,s and Σ_t^Y,s defined in (<ref>) and (<ref>), respectively. Due to the nonparametric factor regression model structure (<ref>), the largest k eigenvalues of Σ_t^Y are spiked, diverging at a rate of p. Hence, Σ_t^Y cannot be consistently estimated in the absolute term. To address this problem, as in <cit.>, we measure the spiked volatility matrix estimate in the following relative error:
‖Σ_t^Y,s-Σ_t^Y‖_Σ_t^Y=1/√(p)‖(Σ_t^Y)^-1/2(Σ_t^Y,s-Σ_t^Y) (Σ_t^Y)^-1/2‖_F,
where the normalisation factor p^-1/2 is used to guarantee that ‖Σ_t^Y‖_Σ_t^Y=1.
Suppose that Assumptions <ref>(i)(ii) and <ref> are satisfied, and Assumption <ref>(iii) holds with ρ_1(t) replaced by ρ_4(t). When {Σ_t^X: 0≤ t≤ T}∈𝒮(q,ϖ(p), T), we have
sup_0≤ t≤ T‖Σ_t^X,s-Σ_t^X‖=O_P(ϖ(p)ζ_Δ,p^1-q),
where ϖ(p) is defined in (<ref>) and ζ_Δ,p is defined in Assumption <ref>(iii); and
sup_0≤ t≤ T‖Σ_t^Y,s-Σ_t^Y‖_Σ_t^Y=O_P(p^1/2ζ_Δ,p^2+ϖ(p)ζ_Δ,p^1-q).
Although 𝐗_t is latent in model (<ref>), the uniform convergence rate for Σ_t^X,s in (<ref>) is the same as that in Theorem <ref> when 𝐗_t is observable. Treating (nh) as the effective sample size in kernel estimation and disregarding the bias order in ζ_Δ,p, the uniform convergence rate for Σ_t^Y,s in (<ref>) is comparable to the convergence rates derived by <cit.> in low frequency and <cit.> in high frequency. To guarantee uniform consistency in the relative matrix estimation error, we have to further assume that pζ_Δ,p^4=o(1), limiting the divergence rate of the asset number, i.e., p can only diverge at a polynomial rate of n=⌊ T/Δ⌋.
We next modify the above methodology and theory to accommodate microstructure noise in the asset prices and factors. Assume that
𝐙_Y,t_k=𝐘_t_k+ω_Y(t_k)ξ_Y,k^∗, 𝐙_F,t_k=𝐅_t_k+ω_F(t_k)ξ_F,k^∗,
where ω_Y(·) and ω_F(·) are matrices of deterministic functions similar to ω(·), and {ξ_Y,k^∗} and {ξ_F,k^∗} are i.i.d. sequences of random vectors similar to {ξ_k^∗}. Since both 𝐘_t and 𝐅_t are latent, we need to first adopt the kernel pre-averaging technique proposed in Section <ref> to obtain the approximation of 𝐘_t and 𝐅_t, and then apply the kernel smoothing and generalised shrinkage as in (<ref>)–(<ref>). This results in a three-stage estimation procedure which we describe as follows.
* As in (<ref>), we pre-average the noise-contaminated 𝐙_Y,t_k and 𝐙_F,t_k via the kernel filter:
𝐘_τ=T/n∑_k=1^n L_b^†(t_k-τ)𝐙_Y,t_k, 𝐅_τ=T/n∑_k=1^n L_b^†(t_k-τ)𝐙_F,t_k,
where L_b^†(t_k-τ) is defined as in (<ref>) and we consider τ as the pseudo-sampling time points: τ_l=lΔ_∗, l=0,1,⋯, N=⌊ T/Δ_∗⌋.
* With 𝐘_τ_l and 𝐅_τ_l, l=1,⋯,N, we estimate Σ_t^Y, Σ_t^F and Σ_t^YF by the kernel smoothing as in (<ref>)–(<ref>):
Σ_t^Y=∑_l=1^N K_h^†(τ_l-t)Δ𝐘_lΔ𝐘_l^^⊺,
Σ_t^F=∑_l=1^N K_h^†(τ_l-t)Δ𝐅_lΔ𝐅_l^^⊺,
Σ_t^YF=∑_l=1^N K_h^†(τ_l-t)Δ𝐘_lΔ𝐅_l^^⊺,
where K_h^†(τ_l-t) is defined as in (<ref>), Δ𝐘_l=𝐘_τ_l-𝐘_τ_l-1 and Δ𝐅_l=𝐅_τ_l-𝐅_τ_l-1. Furthermore, estimate β(t) and Σ_t^X by
β(t)=Σ_t^YF(Σ_t^F)^-1, Σ_t^X=(Σ_ij,t^X)_p× p=Σ_t^Y-Σ_t^YF(Σ_t^F)^-1(Σ_t^YF)^^⊺.
* Apply the generalised shrinkage to Σ_ij,t^X, i.e.,
Σ_t^X,s=(Σ_ij,t^X,s)_p
× p with Σ_ij,t^X,s=s_ρ_5(t)(Σ_ij,t^X)I(i≠
j)+Σ_ii,t^X I(i=j),
where ρ_5(t) is the shrinkage parameter, and then estimate Σ_t^Y by
Σ_t^Y, s=β(t)Σ_t^Fβ(t)^^⊺+Σ_t^X,s=Σ_t^YF(Σ_t^F)^-1(Σ_t^YF)^^⊺+Σ_t^X,s.
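A minimal end-to-end sketch of the three stages above, reusing the helpers from the earlier sketches:

```python
def noisy_factor_spot_vol(Z_Y, Z_F, times, tau, b, t, h, rho):
    """Stage 1: pre-average prices and factors onto the pseudo grid tau;
    Stages 2-3: kernel smoothing and shrinkage on the pseudo increments."""
    Y_bar = preaveraged_prices(Z_Y, times, tau, b)
    F_bar = preaveraged_prices(Z_F, times, tau, b)
    dY = np.diff(Y_bar, axis=0)
    dF = np.diff(F_bar, axis=0)
    return factor_spot_vol(dY, dF, tau[1:], t, h, rho)
```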
As shown in Theorem <ref>, the existence of microstructure noise slows down the uniform convergence rates. Following the proof of Lemma B.1 in Appendix B, we may show that
max_0≤ l≤ N|𝐘_τ_l-𝐘_τ_l|_max+max_0≤ l≤ N|𝐅_τ_l-𝐅_τ_l|_max=O_P(ν_Δ,p,N),
where |·|_max denotes the L_∞-norm of a vector, and ν_Δ,p,N is defined in Assumption <ref>(iii). Modifying Proposition <ref> and the proof of Theorem <ref> in Appendix A, we can prove that (<ref>) and (<ref>) hold but with ζ_Δ,p replaced by ζ_N,p^∗+ν_Δ,p,N defined in Assumption <ref>(iii), i.e.,
sup_0≤ t≤ T‖Σ_t^X,s-Σ_t^X‖=O_P(ϖ(p)(ζ_N,p^∗+ν_Δ,p,N)^1-q),
sup_0≤ t≤ T‖Σ_t^Y,s-Σ_t^Y‖_Σ_t^Y=O_P(p^1/2(ζ_N,p^∗+ν_Δ,p,N)^2+ϖ(p)(ζ_N,p^∗+ν_Δ,p,N)^1-q).
§ MONTE-CARLO STUDY
In this section, we report Monte-Carlo simulation studies assessing the numerical performance of the proposed large spot volatility matrix and time-varying noise volatility matrix estimation methods under the sparsity condition, as well as the factor-based spot volatility matrix estimation. Here we only consider synchronous high-frequency data; additional simulation results for asynchronous high-frequency data are provided in the supplement.
§.§ Simulation for sparse volatility matrix estimation
5.1.1. Simulation setup
We generate the noise-contaminated high-frequency data according to model (<ref>), where ω(t) is taken as the Cholesky decomposition of the noise covariance matrix Ω(t)=[Ω_{ij}(t)]_{p× p}, and ξ_k^∗=(ξ_{1,k}^∗,⋯,ξ_{p,k}^∗)^⊺ is an independent p-dimensional random vector of cross-sectionally independent standard normal random variables. The latent return process 𝐗_t of p assets is generated from the following drift-free model:
d𝐗_t=σ_td𝐖_t^X, t∈[0,T],
𝐖_t^X=( W_1,t^X,⋯ ,W_p,t^X)^^⊺ is a standard p-dimensional Brownian motion, and σ_t is chosen as the Cholesky decomposition of the spot covariance matrix Σ_t=(Σ_ij,t) _p× p. In the simulation, we consider the volatility matrix estimation over the time interval of a full trading day, and set the sampling interval to be 15 seconds, i.e., Δ =1/(252× 6.5× 60× 4), to generate synchronous data. We consider three structures in Σ_t and Ω(t): “banding", “block-diagonal", and “exponentially decaying". Following <cit.>, we generate the diagonal elements of Σ_t from the following geometric Ornstein-Uhlenbeck model <cit.>:
dlogΣ _ii,t=-0.6(0.157+logΣ _ii,t)dt+0.25dW_i,t^Σ, W_i,t^Σ=ι_iW_i,t^X+√(1-ι_i^2)W_i,t^∗,
where 𝐖_t^∗=(W_1,t^∗,⋯,W_p,t^∗)^^⊺ is a standard p-dimensional Brownian motion independent of 𝐖_t^X, and ι_i is a random number generated uniformly between -0.62 and -0.30, reflecting the leverage effects. The diagonal elements of Ω(t) are defined as daily cyclical deterministic functions of time:
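As an illustration, the geometric Ornstein-Uhlenbeck variance paths can be simulated with a simple Euler scheme. The sketch below is indicative only: all names are our own, and it omits the leverage correlation between W_{i,t}^Σ and W_{i,t}^X for brevity.

```python
# A minimal Euler-scheme sketch of the geometric Ornstein-Uhlenbeck
# spot-variance dynamics
#   d log Sigma_ii,t = -0.6 (0.157 + log Sigma_ii,t) dt + 0.25 dW_i,t.
import numpy as np

def simulate_spot_variances(n_steps, dt, p, rng=np.random.default_rng(0)):
    log_sigma2 = np.full(p, -0.157)          # start at the long-run mean
    path = np.empty((n_steps + 1, p))
    path[0] = log_sigma2
    for k in range(1, n_steps + 1):
        dW = rng.standard_normal(p) * np.sqrt(dt)
        log_sigma2 = log_sigma2 - 0.6 * (0.157 + log_sigma2) * dt + 0.25 * dW
        path[k] = log_sigma2
    return np.exp(path)                      # Sigma_ii,t paths, one column per asset
```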
Ω_{ii}(t)=c_i{(1/2)[cos(2π t/T)+1]×(ω̄-ω̲)+ω̲},
where ω̄=1 and ω̲=0.1 reflect the observation by <cit.> that the noise level is high at both the opening and the closing times of a trading day and low in the middle of the day, and the scalar c_i controls the noise ratio for each asset, chosen to match the highest noise ratio considered by <cit.>. As in <cit.>, we define a continuous-time stochastic process κ_t^Σ by
κ_t^Σ = (e^{2κ_t}-1)/(e^{2κ_t}+1),
dκ_t = 0.03(0.64-κ_t)dt+0.118κ_t dW_t^κ,
W_t^κ = √(0.96) W_t^♢-0.2∑_{i=1}^p W_{i,t}^X/√(p),
where W_t^♢ is a standard univariate Brownian motion independent of 𝐖_t^X and 𝐖_t^∗. Let
κ_t^Ω = [(κ̄-κ̲)/2][cos(2π t/T)+1]+κ̲,
where κ̄=0.5 and κ̲=-0.5. We will use κ_t^Σ and κ_t^Ω to define the off-diagonal elements in Σ_t and Ω(t), respectively, which are specified as follows.
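A minimal Euler sketch of κ_t^Σ is given below; note that (e^{2κ_t}-1)/(e^{2κ_t}+1)=tanh(κ_t), so the process is bounded in (-1,1). For brevity (and as an assumption of the sketch, not the paper's design) we drive κ_t by an independent Brownian motion rather than the W_t^κ defined above.

```python
# An illustrative Euler discretisation of the bounded mixing process
# kappa_t^Sigma = tanh(kappa_t), with
# d kappa_t = 0.03 (0.64 - kappa_t) dt + 0.118 kappa_t dW_t.
import numpy as np

def simulate_kappa_sigma(n_steps, dt, rng=np.random.default_rng(2)):
    kappa = 0.64                                 # start at the long-run mean
    out = np.empty(n_steps + 1)
    out[0] = np.tanh(kappa)
    for k in range(1, n_steps + 1):
        dW = rng.standard_normal() * np.sqrt(dt)
        kappa += 0.03 * (0.64 - kappa) * dt + 0.118 * kappa * dW
        out[k] = np.tanh(kappa)                  # kappa_t^Sigma in (-1, 1)
    return out
```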
* Banding structure for Σ_t and Ω(t): The off-diagonal elements are defined by
Σ _ij,t =(κ^Σ_t)^| i-j|√(Σ_ii,tΣ_jj,t)· I( | i-j|≤ 2),
and
Ω_ij(t) =(κ^Ω_t) ^| i-j|√(Ω _ii( t) Ω _jj(t) )· I( | i-j|≤ 2),
for 1≤ i≠ j≤ p.
* Block-diagonal structure for Σ_t and Ω(t): The off-diagonal elements are defined by
Σ _ij,t =(κ^Σ_t)^| i-j|√(Σ_ii,tΣ_jj,t)· I( (i,j)∈ B),
Ω _ij(t)=(κ^Ω_t) ^| i-j|√(Ω _ii( t) Ω _jj(t) )· I((i,j)∈ B) ,
for 1≤ i≠ j≤ p, where B is a collection of row and column indices (i,j) located within our randomly generated diagonal blocks [As in <cit.>, to generate blocks with random sizes, we fix the largest block size at 20 when p=200 and randomly generate the sizes of the remaining blocks from a random integer uniformly picked between 5 and 20. When p=500, the largest size is 40, and the random integer is uniformly picked between 10 and 40. Block sizes are randomly generated but fixed across all Monte Carlo repetitions.].
* Exponentially decaying structure for Σ_t and Ω(t): The off-diagonal elements are defined by
Σ _ij,t =(κ^Σ_t)^| i-j|√(Σ_ii,tΣ_jj,t), Ω_ij(t) =(κ^Ω_t) ^| i-j|√(Ω _ii( t) Ω _jj(t) ), 1≤ i≠ j≤ p.
It is clear that the sparsity condition is not satisfied when the off-diagonal elements of Σ_t and Ω(t) are exponentially decaying as in (<ref>). The number of assets is set to p=200 or 500, and the number of replications is R=200.
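A schematic generator for the three off-diagonal structures, given the diagonal variances and the mixing coefficient at a fixed time point, might look as follows; all names are illustrative, and `blocks` (a list of block sizes summing to p) is only needed for the block-diagonal case.

```python
# A sketch of the "banding", "block-diagonal" and "exponentially decaying"
# structures for a spot covariance matrix, with off-diagonal entries
# kappa^{|i-j|} sqrt(Sigma_ii Sigma_jj) truncated according to the pattern.
import numpy as np

def build_sigma(diag, kappa, structure="banding", blocks=None, band=2):
    p = len(diag)
    i, j = np.indices((p, p))
    base = kappa ** np.abs(i - j) * np.sqrt(np.outer(diag, diag))
    if structure == "banding":
        mask = np.abs(i - j) <= band
    elif structure == "block":
        labels = np.repeat(np.arange(len(blocks)), blocks)  # block id per asset
        mask = labels[i] == labels[j]
    else:                                   # exponentially decaying: no truncation
        mask = np.ones((p, p), dtype=bool)
    S = np.where(mask, base, 0.0)
    np.fill_diagonal(S, diag)
    return S
```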
5.1.2. Volatility matrix estimation
In the simulation studies, we consider the following volatility matrix estimates.
* Noise-free spot volatility matrix estimate Σ̂_t. This infeasible estimate serves as a benchmark when comparing the numerical performance of the various estimation methods. As in Section <ref>, we apply the kernel smoothing method to estimate Σ_{ij,t} by directly using the latent return process 𝐗_t, where the bandwidth is determined by leave-one-out cross-validation. We apply four shrinkage methods to Σ̂_{ij,t} for i≠ j: hard thresholding (Hard), soft thresholding (Soft), the adaptive LASSO (AL) and the smoothly clipped absolute deviation (SCAD). For comparison, we also compute the naive estimate without applying any regularisation technique.
* Noise-contaminated spot volatility matrix estimate Σ̃_t. We combine the kernel smoothing with the pre-averaging in Section <ref> to estimate Σ_{ij,t} using the noise-contaminated process 𝐙_t. As in the noise-free estimation, we apply the four shrinkage methods to Σ̃_{ij,t} for i≠ j and also compute the naive estimate without shrinkage.
* Time-varying noise volatility matrix estimate Ω̂(t). We combine the kernel smoothing with the four shrinkage techniques as in Section <ref>, and also compute the naive estimate without shrinkage.
The choice of tuning parameter in shrinkage is similar to that in <cit.>. For example, in the noise-free spot volatility estimate, we set the tuning parameter as ρ_ij(t)=ρ(t)(Σ_ii,tΣ_jj,t)^1/2 where ρ(t) is chosen as the minimum value among the grid of values on [0,1] such that the shrinkage estimate of the spot volatility matrix is positive definite. To evaluate the estimation performance of Σ_t, we consider 21 equidistant time points on [0,T] and compute the following Mean Frobenius Loss (MFL) and Mean Spectral Loss (MSL) over 200 repetitions:
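The sketch below illustrates this grid search, using soft thresholding as the shrinkage function; the helper name `choose_rho` and the grid resolution are our own assumptions.

```python
# A sketch of the data-driven choice of rho(t): the smallest value on a
# grid over [0, 1] for which the shrunk spot volatility matrix is positive
# definite, with entry-wise threshold rho_ij(t) = rho(t) sqrt(S_ii S_jj).
import numpy as np

def choose_rho(S, grid=np.linspace(0.0, 1.0, 101)):
    d = np.sqrt(np.outer(np.diag(S), np.diag(S)))
    for rho in grid:
        S_shrunk = np.sign(S) * np.maximum(np.abs(S) - rho * d, 0.0)
        np.fill_diagonal(S_shrunk, np.diag(S))   # diagonal left unshrunk
        if np.linalg.eigvalsh(S_shrunk).min() > 0:
            return rho, S_shrunk
    return grid[-1], np.diag(np.diag(S))         # fall back to the diagonal
```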
MFL = 1/200∑_m=1^200( 1/21∑_j=1^21‖Σ_t_j^(m)-Σ_t_j^(m)‖ _F),
MSL = 1/200∑_m=1^200( 1/21∑_j=1^21‖Σ_t_j^(m)-Σ_t_j^(m)‖),
where t_j, j=1,2,⋯,21 are the equidistant time points on the interval [0,T], and Σ_t_j^(m) and Σ_t_j^(m) are respectively the estimated and true spot volatility matrices at t_j for the m-th repetition. The “MFL" and “MSL" can be similarly defined for Σ_t and Ω(t).
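In code, the two losses can be computed as in the following sketch, where `est` and `true` are nested lists holding the estimated and true matrices over the 200 replications and 21 grid points (names illustrative).

```python
# A sketch of the Monte-Carlo loss computations: MFL averages the Frobenius
# loss and MSL the spectral loss over 21 grid points and 200 replications.
import numpy as np

def mean_losses(est, true):
    fro, spec = [], []
    for S_hat_path, S_path in zip(est, true):    # loop over replications
        diff = [A - B for A, B in zip(S_hat_path, S_path)]
        fro.append(np.mean([np.linalg.norm(D, "fro") for D in diff]))
        spec.append(np.mean([np.linalg.norm(D, 2) for D in diff]))
    return np.mean(fro), np.mean(spec)           # (MFL, MSL)
```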
5.1.3. Simulation results
Table 1 reports the simulation results when the dimension is p=200. The three panels in the table (from top to bottom) report the results where the true volatility matrix structures are banding, block-diagonal, and exponentially decaying, respectively. In each panel, the MFL results are reported on the left, whereas the MSL results are on the right. The first two rows of each panel contain the MFL and MSL results for the spot volatility matrix estimation whereas the third row contains the results for the time-varying noise volatility matrix estimation.
For the noise-free estimate Σ̂_t, when the volatility matrix structure is banding, the four shrinkage estimators perform substantially better than the naive estimate (without any shrinkage). In particular, the results of the soft thresholding, adaptive LASSO and SCAD are very similar, and their MFL and MSL values are approximately one third of those of the naive estimator. Meanwhile, the hard thresholding estimator is less accurate (despite the much stronger level of shrinkage applied), but is still much better than the naive estimate. These results show that the shrinkage technique is an effective tool for estimating sparse volatility matrices. Similar results are obtained for the noise-contaminated estimate Σ̃_t. Unsurprisingly, due to the microstructure noise, the MFL and MSL values of the local pre-averaging estimates are noticeably higher than the corresponding values of the noise-free estimates. We next turn our attention to the time-varying noise volatility matrix estimate Ω̂(t). As in the spot volatility matrix estimation, the naive method again produces the highest MFL and MSL values. The four shrinkage estimators perform similarly, with the adaptive LASSO and SCAD being slightly better than the hard and soft thresholding. The simulation results for the block-diagonal and exponentially decaying covariance matrix settings, reported in the middle and bottom panels of Table 1, are fairly close to those for the banding setting. Overall, the results in Table 1 show that the shrinkage methods perform well not only in the sparse covariance matrix settings but also in the non-sparse one (i.e., the exponentially decaying setting).
Table 1: Estimation results for the spot volatility and time-varying noise covariance matrices when p=200

                               MFL                                          MSL
               Naive     Hard     Soft     AL       SCAD      Naive    Hard    Soft    AL      SCAD
"Banding"
  Σ̂_t         14.396    11.407   5.490    4.038    4.830     3.963    1.799   1.073   0.867   0.987
  Σ̃_t         18.497    12.899   12.196   12.064   12.177    4.796    2.347   2.260   2.255   2.262
  Ω̂(t)        11.714    4.226    4.740    3.237    3.960     3.281    0.682   1.039   0.571   0.753
"Block-diagonal"
  Σ̂_t         14.398    11.277   5.818    4.786    5.424     4.000    2.293   1.310   1.233   1.386
  Σ̃_t         18.475    12.811   12.192   12.059   12.158    4.915    2.777   2.663   2.669   2.662
  Ω̂(t)        11.713    4.076    4.875    3.240    3.964     3.274    0.741   1.098   0.606   0.816
"Exponentially decaying"
  Σ̂_t         14.402    12.033   6.091    5.287    5.976     4.078    2.456   1.410   1.348   1.510
  Σ̃_t         18.738    13.464   12.748   12.655   12.739    4.977    2.934   2.810   2.819   2.815
  Ω̂(t)        11.715    4.330    4.860    3.355    4.077     3.297    0.774   1.085   0.626   0.833

Note: The selected bandwidths are h^∗=90 for Σ̂_t, h^∗=90 and b^∗=4 for Σ̃_t, and h_1^∗=90 for Ω̂(t), where h^∗=h/Δ, b^∗=b/Δ, and h_1^∗=h_1/Δ.
The simulation results when the dimension is p=500 are reported in Table 2. Overall, the results are very similar to those in Table 1, so we omit the detailed discussion and comparison to save space.
Table 2: Estimation results for the spot volatility and time-varying noise covariance matrices when p=500

                               MFL                                          MSL
               Naive     Hard     Soft     AL       SCAD      Naive    Hard    Soft    AL      SCAD
"Banding"
  Σ̂_t         21.971    4.067    5.167    4.916    3.954     3.907    0.621   0.715   0.698   0.568
  Σ̃_t         28.479    19.193   18.617   17.930   18.466    4.767    2.339   2.281   2.228   2.281
  Ω̂(t)        18.269    4.045    4.826    5.532    4.547     3.307    0.461   0.540   0.675   0.519
"Block-diagonal"
  Σ̂_t         21.973    5.703    6.429    5.928    5.480     3.999    0.855   1.134   0.895   0.886
  Σ̃_t         28.682    19.685   19.155   18.539   19.029    4.917    2.854   2.782   2.736   2.798
  Ω̂(t)        18.271    4.208    4.935    5.686    4.684     3.312    0.522   0.603   0.751   0.572
"Exponentially decaying"
  Σ̂_t         21.973    6.069    6.697    6.120    5.739     4.035    0.894   1.173   0.927   0.921
  Σ̃_t         28.867    20.195   19.561   18.950   19.454    4.938    2.914   2.836   2.788   2.850
  Ω̂(t)        18.275    4.335    5.001    5.763    4.745     3.322    0.533   0.610   0.757   0.578

Note: The selected bandwidths are h^∗=240 for Σ̂_t, h^∗=240 and b^∗=4 for Σ̃_t, and h_1^∗=240 for Ω̂(t), where h^∗=h/Δ, b^∗=b/Δ, and h_1^∗=h_1/Δ.
§.§ Simulation for factor-based spot volatility matrix estimation
5.2.1. Simulation setup
We generate 𝐘_t via (<ref>), where the p-dimensional idiosyncratic returns follow the dynamics of d𝐗_t defined in (<ref>). In this simulation, we only consider p=500. As in <cit.>, we adopt a three-factor model, where the factors 𝐅_t=(F_1,t,F_2,t,F_3,t) ^^⊺ are generated by
(dF_{1,t}, dF_{2,t}, dF_{3,t})^⊺ = (μ_1^F, μ_2^F, μ_3^F)^⊺ dt + diag(σ_{1,t},σ_{2,t},σ_{3,t}) [1, ρ_{12}, ρ_{13}; ρ_{12}, 1, ρ_{23}; ρ_{13}, ρ_{23}, 1] (dW_{1,t}^F, dW_{2,t}^F, dW_{3,t}^F)^⊺,
where the 3×3 matrix is written row by row.
The factor volatilities are driven by
dσ_{k,t}^2=κ̃_k(α̃_k-σ_{k,t}^2)dt+ν̃_kσ_{k,t}dW_{k,t}, k=1,2,3,
where 𝖤[dW_{k,t}^F dW_{k,t}]=ρ_k dt, allowing for potential leverage effects in the factor dynamics. Both W_{k,t}^F and W_{k,t} are standard univariate Brownian motions. In the simulation, we set (κ̃_1,κ̃_2,κ̃_3)=(3,4,5), (α̃_1,α̃_2,α̃_3)=(0.09,0.04,0.06), (ν̃_1,ν̃_2,ν̃_3)=(0.3,0.4,0.3), (μ_1^F,μ_2^F,μ_3^F)=(0.05,0.03,0.02), (ρ_1,ρ_2,ρ_3)=(-0.6,-0.4,-0.25) and (ρ_{12},ρ_{13},ρ_{23})=(0.05,0.10,0.15).
We consider the following three cases for generating the time-varying beta processes: β_i(t)=[β_i,1(t), β_i,2(t), β_i,3(t)]^^⊺, i=1,⋯,p.
* Constant betas. The factor loadings are constant over time, i.e., β_{i,l}(t)=β_{i,l}, i=1,⋯,p and l=1,2,3. For each i, we set β_{i,1}∼ U(0.25,2.25) and β_{i,2},β_{i,3}∼ U(-0.5,0.5).
* Deterministic time-varying betas. Consider the following deterministic function:
β_{i,l}(t)=(1/2)[cos(π(t-ω_{i,l})/T)+1]×(β̄_{i,l}-β̲_{i,l})+β̲_{i,l}, i=1,⋯,p, l=1,2,3,
where ω_{i,1},ω_{i,2},ω_{i,3}∼ U(0,2T), (β̲_{i,1},β̄_{i,1}) is a pair of random numbers from U(0.25,2.25), whereas (β̲_{i,2},β̄_{i,2}) and (β̲_{i,3},β̄_{i,3}) are pairs of random numbers from U(-0.5,0.5).
* Stochastic time-varying betas. As in <cit.>, we consider the following diffusion process:
dβ_{i,l}(t)=κ_{i,l}^β(α_{i,l}^β-β_{i,l}(t))dt+υ_{i,l}^β dW_{i,l,t}^β, i=1,⋯,p, l=1,2,3,
where W_{i,l,t}^β are standard Brownian motions independent over i and l, κ_{i,1}^β,κ_{i,2}^β,κ_{i,3}^β∼ U(1,3), α_{i,1}^β∼ U(0.25,2.25), α_{i,2}^β,α_{i,3}^β∼ U(-0.5,0.5) and υ_{i,1}^β,υ_{i,2}^β,υ_{i,3}^β∼ U(2,4).
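For the stochastic case, a simple Euler discretisation such as the sketch below can be used; all names are illustrative, and the parameter draws shown correspond to the market loading (l=1) with the ranges above.

```python
# An Euler-discretisation sketch of the stochastic time-varying betas
#   d beta_i(t) = kappa_i (alpha_i - beta_i(t)) dt + upsilon_i dW_i(t).
import numpy as np

def simulate_betas(n_steps, dt, p, rng=np.random.default_rng(1)):
    kappa = rng.uniform(1.0, 3.0, p)       # mean-reversion speeds
    alpha = rng.uniform(0.25, 2.25, p)     # long-run levels (market loading)
    upsilon = rng.uniform(2.0, 4.0, p)     # vol-of-beta
    beta = alpha.copy()                    # start at the long-run level
    path = np.empty((n_steps + 1, p))
    path[0] = beta
    for k in range(1, n_steps + 1):
        dW = rng.standard_normal(p) * np.sqrt(dt)
        beta = beta + kappa * (alpha - beta) * dt + upsilon * dW
        path[k] = beta
    return path
```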
5.2.2. Simulation results
The spot idiosyncratic volatility matrix is estimated via (<ref>). For ease of comparison, we use exactly the same bandwidths as in our first experiment. The results for the noise-free and noise-contaminated spot idiosyncratic volatility matrix estimates Σ̂_t and Σ̃_t, measured by MFL and MSL, are reported in Table 3 and reveal two encouraging findings. First, the MFL and MSL results are almost identical across the different types of factor loading dynamics, indicating that the developed estimation procedure is robust in finite samples to the assumed factor loading dynamics as long as they satisfy our smoothness restriction; see Assumption <ref>(iv). Second, the MFL and MSL values are similar to those reported in Table 2, which were obtained from a data generating process without common factors. This means that the proposed nonparametric time-varying high-frequency regression can effectively remove the common factors, resulting in accurate estimation of the spot idiosyncratic volatility matrix.
The factor-based spot volatility matrix of 𝐘_t is estimated via (<ref>). As discussed in Section <ref>, we measure the accuracy of the spiked volatility matrix estimate by the relative error defined above Theorem <ref>, i.e., we consider the following Mean Relative Loss (MRL):
MRL = (1/200)∑_{m=1}^{200}( (1/21)∑_{j=1}^{21}‖Σ̂_{t_j}^{Y,(m)}-Σ_{t_j}^{Y,(m)}‖_{Σ_{t_j}^{Y,(m)}} ).
The relevant results are reported in Table 4, where Σ̂_t^Y and Σ̃_t^Y denote the noise-free and noise-contaminated factor-based spot volatility matrix estimates, respectively. We can see that the performance of the shrinkage estimates is substantially better than that of the naive estimate. Unsurprisingly, due to the presence of microstructure noise, the MRL results of Σ̃_t^Y are much higher than those of Σ̂_t^Y. As in Table 3, our proposed estimation is robust to the different factor loading dynamics.
Table 3: Estimation results for the spot idiosyncratic volatility matrices

                                      MFL (Frobenius norm)                                  MSL (spectral norm)
β dynamics            Naive     Hard      Soft      AL        SCAD       Naive     Hard     Soft     AL       SCAD
"Banding"
  Constant      Σ̂_t  21.9037   4.2461    5.2485    4.9880    3.9910     3.8887    0.6359   0.7291   0.7154   0.5720
                Σ̃_t  30.6646   19.3752   18.3036   17.7552   18.1388    11.1910   2.3576   2.2653   2.2160   2.2554
  Deterministic Σ̂_t  21.9127   4.1916    5.2503    4.9898    3.9842     3.8901    0.6313   0.7288   0.7144   0.5712
                Σ̃_t  30.5672   19.3633   18.2947   17.7267   18.1284    10.9662   2.3571   2.2636   2.2128   2.2536
  Stochastic    Σ̂_t  21.9099   4.2123    5.2498    4.9893    3.9872     3.8896    0.6331   0.7289   0.7149   0.5717
                Σ̃_t  30.7323   19.3896   18.3164   17.7839   18.1538    11.3262   2.3603   2.2708   2.2203   2.2602
"Block-diagonal"
  Constant      Σ̂_t  21.9047   5.6802    6.4718    5.9421    5.4710     3.9741    0.8722   1.1481   0.9106   0.9014
                Σ̃_t  30.7266   19.8195   18.8114   18.3162   18.6638    10.9751   2.8701   2.7656   2.7097   2.7551
  Deterministic Σ̂_t  21.9137   5.6821    6.4738    5.9436    5.4729     3.9754    0.8718   1.1479   0.9103   0.9012
                Σ̃_t  30.6284   19.8161   18.8043   18.2953   18.6547    10.7452   2.8706   2.7663   2.7092   2.7559
  Stochastic    Σ̂_t  21.9108   5.6811    6.4732    5.9433    5.4722     3.9751    0.8721   1.1480   0.9104   0.9013
                Σ̃_t  30.7955   19.8314   18.8237   18.3434   18.6767    11.1149   2.8719   2.7691   2.7142   2.7584
"Exponentially decaying"
  Constant      Σ̂_t  21.9057   6.0626    6.7715    6.1573    5.7617     4.0142    0.9106   1.1898   0.9453   0.9388
                Σ̃_t  30.8728   20.3802   19.2709   18.7715   19.1262    10.8858   2.9381   2.8260   2.7707   2.8154
  Deterministic Σ̂_t  21.9147   6.0709    6.7737    6.1589    5.7637     4.0156    0.9112   1.1896   0.9450   0.9387
                Σ̃_t  30.7746   20.3564   19.2632   18.7460   19.1173    10.6538   2.9354   2.8247   2.7673   2.8140
  Stochastic    Σ̂_t  21.9118   6.0636    6.7730    6.1585    5.7630     4.0151    0.9106   1.1897   0.9451   0.9388
                Σ̃_t  30.9430   20.3820   19.2839   18.8017   19.1405    11.0295   2.9381   2.8291   2.7745   2.8173
Table 4: Mean relative loss for the factor-based spot volatility matrix estimation

β dynamics               Naive     Hard      Soft      AL        SCAD
"Banding"
  Constant      Σ̂_t^Y  1.1192    0.5417    0.7802    0.7762    0.4391
                Σ̃_t^Y  2.2280    1.7243    1.4939    1.4478    1.4654
  Deterministic Σ̂_t^Y  1.1207    0.5257    0.7823    0.7775    0.4371
                Σ̃_t^Y  2.2287    1.7182    1.4882    1.4385    1.4586
  Stochastic    Σ̂_t^Y  1.1208    0.5389    0.7829    0.7780    0.4406
                Σ̃_t^Y  2.2273    1.7279    1.4986    1.4544    1.4719
"Block-diagonal"
  Constant      Σ̂_t^Y  1.1192    0.3842    0.3650    0.3962    0.3249
                Σ̃_t^Y  1.7146    0.8421    0.7938    0.7176    0.7514
  Deterministic Σ̂_t^Y  1.1201    0.3840    0.3651    0.3958    0.3241
                Σ̃_t^Y  1.7152    0.8410    0.7911    0.7155    0.7486
  Stochastic    Σ̂_t^Y  1.1202    0.3868    0.3678    0.3983    0.3272
                Σ̃_t^Y  1.7146    0.8435    0.7949    0.7188    0.7528
"Exponentially decaying"
  Constant      Σ̂_t^Y  1.1192    0.4086    0.3726    0.4055    0.3347
                Σ̃_t^Y  1.7338    0.8636    0.8047    0.7272    0.7619
  Deterministic Σ̂_t^Y  1.1201    0.4079    0.3727    0.4051    0.3339
                Σ̃_t^Y  1.7344    0.8614    0.8016    0.7249    0.7589
  Stochastic    Σ̂_t^Y  1.1203    0.4111    0.3754    0.4075    0.3370
                Σ̃_t^Y  1.7338    0.8645    0.8058    0.7283    0.7631
§ EMPIRICAL STUDY
We apply the proposed methods to the intraday returns of the S&P 500 component stocks to demonstrate the effectiveness of our nonparametric spot volatility matrix estimation in revealing time-varying patterns. We consider the 5-minute returns of the S&P 500 stocks collected in September 2008. On September 15, Lehman Brothers filed for bankruptcy, causing shockwaves throughout the global financial system. Hence, it is interesting to examine how the spot volatility structure of the returns evolved during this one-month period. In addition, to demonstrate the effectiveness of our model with observed risk factors in explaining the systematic component of the dependence structure, we also collect the 5-minute returns of twelve factors. The first three factors are constructed as in <cit.> as proxies for the market (MKT), small-minus-big market capitalisation (SMB) and high-minus-low price-earnings ratio (HML) factors. The other nine factors are the widely available sector SPDR ETFs, which are intended to track the following nine largest S&P sectors: Energy (XLE), Materials (XLB), Industrials (XLI), Consumer Discretionary (XLY), Consumer Staples (XLP), Health Care (XLV), Financial (XLF), Information Technology (XLK) and Utilities (XLU). We sort our stocks according to their GICS (Global Industry Classification Standard) codes, so that they are grouped by sector in the above order. Consequently, the correlation (sub)matrix for the stocks within each sector corresponds to a block on the diagonal of the full correlation matrix <cit.>.
We only use stocks that were included in the S&P 500 index and whose GICS codes remained unchanged in September 2008. We also exclude stocks that do not belong to any of the above nine sectors. This leaves us with a total of p=482 stocks. All the returns are synchronised via the previous-tick subsampling technique <cit.>, and overnight returns are removed because of potential dividends and stock splits. Consequently, we have 1638 time series observations for each of the 482 stocks. For 5-minute returns, we may assume that the potential impact of microstructure noise is negligible. The smoothing parameter in our kernel estimation is chosen as h=2/252 (equivalent to 2 trading days) [We experimented with three bandwidth choices, namely, 1 day, 2 days and 3 days. We found that h=1/252 (h=3/252) produced clearly undersmoothed (oversmoothed) time series of the estimated deciles of the cross-sectional distribution of the variances and pairwise correlations of our returns, whereas h=2/252 seemed most reasonable. Our qualitative conclusion is unaffected by choices of h within the range of 1 to 3 days.].
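A minimal sketch of the previous-tick rule is given below; the function name and array layout are our own illustrative choices.

```python
# A sketch of previous-tick subsampling: for each grid time (e.g., every
# 5 minutes), take the last observed price at or before that time.
# `ticks` is a sorted array of observation times with prices `prices`.
import numpy as np

def previous_tick(ticks, prices, grid):
    idx = np.searchsorted(ticks, grid, side="right") - 1
    idx = np.clip(idx, 0, len(ticks) - 1)   # guard against grid times before the first tick
    return prices[idx]
```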
We start by estimating the spot volatility matrices of the total returns (i.e., the observed returns) without incorporating the observed factors or applying any shrinkage. To visualise the potential time variation of the estimated spot matrices, as in <cit.>, we plot the time series of deciles of the distribution of the estimated spot variances and of the pairwise correlations [The nine decile levels we use in this study are the 10th, 20th, ..., and 90th percentiles.]. The patterns of the spot variances and correlations in Figure <ref>(a) and (b) reveal clear evidence of time variation over our sampling period. We note that the distributions of the variances are relatively narrow and stay low during the first few days of the month. However, close to Lehman Brothers' announcement on the 15th, they start to rise and widen quite rapidly, reaching their peak around the 17th and 18th. The spot variances at the peak are much higher than those in the earlier days of the month. The distributions return to their earlier level in the following week. In contrast, the distributions of the pairwise spot correlations also start to shift up around the same time, but quickly reach their peak on the 16th (only one day after the bankruptcy news) and then dip to a relatively low point around the 19th before returning to the earlier level. Such time-varying features in the dynamic covariance structure are quite interesting and sensible, reflecting the impact of market news. Hence, our proposed spot volatility matrix estimation methodology provides a useful tool for revealing such dynamics.
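The decile series can be computed as in the following sketch (illustrative names), given a list of estimated spot volatility matrices on the time grid.

```python
# A sketch of the decile plots: at each grid time, compute the nine deciles
# of the cross-sectional distribution of spot variances and of the pairwise
# spot correlations implied by an estimated spot volatility matrix S.
import numpy as np

def decile_series(S_list, qs=np.arange(0.1, 1.0, 0.1)):
    var_dec, corr_dec = [], []
    for S in S_list:                         # one p x p matrix per grid time
        d = np.sqrt(np.diag(S))
        R = S / np.outer(d, d)               # spot correlation matrix
        off = R[np.triu_indices_from(R, k=1)]
        var_dec.append(np.quantile(np.diag(S), qs))
        corr_dec.append(np.quantile(off, qs))
    return np.array(var_dec), np.array(corr_dec)
```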
To examine whether it is appropriate to directly apply shrinkage techniques to the spot volatility matrices of the total returns, following <cit.> we plot in Figure <ref>(a) and (b) their sparsity patterns on the 16th and the 19th of September [Recall that as in <cit.> our tuning parameter used for each pairwise spot covariance in our shrinkage method is proportional to the product of the spot standard deviations of the returns of that pair of assets. Therefore, the sparsity pattern is effectively determined by the spot correlation matrix.]. The deep blue dots correspond to the locations of pairwise correlations that are at least 0.15, whereas the white dots correspond to those smaller than 0.15. Note that the covariance structure of the total returns is very dense on these two days. Therefore, it is not appropriate to directly apply the shrinkage technique as in Sections <ref> and <ref>. Meanwhile, although both are quite dense, we can still clearly see their differences. Consistent with our observation from the decile plots of the correlations, we can see that the plot for the 16th is almost completely covered by blue dots, but the plot for the 19th in contrast has significantly more areas covered in white.
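The sparsity-pattern plots can be reproduced schematically as below, marking entries whose spot correlation is at least 0.15 in absolute value; the function name and the matplotlib rendering are our own choices.

```python
# A sketch of the sparsity-pattern plot: dots mark entries (i, j) with
# |spot correlation| >= 0.15, mirroring the deep-blue dots in the figure.
import numpy as np
import matplotlib.pyplot as plt

def sparsity_pattern(S, cutoff=0.15):
    d = np.sqrt(np.diag(S))
    R = S / np.outer(d, d)                   # spot correlation matrix
    plt.spy(np.abs(R) >= cutoff, markersize=0.5)
    plt.title(f"Entries with |correlation| >= {cutoff}")
    plt.show()
```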
We next incorporate the twelve observed factors into the large spot volatility matrix estimation, as suggested in Section <ref>. In particular, we are interested in estimating the spot idiosyncratic volatility matrix, which is expected to satisfy the sparsity restriction. To save space, we only report results using the SCAD shrinkage, owing to its satisfactory performance in our simulations. In Figure <ref>(c) and (d), we plot the deciles of the estimated spot idiosyncratic variances and correlations over the trading days. In Figure <ref>(c), we observe a significant upward shift of the distribution of the spot variances of the idiosyncratic returns around the time of Lehman Brothers' bankruptcy, indicating that the observed factors may not fully capture the time variation of the spot variances. In contrast, the deciles of the spot correlations in Figure <ref>(d) are quite flat throughout the entire month, suggesting that the systematic factors explain the time variation in the distribution of the pairwise correlations better than that of the variances.
We finally plot the sparsity patterns of the two estimated spot idiosyncratic volatility matrices on 16 and 19 September in Figure <ref>(c) and (d), respectively. Unlike Figure <ref>(a) and (b), the estimated spot idiosyncratic volatility matrices are highly sparse on both days. This is consistent with our observation from Figure <ref>(d), confirming that the observed factors can effectively account for the time variation in the spot covariance structure of the returns. Meanwhile, we also note that the two idiosyncratic volatility matrices are clearly not diagonal and still display some visible time variation. Lastly, it is worth mentioning that the estimated spot idiosyncratic volatility matrices do not exhibit significant correlations within the blocks along the diagonal, except for some very limited activity in the lower right corner of the two matrices (which corresponds to the XLU sector according to our sorting).
§ CONCLUSION
We developed nonparametric estimation methods for large spot volatility matrices under a uniform sparsity assumption, allowing for microstructure noise and observed common risk factors, and employing kernel smoothing combined with generalised shrinkage. In each scenario we obtained uniform convergence rates for the estimated covariance matrices, which reflect the smoothness and sparsity assumptions imposed. The simulation results show that the proposed estimation methods work well in finite samples for both noise-free and noise-contaminated data, and the empirical study demonstrated their effectiveness on 5-minute S&P 500 stock return data. Several issues can be explored further. For example, it would be worthwhile to study the spot precision matrix estimation, which is briefly discussed in Appendix C.1 of the supplement, and to explore its application to optimal portfolio choice.
§ ACKNOWLEDGEMENTS
The authors would like to thank a Co-Editor and two reviewers for their constructive comments, which helped to improve the article. The first author's research was partly supported by the BA Talent Development Award (No. TDA21\210027). The second author's research was partly supported by the BA/Leverhulme Small Research Grant funded by the Leverhulme Trust (No. SRG1920/100603).
§ REFERENCES

Aït-Sahalia, Y. & J. Jacod (2014) High-Frequency Financial Econometrics. Princeton University Press.
Aït-Sahalia, Y., I. Kalnina & D. Xiu (2020) High-frequency factor models and regressions. Journal of Econometrics 216, 86–105.
Aït-Sahalia, Y. & D. Xiu (2017) Using principal component analysis to estimate a high dimensional factor model with high-frequency data. Journal of Econometrics 201, 384–399.
Aït-Sahalia, Y. & D. Xiu (2019) Principal component analysis of high-frequency data. Journal of the American Statistical Association 114, 287–303.
Andersen, T. G. & T. Bollerslev (1998) Answering the skeptics: yes, standard volatility models do provide accurate forecasts. International Economic Review 39, 885–905.
Andersen, T. G., T. Bollerslev & F. X. Diebold (2010) Parametric and nonparametric volatility measurement. In Handbook of Financial Econometrics: Tools and Techniques (Y. Aït-Sahalia and L. P. Hansen, eds.), 67–137.
Andersen, T. G., T. Bollerslev, F. X. Diebold & P. Labys (2003) Modeling and forecasting realized volatility. Econometrica 71, 579–625.
Bai, Z. & J. W. Silverstein (2010) Spectral Analysis of Large Dimensional Random Matrices. Springer Series in Statistics, Springer.
Barndorff-Nielsen, O. E. & N. Shephard (2002) Econometric analysis of realized volatility and its use in estimating stochastic volatility models. Journal of the Royal Statistical Society Series B 64, 253–280.
Barndorff-Nielsen, O. E. & N. Shephard (2004) Econometric analysis of realized covariation: high frequency based covariance, regression and correlation in financial economics. Econometrica 72, 885–925.
Barndorff-Nielsen, O. E., P. R. Hansen, A. Lunde & N. Shephard (2008) Designing realised kernels to measure the ex-post variation of equity prices in the presence of noise. Econometrica 76, 1481–1536.
Bibinger, M., N. Hautsch, P. Malec & M. Reiss (2019) Estimating the spot covariation of asset prices - statistical theory and empirical evidence. Journal of Business and Economic Statistics 37(3), 419–435.
Bickel, P. & E. Levina (2008) Covariance regularization by thresholding. Annals of Statistics 36, 2577–2604.
Cai, T. T., J. Hu, Y. Li & X. Zheng (2020) High-dimensional minimum variance portfolio estimation based on high-frequency data. Journal of Econometrics 214, 482–494.
Cai, T. T. & H. H. Zhou (2012) Optimal rates of convergence for sparse covariance matrix estimation. Annals of Statistics 40, 2389–2420.
Chang, J., Q. Hu, C. Liu & C. Tang (2021) Optimal covariance matrix estimation for high-dimensional noise in high-frequency data. Working paper available at <https://arxiv.org/abs/1812.08217>.
Chen, J., D. Li & O. Linton (2019) A new semiparametric estimation approach of large dynamic covariance matrices with multiple conditioning variables. Journal of Econometrics 212, 155–176.
Chen, D., P. A. Mykland & L. Zhang (2020) The five trolls under the bridge: principal component analysis with asynchronous and noisy high frequency data. Journal of the American Statistical Association 115, 1960–1977.
Chen, X., M. Xu & W. Wu (2013) Covariance and precision matrix estimation for high-dimensional time series. Annals of Statistics 41, 2994–3021.
Chen, Z. & C. Leng (2016) Dynamic covariance models. Journal of the American Statistical Association 111, 1196–1207.
Christensen, K., S. Kinnebrock & M. Podolskij (2010) Pre-averaging estimators of the ex-post covariance matrix in noisy diffusion models with non-synchronous data. Journal of Econometrics 159, 116–133.
Dai, C., K. Lu & D. Xiu (2019) Knowing factors or factor loadings, or neither? Evaluating estimators for large covariance matrices with noisy and asynchronous data. Journal of Econometrics 208, 43–79.
Epps, T. W. (1979) Comovements in stock prices in the very short run. Journal of the American Statistical Association 74, 291–298.
Fan, J., Y. Fan & J. Lv (2007) Aggregation of nonparametric estimators for volatility matrix. Journal of Financial Econometrics 5, 321–357.
Fan, J., A. Furger & D. Xiu (2016) Incorporating global industrial classification standard into portfolio allocation: a simple factor-based large covariance matrix estimator with high frequency data. Journal of Business and Economic Statistics 34, 489–503.
Fan, J. & I. Gijbels (1996) Local Polynomial Modelling and Its Applications. Chapman and Hall, London.
Fan, J., Y. Liao & M. Mincheva (2011) High-dimensional covariance matrix estimation in approximate factor models. Annals of Statistics 39, 3320–3356.
Fan, J., Y. Liao & M. Mincheva (2013) Large covariance estimation by thresholding principal orthogonal complements (with discussion). Journal of the Royal Statistical Society, Series B 75, 603–680.
Fan, J. & Y. Wang (2008) Spot volatility estimation for high-frequency data. Statistics and Its Interface 1, 279–288.
Figueroa-López, J. E. & C. Li (2020) Optimal kernel estimation of spot volatility of stochastic differential equations. Stochastic Processes and Their Applications 130, 4693–4720.
Hayashi, T. & N. Yoshida (2005) On covariance estimation of non-synchronously observed diffusion processes. Bernoulli 11, 359–379.
Jacod, J., Y. Li, P. A. Mykland, M. Podolskij & M. Vetter (2009) Microstructure noise in the continuous case: the pre-averaging approach. Stochastic Processes and Their Applications 119, 2249–2276.
Jacod, J. & P. Protter (2012) Discretization of Processes. Springer.
Kalnina, I. & O. Linton (2008) Estimating quadratic variation consistently in the presence of endogenous and diurnal measurement error. Journal of Econometrics 147, 47–59.
Kanaya, S. & D. Kristensen (2016) Estimation of stochastic volatility models by nonparametric filtering. Econometric Theory 32, 861–916.
Kim, D., Y. Wang & J. Zou (2016) Asymptotic theory for large volatility matrix estimation based on high-frequency financial data. Stochastic Processes and Their Applications 126, 3527–3577.
Kong, X. (2018) On the systematic and idiosyncratic volatility with large panel high-frequency data. Annals of Statistics 46, 1077–1108.
Kristensen, D. (2010) Nonparametric filtering of the realized spot volatility: a kernel-based approach. Econometric Theory 26, 60–93.
Lam, C. & P. Feng (2018) A nonparametric eigenvalue-regularized integrated covariance matrix estimator for asset return data. Journal of Econometrics 206, 226–257.
Li, Q. & J. Racine (2007) Nonparametric Econometrics. Princeton University Press, Princeton.
Mykland, P. A. & L. Zhang (2006) ANOVA for diffusions and Itô processes. Annals of Statistics 34, 1931–1963.
Park, S., S. Y. Hong & O. Linton (2016) Estimating the quadratic covariation matrix for asynchronously observed high frequency stock returns corrupted by additive measurement error. Journal of Econometrics 191, 325–347.
Podolskij, M. & M. Vetter (2009) Estimation of volatility functionals in the simultaneous presence of microstructure noise and jumps. Bernoulli 15, 634–658.
Reiß, M., V. Todorov & G. Tauchen (2015) Nonparametric test for a constant beta between Itô semi-martingales based on high-frequency data. Stochastic Processes and Their Applications 125, 2955–2988.
Revuz, D. & M. Yor (1999) Continuous Martingales and Brownian Motion. Grundlehren der mathematischen Wissenschaften 293, Springer.
Shephard, N. (2005) Stochastic Volatility: Selected Readings. Oxford University Press.
Tao, M., Y. Wang & H. H. Zhou (2013) Optimal sparse volatility matrix estimation for high-dimensional Itô processes with measurement errors. Annals of Statistics 41, 1816–1864.
Wang, Y. & J. Zou (2010) Vast volatility matrix estimation for high-frequency financial data. Annals of Statistics 38, 943–978.
Xia, N. & X. Zheng (2018) On the inference about the spectral distribution of high-dimensional covariance matrix based on high-frequency noisy observations. Annals of Statistics 46, 500–525.
Xiao, Z. & O. Linton (2002) A nonparametric prewhitened covariance estimator. Journal of Time Series Analysis 23, 215–250.
Zhang, L. (2011) Estimating covariation: Epps effect, microstructure noise. Journal of Econometrics 160, 33–47.
Zhang, L., P. A. Mykland & Y. Aït-Sahalia (2005) A tale of two time scales: determining integrated volatility with noisy high-frequency data. Journal of the American Statistical Association 100, 1394–1411.
Zheng, X. & Y. Li (2011) On the estimation of integrated covariance matrices of high dimensional diffusion processes. Annals of Statistics 39, 3121–3151.
Zu, Y. & H. P. Boswijk (2014) Estimating spot volatility with high-frequency financial data. Journal of Econometrics 181, 117–135.
§ APPENDIX A: PROOFS OF THE MAIN RESULTS
In this appendix, we give the proofs of the main theorems. We start with four propositions whose proofs are available in Appendix B of the supplement.
Suppose that Assumptions <ref> and <ref>(i)(ii) are satisfied.
Then, we have
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|Σ̂_{ij,t}-Σ_{ij,t}| = O_P(ζ_{Δ,p}),
where ζ_{Δ,p}=h^γ+[Δ log(p∨Δ^{-1})/h]^{1/2}.
Suppose that Assumptions <ref>, <ref>(i), <ref> and <ref>(i)(ii) are satisfied, and Assumption <ref>(ii) holds with Δ^{-1} replaced by N. Then, we have
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|Σ̃_{ij,t}-Σ_{ij,t}| = O_P(ζ_{N,p}^∗+ν_{Δ,p,N}),
where ζ_{N,p}^∗ and ν_{Δ,p,N} are defined in Assumption <ref>(iii).
Suppose that Assumptions <ref>, <ref>(i), <ref> and <ref> are satisfied. Then, we have
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|Ω̂_{ij}(t)-Ω_{ij}(t)| = O_P(δ_{Δ,p}),
where δ_{Δ,p}=h_1^{γ_1}+[Δ log(p∨Δ^{-1})/h_1]^{1/2}.
Suppose that Assumptions <ref>(i)(ii) and <ref> are satisfied. Then, we have
sup_{0≤ t≤ T}‖Σ̂_t^Y-Σ_t^Y‖_max = O_P(ζ_{Δ,p}), sup_{0≤ t≤ T}‖Σ̂_t^F-Σ_t^F‖_max = O_P(ζ_{Δ,p}), sup_{0≤ t≤ T}‖Σ̂_t^{YF}-Σ_t^{YF}‖_max = O_P(ζ_{Δ,p}),
where ζ_{Δ,p} is defined as in Proposition <ref>.
Proof of Theorem <ref>. By the definition of Σ̂_t^s and the property of s_ρ(·), we readily have that
sup_{0≤ t≤ T}‖Σ̂_t^s-Σ_t‖ ≤ sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1}^p|Σ̂_{ij,t}^s-Σ_{ij,t}|
= sup_{0≤ t≤ T} max_{1≤ i≤ p}|Σ̂_{ii,t}-Σ_{ii,t}| + sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1,≠ i}^p|s_{ρ_1(t)}(Σ̂_{ij,t})I(|Σ̂_{ij,t}|>ρ_1(t))-Σ_{ij,t}|
= sup_{0≤ t≤ T} max_{1≤ i≤ p}|Σ̂_{ii,t}-Σ_{ii,t}| + sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1,≠ i}^p|s_{ρ_1(t)}(Σ̂_{ij,t})I(|Σ̂_{ij,t}|>ρ_1(t)) - Σ_{ij,t}I(|Σ̂_{ij,t}|>ρ_1(t)) - Σ_{ij,t}I(|Σ̂_{ij,t}|≤ρ_1(t))|
≤ sup_{0≤ t≤ T} max_{1≤ i≤ p}|Σ̂_{ii,t}-Σ_{ii,t}| + sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1,≠ i}^p|s_{ρ_1(t)}(Σ̂_{ij,t})-Σ̂_{ij,t}| I(|Σ̂_{ij,t}|>ρ_1(t)) + sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1,≠ i}^p|Σ̂_{ij,t}-Σ_{ij,t}| I(|Σ̂_{ij,t}|>ρ_1(t)) + sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1,≠ i}^p|Σ_{ij,t}| I(|Σ̂_{ij,t}|≤ρ_1(t))
=: Π_1+Π_2+Π_3+Π_4.
Define the event
𝒢(M) = {max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|Σ̂_{ij,t}-Σ_{ij,t}| ≤ Mζ_{Δ,p}},
where M is a positive constant. For any small ϵ>0, by (<ref>), we may find a sufficiently large constant M_ϵ>0 such that
𝖯(𝒢(M_ϵ)) ≥ 1-ϵ.
By property (iii) of the shrinkage function and (<ref>), we have
Π_2 ≤ sup_{0≤ t≤ T}ρ_1(t)[max_{1≤ i≤ p}∑_{j=1}^p I(|Σ̂_{ij,t}|>ρ_1(t))]
and
Π_3 ≤ M_ϵζ_{Δ,p}[sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1}^p I(|Σ̂_{ij,t}|>ρ_1(t))]
conditional on the event 𝒢(M_ϵ). By the reverse triangle inequality and Proposition <ref>,
|Σ̂_{ij,t}| ≤ |Σ_{ij,t}|+M_ϵζ_{Δ,p}
on 𝒢(M_ϵ). Letting C_M=2M_ϵ in Assumption <ref>(iii), as {Σ_t: 0≤ t≤ T}∈𝒮(q,ϖ(p),T), we have
Π_2+Π_3 ≤ ζ_{Δ,p}(C_M+M_ϵ)[sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1}^p I(|Σ̂_{ij,t}|>C_Mζ_{Δ,p})] ≤ ζ_{Δ,p}(C_M+M_ϵ)[sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1}^p I(|Σ_{ij,t}|>M_ϵζ_{Δ,p})] = O_P(ζ_{Δ,p})[sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1}^p|Σ_{ij,t}|^q/(M_ϵζ_{Δ,p})^q] = O_P(Λϖ(p)ζ_{Δ,p}^{1-q}) = O_P(ϖ(p)ζ_{Δ,p}^{1-q})
on the event 𝒢(M_ϵ), where C_M is defined in Assumption <ref>(iii). Note that the events {|Σ̂_{ij,t}|≤ρ_1(t)} and 𝒢(M_ϵ) jointly imply that {|Σ_{ij,t}|≤(C_M+M_ϵ)ζ_{Δ,p}}. Then, we may show that
Π_4 ≤ sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1}^p|Σ_{ij,t}| I(|Σ_{ij,t}|≤(C_M+M_ϵ)ζ_{Δ,p}) ≤ (C_M+M_ϵ)^{1-q}ζ_{Δ,p}^{1-q} sup_{0≤ t≤ T} max_{1≤ i≤ p}∑_{j=1}^p|Σ_{ij,t}|^q = O_P(Λϖ(p)ζ_{Δ,p}^{1-q}) = O_P(ϖ(p)ζ_{Δ,p}^{1-q}).
By Proposition <ref>, we readily have that
Π_1 = O_P(ζ_{Δ,p}) = O_P(ϖ(p)ζ_{Δ,p}^{1-q}).
By (<ref>)–(<ref>), and letting ϵ→0 in (<ref>), we complete the proof of Theorem <ref>. ▪
Proof of Theorem <ref>. The proof is similar to that of Theorem <ref>, with Proposition <ref> replacing Proposition <ref>. Details are omitted to save space. ▪
Proof of Theorem <ref>. The proof is similar to that of Theorem <ref>, with Proposition <ref> replacing Proposition <ref>. Details are omitted to save space. ▪
Proof of Theorem <ref>. By Proposition <ref> and the definition of Σ̂_{ij,t}^X in (<ref>), we may show that
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|Σ̂_{ij,t}^X-Σ_{ij,t}^X| = O_P(ζ_{Δ,p}).
With (<ref>), following the proof of Theorem <ref>, we complete the proof of (<ref>).
We next turn to the proof of (<ref>). Note that
sup_{0≤ t≤ T}‖Σ̂_t^{Y,s}-Σ_t^Y‖_{Σ_t^Y}^2 ≤ 2sup_{0≤ t≤ T}[‖Σ̂_t^{X,s}-Σ_t^X‖_{Σ_t^Y}^2+‖β̂(t)Σ̂_t^Fβ̂(t)^⊺-β(t)Σ_t^Fβ(t)^⊺‖_{Σ_t^Y}^2].
For any p× p matrix Σ, since all the eigenvalues of Σ_t^Y are strictly larger than a positive constant,
‖Σ‖_{Σ_t^Y}^2 = (1/p)‖(Σ_t^Y)^{-1/2}Σ(Σ_t^Y)^{-1/2}‖_F^2 ≤ (C/p)‖Σ‖_F^2,
where C>0 is a generic constant whose value may change from line to line. By (<ref>) and (<ref>), we prove
sup_{0≤ t≤ T}‖Σ̂_t^{X,s}-Σ_t^X‖_{Σ_t^Y}^2 ≤ (C/p)sup_{0≤ t≤ T}‖Σ̂_t^{X,s}-Σ_t^X‖_F^2 ≤ C sup_{0≤ t≤ T}‖Σ̂_t^{X,s}-Σ_t^X‖^2 = O_P([ϖ(p)ζ_{Δ,p}^{1-q}]^2).
By the definition of β̂(t) in (<ref>) and Proposition <ref>, we readily have that
max_{1≤ i≤ p} sup_{0≤ t≤ T}‖β̂_i(t)-β_i(t)‖_2 = O_P(ζ_{Δ,p}).
Write 𝐃_t^β=β̂(t)-β(t) and 𝐃_t^F=Σ̂_t^F-Σ_t^F. Note that
β̂(t)Σ̂_t^Fβ̂(t)^⊺-β(t)Σ_t^Fβ(t)^⊺ = 𝐃_t^β𝐃_t^F(𝐃_t^β)^⊺+𝐃_t^βΣ_t^F(𝐃_t^β)^⊺+𝐃_t^β𝐃_t^Fβ(t)^⊺+𝐃_t^βΣ_t^Fβ(t)^⊺+β(t)𝐃_t^F(𝐃_t^β)^⊺+β(t)Σ_t^F(𝐃_t^β)^⊺+β(t)𝐃_t^Fβ(t)^⊺.
By (<ref>), (<ref>) and Proposition <ref>, we have
sup_{0≤ t≤ T}‖𝐃_t^β𝐃_t^F(𝐃_t^β)^⊺‖_{Σ_t^Y}^2 ≤ C sup_{0≤ t≤ T}(1/p)‖𝐃_t^β𝐃_t^F(𝐃_t^β)^⊺‖_F^2 ≤ (C/p)sup_{0≤ t≤ T}‖𝐃_t^β‖_F^4 sup_{0≤ t≤ T}‖𝐃_t^F‖^2 = O_P(pζ_{Δ,p}^6).
Similarly, we can show that
sup_{0≤ t≤ T}‖𝐃_t^βΣ_t^F(𝐃_t^β)^⊺‖_{Σ_t^Y}^2 ≤ C sup_{0≤ t≤ T}(1/p)‖𝐃_t^β‖_F^4 = O_P(pζ_{Δ,p}^4).
By (<ref>), Assumption <ref>(iv) and the Sherman-Morrison-Woodbury formula, we may show that
sup_{0≤ t≤ T}‖β(t)^⊺(Σ_t^Y)^{-1}β(t)‖ = O_P(1).
Using (<ref>), (<ref>) and Proposition <ref>, we have
sup_{0≤ t≤ T}‖𝐃_t^β𝐃_t^Fβ(t)^⊺‖_{Σ_t^Y}^2 = (1/p)sup_{0≤ t≤ T} trace{𝐃_t^F(𝐃_t^β)^⊺(Σ_t^Y)^{-1}𝐃_t^β𝐃_t^Fβ(t)^⊺(Σ_t^Y)^{-1}β(t)} ≤ (C/p)sup_{0≤ t≤ T}‖𝐃_t^β‖_F^2 sup_{0≤ t≤ T}‖𝐃_t^F‖^2 sup_{0≤ t≤ T}‖β(t)^⊺(Σ_t^Y)^{-1}β(t)‖ ≤ (C/p)sup_{0≤ t≤ T}‖𝐃_t^β‖_F^2 sup_{0≤ t≤ T}‖𝐃_t^F‖^2 = O_P(ζ_{Δ,p}^4),
and
sup_{0≤ t≤ T}‖β(t)𝐃_t^F(𝐃_t^β)^⊺‖_{Σ_t^Y}^2 = O_P(ζ_{Δ,p}^4).
Similar to the proof of (<ref>), we also have
sup_{0≤ t≤ T}‖𝐃_t^βΣ_t^Fβ(t)^⊺‖_{Σ_t^Y}^2 ≤ (C/p)sup_{0≤ t≤ T}‖𝐃_t^β‖_F^2 sup_{0≤ t≤ T}‖Σ_t^F‖^2 = O_P(ζ_{Δ,p}^2), and sup_{0≤ t≤ T}‖β(t)Σ_t^F(𝐃_t^β)^⊺‖_{Σ_t^Y}^2 = O_P(ζ_{Δ,p}^2).
By (<ref>) and Proposition <ref>, we may show that
sup_{0≤ t≤ T}‖β(t)𝐃_t^Fβ(t)^⊺‖_{Σ_t^Y}^2 = (1/p)sup_{0≤ t≤ T} trace{𝐃_t^Fβ(t)^⊺(Σ_t^Y)^{-1}β(t)𝐃_t^Fβ(t)^⊺(Σ_t^Y)^{-1}β(t)} ≤ (C/p)sup_{0≤ t≤ T}‖𝐃_t^F‖^2 sup_{0≤ t≤ T}‖β(t)^⊺(Σ_t^Y)^{-1}β(t)‖^2 = O_P(ζ_{Δ,p}^2/p).
With (<ref>), (<ref>) and (<ref>)–(<ref>), we have
sup_{0≤ t≤ T}‖β̂(t)Σ̂_t^Fβ̂(t)^⊺-β(t)Σ_t^Fβ(t)^⊺‖_{Σ_t^Y}^2 = O_P(pζ_{Δ,p}^4+ζ_{Δ,p}^2).
By virtue of (<ref>) and (<ref>), we complete the proof of (<ref>). ▪
Supplement to “Nonparametric Estimation of Large Spot Volatility Matrices
for High-Frequency Financial Data"
In this supplement, we provide the detailed proofs of the propositions stated in Appendix A, discuss the spot precision matrix estimation, address the asynchronicity issue and report additional simulation results.
§ APPENDIX B: PROOFS OF TECHNICAL RESULTS
As discussed in Remark 1, the local boundedness condition in Assumption 1(i) can be strengthened to the following uniform boundedness condition:
max_{1≤ i≤ p} sup_{0≤ s≤ T}|μ_{i,s}| ≤ C_μ<∞, max_{1≤ i≤ p} sup_{0≤ t≤ T}Σ_{ii,t} ≤ C_Σ<∞,
with probability one. Throughout this appendix, we let C denote a generic positive constant whose value may change from line to line.
Proof of Proposition A.1. Throughout this proof, we let ζ_{Δ,p}^∗=[Δ log(p∨Δ^{-1})/h]^{1/2}. By (2.1), we have
(Δ X_{i,k})(Δ X_{j,k}) = (∫_{t_{k-1}}^{t_k}μ_{i,s}ds+∑_{l=1}^p∫_{t_{k-1}}^{t_k}σ_{il,s}dW_{l,s})(∫_{t_{k-1}}^{t_k}μ_{j,u}du+∑_{l=1}^p∫_{t_{k-1}}^{t_k}σ_{jl,u}dW_{l,u})
= (∫_{t_{k-1}}^{t_k}μ_{i,s}ds)(∫_{t_{k-1}}^{t_k}μ_{j,u}du) + (∫_{t_{k-1}}^{t_k}∑_{l=1}^pσ_{il,s}dW_{l,s})(∫_{t_{k-1}}^{t_k}μ_{j,u}du) + (∫_{t_{k-1}}^{t_k}μ_{i,s}ds)(∫_{t_{k-1}}^{t_k}∑_{l=1}^pσ_{jl,u}dW_{l,u}) + (∫_{t_{k-1}}^{t_k}∑_{l=1}^pσ_{il,s}dW_{l,s})(∫_{t_{k-1}}^{t_k}∑_{l=1}^pσ_{jl,u}dW_{l,u})
=: M_{ij,k}(1)+M_{ij,k}(2)+M_{ij,k}(3)+M_{ij,k}(4).
This leads to the following decomposition:
∑_{k=1}^n K_h(t_k-t)Δ X_{i,k}Δ X_{j,k} = ∑_{k=1}^n K_h(t_k-t)M_{ij,k}(1)+∑_{k=1}^n K_h(t_k-t)M_{ij,k}(2)+∑_{k=1}^n K_h(t_k-t)M_{ij,k}(3)+∑_{k=1}^n K_h(t_k-t)M_{ij,k}(4).
By (<ref>) and Assumption 2(i)(ii), we readily have that
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|∑_{k=1}^n K_h(t_k-t)M_{ij,k}(1)| ≤ max_{1≤ i,j≤ p} max_{1≤ k≤ n}|M_{ij,k}(1)| sup_{0≤ t≤ T}∑_{k=1}^n K_h(t_k-t) ≤ CΔ sup_{0≤ t≤ T}Δ∑_{k=1}^n K_h(t_k-t) = O_P(Δ) = o_P(ζ_{Δ,p}^∗),
as Δ∑_{k=1}^n K_h(t_k-t) is bounded uniformly over t.
We next show that
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|∑_{k=1}^n K_h(t_k-t)M_{ij,k}(4)-∑_{k=1}^n K_h(t_k-t)∫_{t_{k-1}}^{t_k}Σ_{ij,s}ds| = O_P(ζ_{Δ,p}^∗).
Let dX_{i,t}^∗=∑_{l=1}^pσ_{il,t}dW_{l,t}, Δ X_{i,k}^∗=∫_{t_{k-1}}^{t_k}∑_{l=1}^pσ_{il,s}dW_{l,s}=X_{i,t_k}^∗-X_{i,t_{k-1}}^∗, and let X_{i,t}^∗ be adapted to the underlying filtration (ℱ_t)_{t≥0}. Note that
M_{ij,k}(4) = Δ X_{i,k}^∗Δ X_{j,k}^∗ = (1/2)[(Δ X_{i,k}^∗+Δ X_{j,k}^∗)^2-(Δ X_{i,k}^∗)^2-(Δ X_{j,k}^∗)^2] =: (1/2)[M_{ij,k}^∗(4)-(Δ X_{i,k}^∗)^2-(Δ X_{j,k}^∗)^2].
Hence, to show (<ref>), it is sufficient to prove that
max_{1≤ i≤ p} sup_{0≤ t≤ T}|∑_{k=1}^n K_h(t_k-t)(Δ X_{i,k}^∗)^2-∑_{k=1}^n K_h(t_k-t)∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds| = O_P(ζ_{Δ,p}^∗)
and
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|∑_{k=1}^n K_h(t_k-t)M_{ij,k}^∗(4)-∑_{k=1}^n K_h(t_k-t)∫_{t_{k-1}}^{t_k}Σ_{ij,s}^∗ds| = O_P(ζ_{Δ,p}^∗),
where Σ_{ij,s}^∗ is defined in Assumption 1(ii).
We next only prove (<ref>), as the proof of (<ref>) is analogous. Consider covering the interval [0,T] by disjoint intervals 𝒯_v with centre τ_v^∗ and length d=h^2ζ_{Δ,p}^∗, v=1,2,⋯,V. Observe that
max_{1≤ i≤ p} sup_{0≤ t≤ T}|∑_{k=1}^n K_h(t_k-t)(Δ X_{i,k}^∗)^2-∑_{k=1}^n K_h(t_k-t)∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds|
≤ max_{1≤ i≤ p} max_{1≤ v≤ V}|∑_{k=1}^n K_h(t_k-τ_v^∗)(Δ X_{i,k}^∗)^2-∑_{k=1}^n K_h(t_k-τ_v^∗)∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds|
+ max_{1≤ i≤ p} max_{1≤ v≤ V} sup_{t∈𝒯_v}|∑_{k=1}^n [K_h(t_k-t)-K_h(t_k-τ_v^∗)](Δ X_{i,k}^∗)^2|
+ max_{1≤ i≤ p} max_{1≤ v≤ V} sup_{t∈𝒯_v}|∑_{k=1}^n [K_h(t_k-t)-K_h(t_k-τ_v^∗)]∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds|.
As the kernel function has the compact support [-1,1], we have, for any t∈[0,T],
∑_{k=1}^n K_h(t_k-t)[(Δ X_{i,k}^∗)^2-∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds] = ∑_{k=l(t)}^{u(t)}K_h(t_k-t)[(Δ X_{i,k}^∗)^2-∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds],
where l(t)=⌊(t-h)/Δ⌋∨1 and u(t)=⌊(t+h)/Δ⌋∧ n. Letting 𝒩 be a standard normal random variable, by Lemma 1 in <cit.>, we have
𝖤(exp{ψ(𝒩^2-1)})≤exp{2ψ^2} for |ψ|≤1/4.
Following the argument in the proof of Lemma 3 in <cit.> and using (<ref>), for k=l(τ_v^∗),l(τ_v^∗)+1,⋯,u(τ_v^∗),
𝖤(exp{θ(Δ^{-1}h)^{1/2}K_h(t_k-τ_v^∗)[(Δ X_{i,k}^∗)^2-∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds]} | ℱ_{t_{k-1}}) ≤ exp{2(Δ/h)θ^2 C_Σ^2 K^2((t_k-τ_v^∗)/h)},
where θ satisfies |θ C_Σ(Δ h^{-1})^{1/2}K((t_k-τ_v^∗)/h)| ≤ 1/4 and C_Σ is defined in (<ref>). Consequently, we have
𝖤(exp{θ(Δ^{-1}h)^{1/2}∑_{k=1}^n K_h(t_k-τ_v^∗)[(Δ X_{i,k}^∗)^2-∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds]}) = 𝖤(exp{θ(Δ^{-1}h)^{1/2}∑_{k=l(τ_v^∗)}^{u(τ_v^∗)}K_h(t_k-τ_v^∗)[(Δ X_{i,k}^∗)^2-∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds]}) ≤ exp{2θ^2 C_Σ^2 ν(τ_v^∗)},
where ν(τ_v^∗)=(Δ/h)∑_{k=l(τ_v^∗)}^{u(τ_v^∗)}K^2((t_k-τ_v^∗)/h). By (<ref>), using the Markov inequality and choosing θ=√(log(p∨Δ^{-1}))/[C_Σ^2ν(τ_v^∗)], we can prove that
𝖯(|∑_{k=1}^n K_h(t_k-τ_v^∗)(Δ X_{i,k}^∗)^2-∑_{k=1}^n K_h(t_k-τ_v^∗)∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds|>Mζ_{Δ,p}^∗) ≤ 2exp{-C(M)log(p∨Δ^{-1})},
where C(M) is positive and becomes sufficiently large if we choose M to be large enough. Then, by the Bonferroni inequality, we have
𝖯(max_{1≤ i≤ p} max_{1≤ v≤ V}|∑_{k=1}^n K_h(t_k-τ_v^∗)(Δ X_{i,k}^∗)^2-∑_{k=1}^n K_h(t_k-τ_v^∗)∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds|>Mζ_{Δ,p}^∗) ≤ ∑_{i=1}^p∑_{v=1}^V 2exp{-C(M)log(p∨Δ^{-1})}→0,
where the convergence follows from the fact that pV=o(exp{C(M)log(p∨Δ^{-1})}), as V diverges at a polynomial rate of 1/Δ and C(M) is sufficiently large. This implies that
max_{1≤ i≤ p} max_{1≤ v≤ V}|∑_{k=1}^n K_h(t_k-τ_v^∗)(Δ X_{i,k}^∗)^2-∑_{k=1}^n K_h(t_k-τ_v^∗)∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds| = O_P(ζ_{Δ,p}^∗).
By the smoothness condition on the kernel function in Assumption 2(i), we have
max_{1≤ i≤ p} max_{1≤ v≤ V} sup_{t∈𝒯_v}|∑_{k=1}^n [K_h(t_k-t)-K_h(t_k-τ_v^∗)](Δ X_{i,k}^∗)^2| ≤ max_{1≤ v≤ V} sup_{t∈𝒯_v}|K_h(t_k-t)-K_h(t_k-τ_v^∗)| max_{1≤ i≤ p}∑_{k=1}^n (Δ X_{i,k}^∗)^2 = O(dh^{-2}) max_{1≤ i≤ p}∑_{k=1}^n (Δ X_{i,k}^∗)^2.
Similar to the proof of (<ref>), we may show that
max_{1≤ i≤ p}∑_{k=1}^n (Δ X_{i,k}^∗)^2 ≤ max_{1≤ i≤ p}∫_0^TΣ_{ii,s}ds+o_P(1) = O_P(1),
as T is fixed and Σ_{ii,t} is uniformly bounded by C_Σ. Hence, by the choice of d, we have
max_{1≤ i≤ p} max_{1≤ v≤ V} sup_{t∈𝒯_v}|∑_{k=1}^n [K_h(t_k-t)-K_h(t_k-τ_v^∗)](Δ X_{i,k}^∗)^2| = O_P(ζ_{Δ,p}^∗).
Analogously, we also have
max_{1≤ i≤ p} max_{1≤ v≤ V} sup_{t∈𝒯_v}|∑_{k=1}^n [K_h(t_k-t)-K_h(t_k-τ_v^∗)]∫_{t_{k-1}}^{t_k}Σ_{ii,s}ds| = O_P(ζ_{Δ,p}^∗).
By (<ref>) and (<ref>)–(<ref>), we complete the proof of (<ref>).
By (<ref>), (<ref>) and the Cauchy-Schwarz inequality, we have
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|∑_{k=1}^n K_h(t_k-t)M_{ij,k}(2)|^2 ≤ max_{1≤ i≤ p} sup_{0≤ t≤ T}∑_{k=1}^n K_h(t_k-t)(Δ X_{i,k}^∗)^2 · max_{1≤ j≤ p} sup_{0≤ t≤ T}∑_{k=1}^n K_h(t_k-t)(∫_{t_{k-1}}^{t_k}μ_{j,u}du)^2 = O_P(Δ)· O_P(1) = O_P(Δ),
indicating that
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|∑_{k=1}^n K_h(t_k-t)M_{ij,k}(2)| = O_P(Δ^{1/2}) = o_P(ζ_{Δ,p}^∗),
and similarly,
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|∑_{k=1}^n K_h(t_k-t)M_{ij,k}(3)| = O_P(Δ^{1/2}) = o_P(ζ_{Δ,p}^∗).
With (<ref>), (<ref>), (<ref>) and (<ref>), we prove that
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|∑_{k=1}^n K_h(t_k-t)Δ X_{i,k}Δ X_{j,k}-∑_{k=1}^n K_h(t_k-t)∫_{t_{k-1}}^{t_k}Σ_{ij,s}ds| = O_P(ζ_{Δ,p}^∗).
Since Δ∑_{k=1}^n K_h(t_k-t) is strictly larger than a positive constant uniformly over t, by (<ref>), we readily have that
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|Σ̂_{ij,t}-∑_{k=1}^n K_h^∗(t_k-t)∫_{t_{k-1}}^{t_k}Σ_{ij,s}ds| = O_P(ζ_{Δ,p}^∗).
On the other hand, by (2.6) in Assumption 1(ii), we may show that
max_{1≤ i,j≤ p} sup_{0≤ t≤ T}|∑_{k=1}^n K_h^∗(t_k-t)∫_{t_{k-1}}^{t_k}Σ_{ij,s}ds-Σ_{ij,t}| = O_P(h^γ).
Then we complete the proof of (A.1) by virtue of (<ref>) and (<ref>). ▪
We next turn to the proof of Proposition A.2, in which a crucial step is to derive a uniform consistency result for X̃_{i,τ}, stated in Lemma <ref> below.
Suppose that Assumptions 1(i), 3 and 4(i)(ii) are satisfied. Then we have
max_{1≤ i≤ p} max_{0≤ l≤ N}|X̃_{i,τ_l}-X_{i,τ_l}| = O_P(√(log(p∨Δ^{-1}))[b^{1/2}+(Δ^{-1}b)^{-1/2}]).
Proof of Lemma <ref>. By the definition of 𝐗̃_τ in (3.2), we write
X̃_{i,τ_l}-X_{i,τ_l} = (T/n)∑_{k=1}^n L_b^†(t_k-τ_l)Z_{i,t_k}-X_{i,τ_l} = Π_{i,l}^†(1)+Π_{i,l}^†(2)+Π_{i,l}^†(3)+Π_{i,l}^†(4),
where
Π_{i,l}^†(1) = (T/n)∑_{k=1}^n L_b^†(t_k-τ_l)ξ_{i,k},
Π_{i,l}^†(2) = ∑_{k=1}^n L_b^†(t_k-τ_l)∫_{(k-1)Δ}^{kΔ}(X_{i,t_k}-X_{i,s})ds,
Π_{i,l}^†(3) = ∑_{k=1}^n ∫_{(k-1)Δ}^{kΔ}[L_b^†(t_k-τ_l)-L_b^†(s-τ_l)]X_{i,s}ds,
Π_{i,l}^†(4) = ∫_0^T L_b^†(s-τ_l)X_{i,s}ds-X_{i,τ_l}.
Let ν_{Δ,p}^∗=[Δ log(p∨Δ^{-1})/b]^{1/2}, ω_i(t_k)=[ω_{i1}(t_k),⋯,ω_{ip}(t_k)]^⊺, and ω_{i,∗}(t_k)=ω_i(t_k)/‖ω_i(t_k)‖. We first consider Π_{i,l}^†(1). Define
ξ_{i,k}^⋆ = ω_i^⊺(t_k)ξ_k^∗ I(|ω_{i,∗}^⊺(t_k)ξ_k^∗| ≤ Δ^{-ι}), ξ_{i,k}^♢ = ω_i^⊺(t_k)ξ_k^∗ I(|ω_{i,∗}^⊺(t_k)ξ_k^∗| > Δ^{-ι}),
where ι is defined in Assumption 4(ii). Note that
∑_{k=1}^n L_b(t_k-τ_l)ω_i^⊺(t_k)ξ_k^∗ = ∑_{k=1}^n L_b(t_k-τ_l)[ξ_{i,k}^⋆-𝖤(ξ_{i,k}^⋆)]+∑_{k=1}^n L_b(t_k-τ_l)[ξ_{i,k}^♢-𝖤(ξ_{i,k}^♢)],
as 𝖤(ξ_{i,k}^⋆)+𝖤(ξ_{i,k}^♢)=0. By the noise moment condition in Assumption 3(i) and the uniform boundedness condition on ‖ω_i(t_k)‖ in Assumption 3(ii), we have
𝖤(|ξ_{i,k}^♢|) ≤ C_ω·𝖤[|ω_{i,∗}^⊺(t_k)ξ_k^∗| I(|ω_{i,∗}^⊺(t_k)ξ_k^∗|>Δ^{-ι})] = O(Δ^{ι M_ξ^♢}) = o(ν_{Δ,p}^∗),
where M_ξ^♢>0 is arbitrarily large. Then, by Assumptions 3(i), 4(ii) and the Bonferroni and Markov inequalities, we have, for any ϵ>0,
𝖯(max_{1≤ i≤ p} max_{0≤ l≤ N}|(T/n)∑_{k=1}^n L_b(t_k-τ_l)[ξ_{i,k}^♢-𝖤(ξ_{i,k}^♢)]|>ϵν_{Δ,p}^∗) ≤ 𝖯(max_{1≤ i≤ p} max_{0≤ l≤ N}|(T/n)∑_{k=1}^n L_b(t_k-τ_l)ξ_{i,k}^♢|>(1/2)ϵν_{Δ,p}^∗) ≤ 𝖯(max_{1≤ i≤ p} max_{1≤ k≤ n}|ξ_{i,k}^♢|>0) ≤ 𝖯(max_{1≤ i≤ p} max_{1≤ k≤ n}|ω_{i,∗}^⊺(t_k)ξ_k^∗|>Δ^{-ι}) ≤ ∑_{i=1}^p∑_{k=1}^n 𝖯(|ω_{i,∗}^⊺(t_k)ξ_k^∗|>Δ^{-ι}) ≤ pn·exp{-sΔ^{-ι}}C_ξ = o(1)
for 0<s<s_0, where C_ξ is defined in Assumption 3(i). Hence, we have
max_{1≤ i≤ p} max_{0≤ l≤ N}|(T/n)∑_{k=1}^n L_b(t_k-τ_l)[ξ_{i,k}^♢-𝖤(ξ_{i,k}^♢)]| = o_P(ν_{Δ,p}^∗).
On the other hand, by Assumptions 3 and 4(i)(ii) as well as the Bernstein inequality for independent sequences <cit.>, we may show that
𝖯(max_{1≤ i≤ p} max_{0≤ l≤ N}|(T/n)∑_{k=1}^n L_b(t_k-τ_l)[ξ_{i,k}^⋆-𝖤(ξ_{i,k}^⋆)]|>Mν_{Δ,p}^∗) ≤ ∑_{i=1}^p∑_{l=1}^N 𝖯(|(T/n)∑_{k=1}^n L_b(t_k-τ_l)[ξ_{i,k}^⋆-𝖤(ξ_{i,k}^⋆)]|>Mν_{Δ,p}^∗) = O(pN exp{-C_⋆(M)log(p∨Δ^{-1})}) = o(1),
where N diverges to infinity at a polynomial rate of n, and C_⋆(M) is positive and can be made sufficiently large by letting M be large enough. Therefore, we have
max_{1≤ i≤ p} max_{0≤ l≤ N}|(T/n)∑_{k=1}^n L_b(t_k-τ_l)[ξ_{i,k}^⋆-𝖤(ξ_{i,k}^⋆)]| = O_P(ν_{Δ,p}^∗).
By (<ref>) and (<ref>), and noting that ∫_0^T L_b(s-τ)ds is strictly larger than a positive constant uniformly over τ, we readily have that
max_{1≤ i≤ p} max_{0≤ l≤ N}|Π_{i,l}^†(1)| = O_P(ν_{Δ,p}^∗).
Write
Π_{i,l}(2) = ∑_{k=1}^n L_b(t_k-τ_l)∫_{(k-1)Δ}^{kΔ}(X_{i,t_k}-X_{i,s})ds
and note that
Π_{i,l}(2) = ∑_{k=1}^n L_b(t_k-τ_l)∫_{(k-1)Δ}^{kΔ}(∫_s^{kΔ}μ_{i,u}du)ds + ∑_{k=1}^n L_b(t_k-τ_l)∫_{(k-1)Δ}^{kΔ}(∫_s^{kΔ}∑_{j=1}^pσ_{ij,u}dW_{j,u})ds =: Π_{i,l}(2,1)+Π_{i,l}(2,2).
By (<ref>) and Assumption 4(i), we have
max_{1≤ i≤ p} max_{0≤ l≤ N}|Π_{i,l}(2,1)| = O_P(Δ) = o_P(ν_{Δ,p}^∗).
By the Bonferroni inequality, we may show that, for any ϵ>0,
𝖯(max_{1≤ i≤ p} sup_{(k-1)Δ≤ s≤ kΔ}|∫_s^{kΔ}∑_{j=1}^pσ_{ij,u}dW_{j,u}|>ϵν_{Δ,p}^∗) ≤ ∑_{i=1}^p𝖯(sup_{(k-1)Δ≤ s≤ kΔ}|∫_s^{kΔ}∑_{j=1}^pσ_{ij,u}dW_{j,u}|>ϵν_{Δ,p}^∗) ≤ ∑_{i=1}^p𝖯(sup_{(k-1)Δ≤ s≤ kΔ}|∫_{(k-1)Δ}^s∑_{j=1}^pσ_{ij,u}dW_{j,u}|>(1/2)ϵν_{Δ,p}^∗).
By the conditional Jensen inequality, we may verify that both {|∫_{(k-1)Δ}^s∑_{j=1}^pσ_{ij,u}dW_{j,u}|}_{s≥(k-1)Δ} and {exp(ψ|∫_{(k-1)Δ}^s∑_{j=1}^pσ_{ij,u}dW_{j,u}|)}_{s≥(k-1)Δ} are sub-martingales, where ψ>0. Using the moment generating function of the folded normal random variable and (<ref>), we have
𝖤[exp(ψ|∫_{(k-1)Δ}^{kΔ}∑_{j=1}^pσ_{ij,u}dW_{j,u}|)] ≤ exp(ψ^2Δ C_Σ/2),
where C_Σ is defined in (<ref>). Combining the above arguments and using Doob's inequality for sub-martingales, we may show that
𝖯(sup_{(k-1)Δ≤ s≤ kΔ}|∫_{(k-1)Δ}^s∑_{j=1}^pσ_{ij,u}dW_{j,u}|>(1/2)ϵν_{Δ,p}^∗) = 𝖯(sup_{(k-1)Δ≤ s≤ kΔ}exp{ψ|∫_{(k-1)Δ}^s∑_{j=1}^pσ_{ij,u}dW_{j,u}|}>exp{(1/2)ψϵν_{Δ,p}^∗}) ≤ exp(-ψϵν_{Δ,p}^∗/2)𝖤[exp(ψ|∫_{(k-1)Δ}^{kΔ}∑_{j=1}^pσ_{ij,u}dW_{j,u}|)] ≤ exp(ψ^2Δ C_Σ/2-ψϵν_{Δ,p}^∗/2).
Then, choosing ψ=ϵν_{Δ,p}^∗/(2Δ C_Σ), by (<ref>) and (<ref>), we have
𝖯(max_{1≤ i≤ p} sup_{(k-1)Δ≤ s≤ kΔ}|∫_s^{kΔ}∑_{j=1}^pσ_{ij,u}dW_{j,u}|>ϵν_{Δ,p}^∗) ≤ p·exp{-(ϵν_{Δ,p}^∗)^2/(8Δ C_Σ)} = O(p·exp{-[ϵ^2/(8C_Σ)]·log(p∨Δ^{-1})/b}) = o(1)
for any ϵ>0, which indicates that
max_{1≤ i≤ p} max_{0≤ l≤ N}|Π_{i,l}(2,2)| = o_P(ν_{Δ,p}^∗).
By (<ref>) and (<ref>), we readily have that
max_{1≤ i≤ p} max_{0≤ l≤ N}|Π_{i,l}(2)| = o_P(ν_{Δ,p}^∗) and max_{1≤ i≤ p} max_{0≤ l≤ N}|Π_{i,l}^†(2)| = o_P(ν_{Δ,p}^∗).
For Π_{i,l}^†(3), we note that
|Π_{i,l}^†(3)| ≤ sup_{0≤ u≤ T}|X_{i,u}|·∑_{k=1}^n∫_{(k-1)Δ}^{kΔ}|L_b^†(t_k-τ_l)-L_b^†(s-τ_l)|ds.
By Assumption 4(i), we have
max_{0≤ l≤ N}∑_{k=1}^n∫_{(k-1)Δ}^{kΔ}|L_b^†(t_k-τ_l)-L_b^†(s-τ_l)|ds = O(Δ b^{-1}).
On the other hand, by (<ref>),
sup_{0≤ u≤ T}|X_{i,u}| ≤ sup_{0≤ u≤ T}∫_0^u|μ_{i,s}|ds+sup_{0≤ u≤ T}|∫_0^u∑_{j=1}^pσ_{ij,u}dW_{j,u}| = sup_{0≤ u≤ T}|∫_0^u∑_{j=1}^pσ_{ij,u}dW_{j,u}|+O_P(1).
Following the proof of (<ref>), we may show that
sup_{0≤ u≤ T}|∫_0^u∑_{j=1}^pσ_{ij,u}dW_{j,u}| = O_P(√(log(p∨Δ^{-1}))),
indicating that sup_{0≤ u≤ T}|X_{i,u}| = O_P(√(log(p∨Δ^{-1}))). By virtue of (<ref>) and (<ref>), we prove that
max_{1≤ i≤ p} max_{0≤ l≤ N}|Π_{i,l}^†(3)| = O_P(Δ b^{-1}√(log(p∨Δ^{-1}))) = o_P(ν_{Δ,p}^∗).
Finally, for Π_i,l^†(4), we write it as
Π_i,l^†(4) = {∫_0^T L_b^†(s-τ_l)∫_0^sμ_i,u du ds - ∫_0^τ_lμ_i,u du} + {∫_0^T L_b^†(s-τ_l)∫_0^s∑_j=1^pσ_ij,u dW_j,u ds - ∫_0^τ_l∑_j=1^pσ_ij,u dW_j,u}
=: Π_i,l^†(4,1)+Π_i,l^†(4,2).
By Assumptions 1(i) and 4(i), we readily have that
max_1≤ i≤ p max_0≤ l≤ N | Π_i,l^†(4,1)| = O_P(b).
Following the proof of (<ref>), we may show that
𝖯(max_1≤ i≤ p max_1≤ l≤ N sup_τ_l≤ s≤τ_l+b |∫_τ_l^s∑_j=1^pσ_ij,u dW_j,u| >M√(blog(p∨Δ^-1))) →0
and
𝖯(max_1≤ i≤ p max_1≤ l≤ N sup_τ_l-b≤ s≤τ_l |∫_s^τ_l∑_j=1^pσ_ij,u dW_j,u| >M√(blog(p∨Δ^-1))) →0
when M>0 is sufficiently large. Consequently, we have
max_1≤ i≤ p max_0≤ l≤ N | Π_i,l^†(4,2)| = O_P(√(blog(p∨Δ^-1))).
Combining (<ref>) and (<ref>),
max_1≤ i≤ p max_0≤ l≤ N | Π_i,l^†(4)| = O_P(√(blog(p∨Δ^-1))).
The proof of (<ref>) in Lemma <ref> is completed with (<ref>), (<ref>), (<ref>) and (<ref>). ▪
Proof of Proposition A.2. By (3.3), we have
Σ_ij,t-Σ_ij,t = ∑_l=1^N K_h^†(τ_l-t)ΔX_i,lΔX_j,l - Σ_ij,t = ∑_l=1^N K_h^†(τ_l-t)Δ X_i,lΔ X_j,l - Σ_ij,t + ∑_k=1^3Ξ_ij,t(k),
where
Ξ_ij,t(1) = ∑_l=1^N K_h^†(τ_l-t)Δ X_i,l(ΔX_j,l-Δ X_j,l),
Ξ_ij,t(2) = ∑_l=1^N K_h^†(τ_l-t)(ΔX_i,l-Δ X_i,l)Δ X_j,l,
Ξ_ij,t(3) = ∑_l=1^N K_h^†(τ_l-t)(ΔX_i,l-Δ X_i,l)(ΔX_j,l-Δ X_j,l).
By Proposition A.1, we have
max_1≤ i,j≤ p sup_0≤ t≤ T |∑_l=1^N K_h^†(τ_l-t)Δ X_i,lΔ X_j,l - Σ_ij,t| = O_P(h^γ+[log(p∨ N)/(Nh)]^1/2).
By Lemma <ref> and Assumption 2(i), we have
max_1≤ i,j≤ p sup_0≤ t≤ T |Ξ_ij,t(3)| = O_P(Nlog(p∨Δ^-1)[b^1/2+(Δ^-1b)^-1/2]^2).
By Proposition A.1, (<ref>) and the Cauchy-Schwarz inequality, we have
max_1≤ i,j≤ p sup_0≤ t≤ T (|Ξ_ij,t(1)|+|Ξ_ij,t(2)|) = O_P(√(Nlog(p∨Δ^-1)) [b^1/2+(Δ^-1b)^-1/2]).
The proof of (A.2) in Proposition A.2 is completed by virtue of (<ref>)–(<ref>).▪
Proof of Proposition A.3. By (3.1) and (3.7), we write
Ω_ij(t) = Δ/2∑_k=1^nK_h_1^∗(t_k-t)Δ X_i,kΔ X_j,k + Δ/2∑_k=1^nK_h_1^∗(t_k-t)Δ X_i,k(ξ_j,k-ξ_j,k-1) + Δ/2∑_k=1^nK_h_1^∗(t_k-t)(ξ_i,k-ξ_i,k-1)Δ X_j,k + Δ/2∑_k=1^nK_h_1^∗(t_k-t)(ξ_i,k-ξ_i,k-1)(ξ_j,k-ξ_j,k-1)
=: Ω_ij,1(t)+Ω_ij,2(t)+Ω_ij,3(t)+Ω_ij,4(t).
By Proposition A.1, we have
max_1≤ i,j≤ p sup_0≤ t≤ T |Ω_ij,1(t)| = O_P(Δ).
To complete the proof of (A.3), it is sufficient to show
max_1≤ i,j≤ p sup_0≤ t≤ T |Ω_ij,4(t)-Ω_ij(t)| = O_P(δ_Δ,p).
In fact, combining (<ref>) and (<ref>), and using the Cauchy-Schwarz inequality, we have
max_1≤ i,j≤ p sup_0≤ t≤ T [|Ω_ij,2(t)|+|Ω_ij,3(t)|] = O_P(Δ^1/2).
By virtue of (<ref>)–(<ref>), we readily have (A.3).
It remains to prove (<ref>). We aim to show that
max_1≤ i,j≤ p sup_0≤ t≤ T |Δ∑_k=1^nK_h_1^∗(t_k-t)ξ_i,kξ_j,k - Ω_ij(t)| = O_P(δ_Δ,p),
max_1≤ i,j≤ p sup_0≤ t≤ T |Δ∑_k=1^nK_h_1^∗(t_k-t)ξ_i,k-1ξ_j,k-1 - Ω_ij(t)| = O_P(δ_Δ,p),
max_1≤ i,j≤ p sup_0≤ t≤ T |Δ∑_k=1^nK_h_1^∗(t_k-t)(ξ_i,kξ_j,k-1+ξ_i,k-1ξ_j,k)| = O_P(δ_Δ,p^∗),
where δ_Δ,p^∗=[Δlog (p∨Δ^-1)/h_1]^1/2. To save space, we only provide the detailed proof of (<ref>), as the proofs of (<ref>) and (<ref>) are similar (with minor modifications).
Note that
Δ∑_k=1^nK_h_1^∗(t_k-t)ξ_i,kξ_j,k - Ω_ij(t)
= {Δ∑_k=1^nK_h_1^∗(t_k-t)[ξ_i,kξ_j,k - Ω_ij(t_k)]} + {Δ∑_k=1^nK_h_1^∗(t_k-t)Ω_ij(t_k) - Ω_ij(t)}
=: Υ_ij,1(t)+Υ_ij,2(t).
Let χ_ij,k=ξ_i,kξ_j,k-Ω_ij(t_k), χ_ij,k^⋆=χ_ij,k I(|χ_ij,k|≤Δ^-ι_⋆) and χ_ij,k^♢=χ_ij,k-χ_ij,k^⋆, where ι_⋆ is defined in Assumption 5(iii). Observe that
∑_k=1^nK_h_1(t_k-t)χ_ij,k = ∑_k=1^nK_h_1(t_k-t)[χ_ij,k^⋆-𝖤(χ_ij,k^⋆)] + ∑_k=1^nK_h_1(t_k-t)[χ_ij,k^♢-𝖤(χ_ij,k^♢)].
By Assumptions 3(ii) and 5(i), we have 𝖤[|χ_ij,k^♢|] =O( Δ^ι_⋆ M_χ) with M_χ>0 being arbitrarily large. Then, by Assumption 5(i)(ii) and the
Markov inequality, we have that, for any ϵ>0,
𝖯( max_1≤ i,j≤ p sup_0≤ t≤ T | Δ∑_k=1^nK_h_1(t_k-t)[ χ_ij,k^♢-𝖤(χ_ij,k^♢)]| >ϵδ_Δ,p^∗)
≤ 𝖯( max_1≤ i,j≤ p sup_0≤ t≤ T | Δ∑_k=1^nK_h_1(t_k-t)χ_ij,k^♢| >1/2ϵδ_Δ,p^∗)
≤ 𝖯( max_1≤ i,j≤ p max_1≤ k≤ n | χ_ij,k^♢| >0) ≤ 𝖯( max_1≤ i,j≤ p max_1≤ k≤ n | χ_ij,k| >Δ^-ι_⋆)
≤ 𝖯( max_1≤ i,j≤ p max_1≤ k≤ n |ξ_i,kξ_j,k| >Δ^-ι_⋆-M_Ω) ≤ 𝖯( max_1≤ i,j≤ p max_1≤ k≤ n( ξ_i,k^2+ξ_j,k^2) >2(Δ^-ι_⋆-M_Ω))
≤ 2𝖯( max_1≤ i≤ p max_1≤ k≤ n ξ_i,k^2>Δ^-ι_⋆-M_Ω) ≤ 2∑_i=1^p∑_k=1^n𝖯( ξ_i,k^2>Δ^-ι_⋆-M_Ω)
≤ 2pn exp{-sC_ω^-1(Δ^-ι_⋆-M_Ω)}C_ξ^⋆ = o(1)
for 0<s<s_0, where M_Ω=max_1≤ i,j≤ p sup_0≤ t≤ T|Ω_ij(t)| ≤ C_ω, C_ω is defined in Assumption 3(ii) and C_ξ^⋆ is defined in Assumption 5(i).
Cover the closed interval [0,T] by disjoint intervals 𝒯_l^⋆, l=1,⋯,V_⋆, with centers t_l^⋆ and length d_⋆=h_1^2δ_Δ,p^∗Δ^ι_⋆. By the Lipschitz continuity of K(·) in Assumption 2(i), we have
max_1≤ i,j≤ p sup_0≤ t≤ T |Δ∑_k=1^nK_h_1(t_k-t)[ χ_ij,k^⋆-𝖤(χ_ij,k^⋆)]|
≤ max_1≤ i,j≤ p max_1≤ l≤ V_⋆ |Δ∑_k=1^nK_h_1(t_k-t_l^⋆)[ χ_ij,k^⋆-𝖤(χ_ij,k^⋆)]| + max_1≤ i,j≤ p max_1≤ l≤ V_⋆ sup_t∈𝒯_l^⋆ |Δ∑_k=1^n[ K_h_1(t_k-t)-K_h_1(t_k-t_l^⋆)][ χ_ij,k^⋆-𝖤(χ_ij,k^⋆)]|
≤ max_1≤ i,j≤ p max_1≤ l≤ V_⋆ |Δ∑_k=1^nK_h_1(t_k-t_l^⋆)[ χ_ij,k^⋆-𝖤(χ_ij,k^⋆)]| + O(Δ^-ι_⋆) max_1≤ l≤ V_⋆ sup_t∈𝒯_l^⋆ Δ∑_k=1^n| K_h_1(t_k-t)-K_h_1(t_k-t_l^⋆)|
≤ max_1≤ i,j≤ p max_1≤ l≤ V_⋆ |Δ∑_k=1^nK_h_1(t_k-t_l^⋆)[ χ_ij,k^⋆-𝖤(χ_ij,k^⋆)]| + O_P( δ_Δ,p^∗).
On the other hand, by the Bernstein inequality, we may show that
𝖯( max_1≤ i,j≤ p max_1≤ l≤ V_⋆ |Δ∑_k=1^nK_h_1(t_k-t_l^⋆)[ χ_ij,k^⋆-𝖤(χ_ij,k^⋆)]| >Mδ_Δ,p^∗)
≤ ∑_i=1^p∑_j=1^p∑_l=1^V_⋆ 𝖯( |Δ∑_k=1^nK_h_1(t_k-t_l^⋆)[ χ_ij,k^⋆-𝖤(χ_ij,k^⋆)]| >Mδ_Δ,p^∗)
= O( p^2V_⋆exp{ -C_♢(M)log(p∨Δ^-1)}) = o(1),
where C_♢(M) is positive and becomes sufficiently large by choosing M to be large enough, and V_⋆ diverges at a polynomial rate of n. Therefore, we have
max_1≤ i,j≤ p max_1≤ l≤ V_⋆ |Δ∑_k=1^nK_h_1(t_k-t_l^⋆)[ χ_ij,k^⋆-𝖤(χ_ij,k^⋆)]| = O_P(δ_Δ,p^∗).
With (<ref>)–(<ref>), we can prove that
max_1≤ i,j≤ psup_0≤ t≤ T|Υ_ij,1(t)|=O_P(δ_Δ,p^∗).
Finally, by the smoothness condition in Assumption 5(ii), we have
max_1≤ i,j≤ psup_0≤ t≤ T|Υ_ij,2(t)|=O(h_1^γ_1).
By virtue of (<ref>), (<ref>) and (<ref>), we complete the proof of (<ref>). ▪
Proof of Proposition A.4. With Assumption 6 replacing Assumption 1, the proofs of the uniform convergence results in Proposition A.4 are the same as the proof of Proposition A.1. Details are omitted here to save space. ▪
§ APPENDIX C: FURTHER DISCUSSION AND EXTENSION
In this appendix, we discuss estimation of the spot precision matrix and address the asynchronicity issue which is common when multiple asset returns are collected.
§.§ Appendix C.1: Estimation of the spot precision matrix
The spot precision matrix of high-frequency data, defined as the inverse of the spot volatility matrix, plays an important role in dynamic optimal portfolio choice. In the low-frequency data setting, estimation of large precision matrices has been extensively studied in the literature, and various estimation techniques such as penalised likelihood <cit.>, the graphical Danzig selector <cit.> and CLIME <cit.> have been introduced. In the high-frequency data setting, <cit.> estimate the precision matrix defined as the inverse of the integrated volatility matrix, derive the relevant asymptotic properties under various scenarios and apply the estimated precision matrix to minimum variance portfolio estimation. We next consider estimating the large spot precision matrix under a uniform sparsity assumption which is different from (2.3). Specifically, assume that model (3.1) holds and that the large spot precision matrix Λ_t:=Σ_t^-1 satisfies {Λ_t: 0≤ t≤ T}∈𝒮_∗(q, ϖ_∗(p),T), where
𝒮_∗(q, ϖ_∗(p),T)={Λ_t=[Λ_ij,t]_p× p, t∈[0,T] | Λ_t≻0, sup_0≤ t≤ T‖Λ_t‖_1≤ C_Λ, sup_0≤ t≤ T‖Λ_t‖_∞,q≤ϖ_∗(p)},
where “Λ≻0" denotes that Λ
is positive definite and C_Λ is a positive constant.
We next apply <cit.>'s constrained ℓ_1 minimisation (CLIME) method to estimate the spot precision matrix Λ_t. The estimate is defined as
Λ_t = argmin_Λ |Λ|_1 subject to ‖Σ_tΛ-𝐈_p‖_max≤ρ_4(t),
where Σ_t=(Σ_ij,t)_p× p with Σ_ij,t defined in (3.3), 𝐈_p is a p× p identity matrix, and ρ_4(t) is a time-varying tuning parameter. The final CLIME estimate of Λ_t is obtained by further symmetrising Λ_t. Suppose that Assumptions 1, 2(i), 3 and 4(i)(ii)
are satisfied and Assumption 4(iii) holds with ρ_2(t) replaced by ρ_4(t). Using Proposition A.2 in Appendix A and following the proof of
Theorem 6 in <cit.>, we may show that
sup_0≤ t≤ T‖Λ_t-Λ_t‖ = O_P(ϖ_∗(p) [ζ_N,p^∗+ν_Δ,p,N]^1-q).
§.§ Appendix C.2: The asynchronicity issue
In the main text of the paper, we consider a special sampling
scheme: the high-frequency data are synchronised with equally spaced time
points between 0 and T. Such a setting simplifies exposition and
facilitates proofs of the uniform consistency properties. However, in
practice, it is often the case that a large number of assets are traded at
times that are not synchronised. This may induce volatility matrix
estimation bias and possibly result in the so-called Epps effect
<cit.>. We next deal with the asynchronicity problem and
discuss modifications of the estimation techniques and theory developed in
the previous sections.
Assume that the i-th asset price is collected at t_1^i,⋯,t_n_i^i,
which are non-equidistant time points over [0,T]. To address this
asynchronicity issue, we may adopt a synchronisation scheme before
implementing the large spot volatility matrix estimation method proposed in
the main text. Commonly-used synchronisation schemes
include the generalised sampling time <cit.>, refresh time
<cit.> and previous tick <cit.>. We next propose an
alternative technique by slightly amending the localised pre-averaging
estimation in (3.2) to jointly tackle the asynchronicity and noise
contamination issues. Replace the kernel filter in (3.2) by
𝐗_τ^∗=(X_1,τ^∗,⋯,
X_p,τ^∗)^^⊺ with X_i,τ^∗=∑_k=1^n_i
L_b(t_k^i-τ)Z_i,t_k^i(t_k^i-t_k-1^i),
and then use 𝐗_τ^∗ in the kernel smoothing (3.3). Some mild restrictions need to be imposed on the
data collection times. For example, let t_j^i-t_j-1^i=c_j^in_i^-1, where
0<c≤min_1≤ i≤ pmin_1≤ j≤ n_ic_j^i≤max_1≤ i≤ pmax_1≤ j≤ n_ic_j^i≤c<∞,
and there exists a κ_0>0 such that N=O(n^κ_0) with n=min_1≤ i≤ pn_i. Following the proof of Lemma B.1, we may show that
max_1≤ i≤ p max_0≤ l≤ N |X_i,τ_l-X_i,τ_l| = O_P(√(log(p∨ n))[b^1/2+(nb)^-1/2]).
Then, following the proofs of Proposition A.2 and Theorem 2, we may prove a similar uniform convergence rate to (3.5) but with ν_Δ,p,N replaced by √(Nlog(p∨n))[b^1/2+(nb)^-1/2].
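As an illustration, the amended kernel filter X_i,τ^∗ is straightforward to vectorise; in the sketch below (our own illustration) a triangular kernel is used purely for concreteness.

import numpy as np

def kernel_filter_async(times, prices, taus, b,
                        L=lambda u: np.maximum(1.0 - np.abs(u), 0.0)):
    # X*(tau) = sum_k L_b(t_k - tau) Z_{t_k} (t_k - t_{k-1}),  L_b(u) = L(u/b)/b,
    # for one asset observed at the irregular times t_1 < ... < t_{n_i}.
    dt = np.diff(times, prepend=0.0)                    # t_k - t_{k-1}, with t_0 = 0
    W = L((times[None, :] - taus[:, None]) / b) / b     # W[l, k] = L_b(t_k - tau_l)
    return W @ (prices * dt)                            # one filtered value per tau_l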
The time-varying noise covariance matrix estimation also needs to be
modified when large high-frequency data are non-synchronised. As in <cit.>, we let 𝒯_i={t_1^i,t_2^i,⋯,t_n_i^i}
be the set of time points at which we observe the contaminated asset prices,
and denote
𝒯_ij=𝒯_i∩𝒯_j={t_1^ij,t_2^ij,⋯,t_n_ij^ij},
where n_ij is the cardinality of 𝒯_ij. Then, we modify the
kernel estimate in (3.7) as follows,
Ω_ij(t)=1/2∑_k=1^n_ijK_h_1(t_k^ij-t)Δ Z_i,t_k^ijΔ Z_j,t_k^ij(t_k^ij-t_k-1^ij),
where t_0^ij=0. Compared with Ω_ij(t) in (3.7), t_k, Z_i,t_k and Δ are now replaced by t_k^ij, Z_i,t_k^ij and t_k^ij-t_k-1^ij, respectively. We subsequently
apply the shrinkage to Ω_ij(t) when i≠ j and obtain
the final estimate of Ω(t). Assuming max_1≤ i,j≤ p max_1≤ k≤ n_ij(t_k^ij-t_k-1^ij) →0 and letting n_∘=min_1≤ i,j≤ p n_ij, we may similarly derive the uniform consistency property as in (3.9) but with Δ replaced by n_∘^-1.
§ APPENDIX D: ADDITIONAL SIMULATION RESULTS
In this appendix, we first consider the asynchronous high-frequency data using the technique discussed in Appendix C.2. We use the same simulation setup as in Section 5.1.1. To generate the asynchronous data, we follow <cit.> by randomly deleting 2 observations from every consecutive block of 3 synchronous 15-second observations. Consequently, the average number of asynchronous observations for each asset is equal to one third of the number of synchronous observations. The number of assets is set as p=200 and 500 and the replication number is R=200. We consider the following two volatility matrix estimates.
* Noise-contaminated spot volatility matrix estimate Σ_t^∗, extending Σ_t defined in Section 3.1 to the asynchronous high-frequency data with the modification technique introduced in Appendix C.2.
* Time-varying noise volatility matrix estimate Ω^∗(t), extending Ω(t) defined in Section 3.2 to the asynchronous high-frequency data with the modification technique introduced in Appendix C.2.
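The thinning scheme that creates the asynchronous samples can be reproduced directly; the sketch below (with an arbitrary seed) keeps one randomly chosen observation in every consecutive block of three 15-second observations and deletes the other two.

import numpy as np

rng = np.random.default_rng(0)

def make_asynchronous(times, prices):
    # Keep exactly one observation per consecutive block of 3 (any leftover
    # incomplete block at the end is dropped for simplicity).
    n = len(times) - len(times) % 3
    keep = np.arange(0, n, 3) + rng.integers(3, size=n // 3)
    return times[keep], prices[keep]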
As in Section 5.1, we compute the Mean Frobenius Loss (MFL) and Mean Spectral Loss (MSL) over 200 repetitions for the estimated volatility matrices (under the sparsity restriction). Tables D.1 and D.2 report the simulation results when p=200 and p=500, respectively. As shown in Section 5.1.3, the shrinkage volatility matrix estimation significantly outperforms the naive estimation. Comparing with Tables 1 and 2 in the main document, we note that the finite-sample convergence is slowed down when the high-frequency data are not synchronised.
Table D.1: Simulation results of the volatility matrix estimation for asynchronous data when p=200
                                   Banding
                 MFL                                      MSL
          Naive    Hard     Soft     AL       SCAD    Naive   Hard    Soft    AL      SCAD
Σ_t^∗     21.180   13.234   13.723   13.392   13.768   6.174   2.375   2.458   2.385   2.474
Ω^∗(t)    38.072    4.640    4.647    4.640    4.646   6.624   0.663   0.666   0.663   0.665

                                   Block-diagonal
Σ_t^∗     21.143   13.141   13.648   13.310   13.693   6.275   2.805   2.821   2.804   2.827
Ω^∗(t)    38.066    4.520    4.528    4.520    4.526   6.634   0.736   0.738   0.736   0.737

                                   Exponentially decaying
Σ_t^∗     21.454   13.772   14.217   13.914   14.258   6.313   2.961   2.968   2.958   2.972
Ω^∗(t)    38.098    4.716    4.723    4.717    4.722   6.672   0.762   0.764   0.762   0.764
The selected bandwidths are h^∗=90 and b^∗=4 for Σ_t^∗ and h_1^∗=250 for Ω^∗(t), where h^∗=h/Δ, b^∗=b/Δ, and h_1^∗=h_1/Δ.
Table D.2: Simulation results of the volatility matrix estimation for asynchronous data when p=500
                                   Banding
                 MFL                                      MSL
          Naive    Hard     Soft     AL       SCAD    Naive    Hard    Soft    AL      SCAD
Σ_t^∗     32.710   20.656   20.445   20.600   20.445    6.212   2.440   2.427   2.430   2.427
Ω^∗(t)    93.263    7.348    7.348    7.348    7.348   10.724   0.681   0.681   0.681   0.681

                                   Block-diagonal
Σ_t^∗     32.928   21.080   20.873   21.026   20.873    6.330   2.962   2.951   2.948   2.950
Ω^∗(t)    93.281    7.331    7.331    7.331    7.331   10.759   0.773   0.773   0.773   0.773

                                   Exponentially decaying
Σ_t^∗     33.153   21.524   21.371   21.459   21.317    6.341   3.015   3.003   3.001   3.003
Ω^∗(t)    93.287    7.469    7.469    7.469    7.469   10.783   0.781   0.781   0.781   0.781

The selected bandwidths are h^∗=240 and b^∗=6 for Σ_t^∗ and h_1^∗=260 for Ω^∗(t), where h^∗=h/Δ, b^∗=b/Δ, and h_1^∗=h_1/Δ.
We next consider estimating the integrated volatility matrix (with normalisation) of the p-variate Brownian semi-martingale process 𝐗_t=(X_1,t,X_2,t,…,X_p,t)^⊺ over the time interval 𝒯 using high-frequency observations under the sparsity assumption. Define
Σ_𝒯 = (Σ_𝒯,ij)_p× p = 1/|𝒯|∫_𝒯Σ_t dt = 1/|𝒯|∫_𝒯(Σ_t,ij)_p× p dt,
where |𝒯| denotes the length of 𝒯. Let 𝒯=[0,T]. We use the following two methods to estimate Σ_𝒯 and compare their performance. The first one is the sample analogue of the quadratic variation (the realised volatility matrix) with shrinkage <cit.>:
Σ_𝒯 = (Σ_𝒯,ij^s)_p× p with Σ_𝒯,ij^s = s_ρ_∗(Σ_𝒯,ij) I(i≠ j) + Σ_𝒯,ii I(i=j),
where ρ_∗ is a user-specified tuning parameter and
Σ_𝒯,ij = 1/T∑_k=1^nΔ X_i,kΔ X_j,k, 1≤ i,j≤ p,
with n=T/Δ. Note that the shrinkage is applied to the off-diagonal entries of the estimated integrated matrix, which is obtained by summing over the outer products of the p-dimensional vector of discrete increments Δ𝐗 observed over 𝒯=[0,T]. The second method is to utilise the proposed kernel-weighted spot volatility matrix estimate with shrinkage, i.e.,
Σ_𝒯^† = (Σ_𝒯,ij^†)_p× p with Σ_𝒯,ij^† = 1/n∑_k=1^nΣ_ij,kΔ^†,
where Σ_ij,kΔ^† = s_ρ(kΔ)(Σ_ij,kΔ) I(i≠ j) + Σ_ii,kΔ I(i=j), which is the spot volatility estimate defined in (2.5) of the main text.
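The contrast between the two estimators is transparent in code. The sketch below (our own illustration, with soft thresholding as a representative shrinkage rule and helper names of our choosing) implements both for one replication.

import numpy as np

def soft(x, rho):
    # soft-thresholding shrinkage: s_rho(x) = sign(x) * max(|x| - rho, 0)
    return np.sign(x) * np.maximum(np.abs(x) - rho, 0.0)

def integrated_realised(dX, T, rho):
    # shrunk realised volatility matrix; dX holds the increments, shape (n, p)
    S = dX.T @ dX / T
    return np.where(np.eye(S.shape[0], dtype=bool), S, soft(S, rho))

def integrated_from_spot(spot_shrunk):
    # average of the shrunk spot volatility estimates on the grid, shape (n, p, p)
    return spot_shrunk.mean(axis=0)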
We use the same simulation setting as in Section 5.1 of the main text. For simplicity, we only consider the noise-free scenario and p=500. We compute the estimates of the integrated covariance matrices Σ_𝒯_j over 20 equal-length time intervals 𝒯_j=[(j-1)T_†, jT_†], j=1,2,⋯,20, where T_†=T/20 and T=1/252. In fact, these intervals are separated by the equidistant time points t_j defined in Section 5.1.2 for assessing the spot volatility matrix estimation. To measure the performance, we define
MFL(Σ_𝒯) = 1/200∑_m=1^200( 1/20∑_j=1^20‖Σ_𝒯_j^(m)-Σ_𝒯_j^(m)‖_F ),
MSL(Σ_𝒯) = 1/200∑_m=1^200( 1/20∑_j=1^20‖Σ_𝒯_j^(m)-Σ_𝒯_j^(m)‖ ),
where Σ_𝒯_j^(m) and Σ_𝒯_j^(m) denote the estimated and true integrated volatility matrices in the m-th replication. We can similarly define MFL(Σ_𝒯^†) and MSL(Σ_𝒯^†).
As in Section 5.1.3, we consider the four shrinkage methods together with the naive method which does not impose shrinkage. The simulation results are reported in Table D.3. As shown in the previous simulation results, the application of shrinkage substantially improves the estimation accuracy by reducing the MFL and MSL significantly. We note that the integrated volatility matrix estimation defined in (<ref>) based on kernel-weighted spot volatility outperforms the standard estimation defined in (<ref>) uniformly across the four shrinkage methods and the naive method. This may be partly due to the fact that the standard
integrated volatility matrix estimation (<ref>) uses the outer product of only one observation of the p-variate vector Δ𝐗 as an estimate of the integrand in (<ref>), whereas the estimation (<ref>) based on the kernel-weighted spot volatility approximates the integrand by utilising a local sample of size nh. Meanwhile, the application of shrinkage to the estimated spot volatility effectively removes small off-diagonal elements in the integrand before calculating the integral.
Table D.3: Estimation results for the noise-free integrated volatility matrices when p=500
                                   Banding
                 Frobenius Norm (MFL)                         Spectral Norm (MSL)
            Naive     Hard      Soft      AL        SCAD      Naive     Hard     Soft     AL       SCAD
Σ_𝒯         49.8422   18.2364   11.6644   10.1367   12.0719   11.5148   1.8991   1.4442   1.3495   1.5055
Σ_𝒯^†       21.7907    3.7201    5.1311    4.8468    3.8476    3.8747   0.5858   0.7116   0.6946   0.5593

                                   Block-diagonal
Σ_𝒯         49.8412   18.6536   12.5082   11.2693   12.9696   11.6577   2.4827   1.8710   1.8116   2.0165
Σ_𝒯^†       21.7919    5.5847    6.3739    5.8120    5.4097    3.9641   0.8504   1.1295   0.8869   0.8801

                                   Exponentially decaying
Σ_𝒯         49.8468   19.2167   12.8439   11.7122   13.4447   11.7491   2.5405   1.9241   1.8653   2.0720
Σ_𝒯^†       21.7936    5.9783    6.6726    6.0284    5.7007    4.0020   0.8927   1.1733   0.9228   0.9193
Aït-Sahalia, Y., J. Fan, & D. Xiu (2010) High-frequency covariance estimates with noisy and asynchronous financial data. Journal of the American Statistical Association 105, 1504–1517.

Barndorff-Nielsen, O. E., P. R. Hansen, A. Lunde, & N. Shephard (2011) Multivariate realised kernels: Consistent positive semi-definite estimators of the covariation of equity prices with noise and non-synchronous trading. Journal of Econometrics 162, 149–169.

Cai, T. T., J. Hu, Y. Li, & X. Zheng (2020) High-dimensional minimum variance portfolio estimation based on high-frequency data. Journal of Econometrics 214, 482–494.

Cai, T. T., W. Liu, & X. Luo (2011) A constrained ℓ_1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association 106, 594–607.

Chang, J., Q. Hu, C. Liu, & C. Tang (2021) Optimal covariance matrix estimation for high-dimensional noise in high-frequency data. Working paper available at <https://arxiv.org/abs/1812.08217>.

Dai, C., K. Lu, & D. Xiu (2019) Knowing factors or factor loadings, or neither? Evaluating estimators for large covariance matrices with noisy and asynchronous data. Journal of Econometrics 208, 43–79.

Epps, T. W. (1979) Comovements in stock prices in the very short run. Journal of the American Statistical Association 74, 291–298.

Fan, J., Y. Li, & K. Yu (2012) Vast volatility matrix estimation using high-frequency data for portfolio selection. Journal of the American Statistical Association 107, 412–428.

Lam, C. & J. Fan (2009) Sparsity and rates of convergence in large covariance matrix estimation. Annals of Statistics 37, 4254–4278.

Wainwright, M. J. (2019) High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge Series in Statistical and Probabilistic Mathematics.

Wang, Y. & J. Zou (2010) Vast volatility matrix estimation for high-frequency financial data. Annals of Statistics 38, 943–978.

Yuan, M. (2010) High dimensional inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research 11, 2261–2286.

Zhang, L. (2011) Estimating covariation: Epps effect, microstructure noise. Journal of Econometrics 160, 33–47.
|
http://arxiv.org/abs/2307.02473v1
|
20230705174648
|
Fixed elements of pircon automorphisms
|
[
"Mikael Hansson",
"Vincent Umutabazi"
] |
math.CO
|
[
"math.CO"
] |
We prove that the subposet induced by the fixed elements of any automorphism of a pircon is also a pircon.
By <cit.>, the order complex of any open interval in a pircon is a PL ball or a PL sphere.
We apply our main results to symmetric groups of the form S_2n.
A consequence is that the fixed point free signed involutions form a pircon under the dual of the Bruhat order on the hyperoctahedral group.
Finally, we prove that this poset is, in fact, EL-shellable, which is a type B analogue of <cit.>.
§ INTRODUCTION
A pircon is a poset in which every non-trivial principal order ideal is finite and admits a special partial matching (SPM).
The notion of SPMs was introduced in <cit.> as a combinatorial tool for studying the Kazhdan-Lusztig-Vogan polynomials for fixed point free involutions.
Later in <cit.>, pircons were introduced as a generalisation of zircons, which were originally invented by Marietti in <cit.>.
Pircons have since been studied in different settings of Kazhdan-Lusztig theory like <cit.>.
The Bruhat order on any Coxeter group is a zircon.
More generally, if ϕ is an involutive automorphism of a Coxeter system (W,S),
then the induced Bruhat order on the set ℐ(ϕ)={w ∈ W | ϕ(w)=w^-1} of twisted involutions is a zircon <cit.>.
Examples of pircons include the Bruhat order on parabolic quotients W^J, J ⊆ S, and on the set ι(ϕ)={ϕ(w)w^-1 | w ∈ W} of twisted identities, whenever W is finite and not of type A_2n <cit.>.
The order complex of any open interval in a zircon is a PL sphere <cit.>.
In a pircon, every open interval is a PL ball or a PL sphere <cit.>.
In <cit.>, it was proved that for a zircon P with any automorphism φ, the subposet P^φ induced by the fixed elements of φ is itself a zircon.
In this paper, the main results are as follows.
It is first proved that if a finite poset P has a maximum and an SPM, then P^φ, where φ is any automorphism of P, also admits an SPM.
Then, it is proved that if P is a pircon, then so is P^φ.
As an application, where P is taken to be the dual of the Bruhat order on the fixed point free involutions in the symmetric group, we deduce that the fixed point free signed involutions form a pircon.
Therefore, by <cit.>, the order complex of (the proper part of) this poset is a PL ball, which is a type B analogue of a result of Can, Cherniavsky, and Twelbeck <cit.>.
They actually proved that the fixed point free involutions are EL-shellable.
We extend this result by proving that the fixed point free signed involutions are EL-shellable, too.
The rest of this paper is organised as follows.
In Section <ref>, we recall some definitions and properties of SPMs, pircons, and shellability.
Section <ref> contains the main results, where we generalise the main results in <cit.>.
In Section <ref>, we apply our main results to fixed point free involutions.
Finally, in Section <ref>, we establish EL-shellability of the fixed point free signed involutions.
§ PRELIMINARIES
In this section, we review some preliminaries needed in the sequel.
§.§ Posets and pircons
Let P and Q be two posets (partially ordered sets).
Recall that a map φ: P → Q is order-preserving if for all x,y ∈ P, x ≤ y in P implies φ(x) ≤ φ(y) in Q.
An isomorphism is a bijective order-preserving map whose inverse is also order-preserving.
If P=Q, φ is called an automorphism of P.
In this case, let P^φ be the subposet of P induced by the fixed elements of φ.
Suppose that x,y ∈ P with x<y.
We say that x is covered by y, and write x ⋖ y, if there is no z ∈ P such that x<z<y.
An order ideal I ⊆ P is an induced subposet of P such that, if x ≤ y and y ∈ I, then x ∈ I.
An order ideal with a maximum (i.e., an element 1̂ such that x ≤ 1̂ for all x ∈ P) is called principal.
In particular, let P_≤ p={x ∈ P | x ≤ p}.
A matching on P is an involution M: P → P such that x ⋖ M(x) or M(x) ⋖ x for all x ∈ P.
As invented by Brenti <cit.>, a matching M on P is special if for all x,y ∈ P with x ⋖ y, either M(x)=y or M(x)<M(y).
When P is Eulerian, special matchings are similar to compression labellings introduced by du Cloux <cit.>.
The following definition is taken from <cit.>.
Zircons were originally defined by Marietti <cit.> in a different way, but the two definitions are equivalent as proved in <cit.>.
[<cit.>]
A poset P is a zircon if, for every non-minimal element p ∈ P, the principal order ideal P_≤ p is finite and admits a special matching.
The next definition generalises special matchings.
Indeed, we note that a special matching is an SPM without fixed elements.
[<cit.>]
Let P be a finite poset with maximum 1̂ and covering relation ⋖.
A special partial matching, or SPM, on P is a function M: P → P such that
* M^2=𝕀,
* M(1̂) ⋖ 1̂,
* for all x ∈ P, we have M(x) ⋖ x, x ⋖ M(x), or M(x)=x, and
* if x ⋖ y and M(x) ≠ y, then M(x)<M(y).
A brute-force verification of these conditions on a small poset is sketched below.
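The following sketch is our own illustration, not part of the original development: it takes the covering pairs and a candidate matching M (as a dictionary), recovers the strict order as the transitive closure of the covers, and tests each axiom in turn.

from itertools import product

def is_spm(elements, covers, M, top):
    # covers: set of pairs (x, y) with x covered by y; top: the maximum of P.
    less = set(covers)
    changed = True
    while changed:                                   # transitive closure of <
        changed = False
        for (x, y), (u, v) in product(list(less), repeat=2):
            if y == u and (x, v) not in less:
                less.add((x, v)); changed = True
    if any(M[M[x]] != x for x in elements):          # M^2 = identity
        return False
    if (M[top], top) not in covers:                  # M(1) is covered by 1
        return False
    for x in elements:                               # M(x) <| x, x <| M(x), or M(x) = x
        if M[x] != x and (x, M[x]) not in covers and (M[x], x) not in covers:
            return False
    for (x, y) in covers:                            # x <| y and M(x) != y  =>  M(x) < M(y)
        if M[x] != y and (M[x], M[y]) not in less:
            return False
    return True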
The following lemma is the “lifting property” for SPMs.
It will specifically serve as the main tool in the proof of <ref>.
[<cit.>, Lifting property]
Suppose that P is a finite poset with 1̂ and an SPM M. If x,y ∈ P with x<y and M(y) ≤ y, then
(i) M(x) ≤ y,
(ii) M(x) ≤ x ⟹ M(x)<M(y), and
(iii) M(x) ≥ x ⟹ x ≤ M(y).
We now have the following definition.
[<cit.>]
A poset P is a pircon if, for every non-minimal element p ∈ P, the principal order ideal P_≤ p is finite and admits an SPM.
From <ref> and <ref>, it is clear that every zircon is a pircon.
Pircons are the main objects of study in this paper.
§.§ Shellability and signed permutations
A finite poset is bounded if it has a minimum and a maximum, and graded if every maximal chain has the same length.
A chain x_0<x_1<⋯<x_k is saturated if x_i-1 ⋖ x_i for all i ∈ [k]={1,2,…,k}, and an x-y-chain is a saturated chain from x to y.
Let P be a finite, bounded, and graded poset.
An edge-labelling of P is a function λ: {(x,y) ∈ P^2 | x ⋖ y} → Q, where Q is some totally ordered set.
If λ is an edge-labelling of P and C is an x_0-x_k-chain, let λ(C)=(λ(x_0,x_1),λ(x_1,x_2),…,λ(x_k-1,x_k)).
The chain is called increasing if the sequence λ(C) is weakly increasing, and decreasing if λ(C) is strongly decreasing.
An edge-labelling λ of P is an EL-labelling if, for all x<y in P, there is exactly one increasing x-y-chain and this chain is lex-minimal (i.e., minimal in the lexicographic order) among the x-y-chains in P.
If P has an EL-labelling, P is called EL-shellable, because then its order complex Δ(P) is shellable <cit.>.
Let S_2n denote the group of permutations of [± n]={± 1,± 2,,± n}.
With the adjacent transpositions (i,i+1) as generators, this is a Coxeter group of type A_2n-1.
Consider the subgroup S_n^B of permutations σ of [± n] such that σ(-i)=-σ(i) for all i ∈ [± n].
This is a standard way to represent the hyperoctahedral group B_n as the group of “signed permutations".
(The Coxeter generators are (1,-1) and (i,i+1)(-i,-i-1) for i ∈ [n-1].)
We shall use ℐ_n for the involutions in S_n, ℐ_n^B for the involutions in S_n^B (the “signed involutions"), and ℱ_n^B for the fixed point free signed involutions.
The Bruhat order on S_n^B is an induced subposet of the Bruhat order on S_2n.
It may be defined as follows, where σ[i,j]=|{k ≤ i | σ(k) ≥ j}| (see, e.g., <cit.>).
Let σ,τ ∈ S_n^B. Then σ ≤ τ if and only if σ[i,j] ≤ τ[i,j] for all (i,j) ∈ [± n]^2.
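The lemma gives a direct, if inefficient, test for Bruhat comparability of signed permutations; in the sketch below (our own illustration) a signed permutation is stored as a dictionary on [± n].

def bruhat_leq_B(sigma, tau, n):
    # sigma <= tau iff sigma[i,j] <= tau[i,j] for all (i,j) in [±n]^2,
    # where w[i,j] = #{k <= i : w(k) >= j}.
    idx = [i for i in range(-n, n + 1) if i != 0]
    def count(w, i, j):
        return sum(1 for k in idx if k <= i and w[k] >= j)
    return all(count(sigma, i, j) <= count(tau, i, j) for i in idx for j in idx)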
§ PIRCONS AND AUTOMORPHISMS
The following theorem and its corollary generalise the main results of <cit.> from special matchings to special partial matchings.
The proof ideas are similar, but the possibility of M fixing elements introduces additional complications.
Let P be a finite poset with a maximum and a special partial matching M.
Let also φ be any automorphism of P.
Then the subposet P^φ induced by the fixed elements of φ has a special partial matching.
Since P is finite, φ has finite order K.
Observe that each automorphism φ^i, i ∈ [K], transforms the SPM M into an SPM M_i, i.e., φ^i ∘ M=M_i ∘ φ^i.
In particular, M_K=M.
For a given p ∈ P, define
C(p)={a ∈ P | a=M_i_t∘ M_i_t-1∘⋯∘ M_i_1(p) for some i_1,…,i_t ∈ [K]}.
Thus, C(p) consists of the elements that are connected to p by the SPMs M_i.
By abuse of notation, we also let C(p) denote the subposet of P induced by the set C(p).
For a ∈ C(p) given by a=M_i_t∘ M_i_t-1∘⋯∘ M_i_1(p),
define b ∈ C(p) as b=M_i_t'∘ M_i_t-1'∘⋯∘ M_i_1'(p), where the M_i_j' are recursively given by
M_i_j' = M_i_j if M_i_j∘ M_i_j-1'∘⋯∘ M_i_1'(p) < M_i_j-1'∘⋯∘ M_i_1'(p), and
𝕀 otherwise.
For a_j,b_j ∈ C(p), where a_j=M_i_j∘⋯∘ M_i_1(p) and b_j=M_i_j'∘⋯∘ M_i_1'(p), 1 ≤ j ≤ t, we have b_j ≤ a_j.
To see this, we argue by induction, assuming that b_j-1 ≤ a_j-1.
We have the following three cases.
(i) If a_j ≥ a_j-1, there is nothing to prove.
(ii) If a_j<a_j-1 and M_i_j'=M_i_j, we have b_j= M_i_j(b_j-1)<b_j-1 and M_i_j(a_j-1)=a_j<a_j-1.
Hence, we may apply the lifting property to get b_j ≤ a_j (with equality if and only if a_j-1=b_j-1).
(iii) Finally, if a_j<a_j-1 and M_i_j'=𝕀, we have M_i_j(b_j-1) ≥ b_j-1 and M_i_j(a_j-1)=a_j<a_j-1.
We can then apply the lifting property to get b_j=b_j-1≤ a_j.
Thus b_j ≤ a_j, proving the claim. In particular, b ≤ a.
By construction, b ≤ p, so if p and a are both minimal in C(p), we have a=b=p.
Hence C(p) contains a unique minimal element.
Analogously, if we reverse the inequality in (<ref>), C(p) also contains a unique maximal element.
[Observe that P need not have a minimum.
However, Lemma <ref> will still hold, with the same proof as in <cit.>, even if P does not have a maximum, as long as M satisfies the other three conditions in <ref>.]
If p is not minimal in C(p), then M_i(p) ⋖ p for some i.
To see this, note that if a<p in the above construction, then b ≠ p, and so we may choose i=i_j for the minimal j which satisfies M_i_j'=M_i_j.
Reversing the inequality in (<ref>), a similar reasoning shows that if p is not maximal in C(p), then there is at least one i such that p ⋖ M_i(p).
If p ∈ P^φ and p ≤ M_i(p) for at least one i, then this will hold for all i and hence p=min C(p).
Similarly, if p ∈ P^φ and M_i(p) ≤ p, then p=max C(p).
Therefore, for any p ∈ P^φ, we have p=min C(p) or p=max C(p), and both hold if C(p)={p}.
Since φ permutes the SPMs M_i, we have φ(C(p))=C(φ(p)) for all p ∈ P. Thus the claim below follows.
If p ∈ P^φ, then both min C(p) and max C(p) belong to P^φ.
Moreover we have:
For any x,y ∈ P, if min C(x) ≤max C(y), then min C(x) ≤min C(y) and max C(x) ≤max C(y).
Let p ∈ P be arbitrary. If s=max C(s) and C(s) ≠ C(p), it follows from the lifting property that s>p ⟹ s>M_i(p) for all i.
This shows that s>p ⟹ s>max C(p) (by repeated application of the lifting property).
A similar argument shows that if s=min C(s) and C(s) ≠ C(p), then s<p ⟹ s<min C(p).
Hence, by using the above argument, if we take s=max C(y) and p=min C(x) where s>p, we get max C(y) ≥ max C(x).
Similarly, if s=min C(x) and p=max C(y) where s<p, then min C(x) ≤ min C(y).
Let ⋖_φ denote the covering relation in P^φ.
If a=min C(p) and b=max C(p) belong to P^φ, then a ⋖_φ b or a=b.
Assume that x ∈ P^φ. If x=min C(x) and x<b, then by Claim <ref>, x ≤ a.
Similarly, if x=max C(x) and a<x, then by Claim <ref>, b ≤ x.
In either case, x does not satisfy a<x<b.
Define the function M^φ: P^φ → P^φ by
M^φ(p) = max C(p) if p=min C(p), and
M^φ(p) = min C(p) if p=max C(p).
By Claim <ref>, M^φ is well defined.
We show that M^φ is an SPM on P^φ by verifying the conditions in <ref>.
* For every x ∈ P^φ, (M^φ)^2(x)=x. That is, (M^φ)^2=𝕀.
* Since φ is an automorphism of P, φ(1̂)=1̂, so 1̂ ∈ P^φ.
Because M_i is an SPM, M_i(1̂) ≠ 1̂, so 1̂ ≠ min C(1̂).
By Claim <ref>, M^φ(1̂)=min C(1̂) ⋖_φ 1̂.
* For any x ∈ P^φ, Claim <ref> shows that M^φ(x) ⋖_φ x, x ⋖_φ M^φ(x), or M^φ(x)=x.
* Let x ⋖_φ y and M^φ(x) ≠ y. We must show that M^φ(x)<M^φ(y).
By Claim <ref>, min C(x) ≤ min C(y) and max C(x) ≤ max C(y).
Moreover, both inequalities are strict since M^φ(x) ≠ y.
(i) If x ≠ min C(x), then M^φ(x)=min C(x)<min C(y) ≤ M^φ(y).
(ii) If y ≠ max C(y), we have M^φ(x) ≤ max C(x)<max C(y)=M^φ(y).
(iii) If x=min C(x) and y=max C(y), we have x=min C(x)<min C(y) ≤ y and x ≤ max C(x)<max C(y)=y.
Because x ⋖_φ y, we conclude that min C(x)=max C(x) and min C(y)=max C(y).
Hence M^φ(x)=x ⋖_φ y=M^φ(y).
From <ref> we have the following corollary.
Let P be a pircon. If φ is any automorphism of P, then the subposet P^φ is a pircon.
For any non-minimal element p ∈ P, the order ideal P_≤ p is finite and has an SPM.
Let p ∈ P^φ be non-minimal.
The elements in P^φ_≤ p are the fixed elements of the restriction of φ to P_≤ p.
Therefore, by <ref>, P^φ_≤ p has an SPM.
§ AN APPLICATION OF <REF>
As an application of <ref>, we shall derive an example of a pircon from the fixed point free involutions in the symmetric group S_2n.
Below we shall freely use Coxeter group theoretic terminology.
For definitions and preliminaries we recommend the reader to consult <cit.>.
Let w_0 be the reverse permutation in S_2n, and C(w_0) the conjugacy class of w_0, i.e., the set of fixed point free involutions.
(Recall that S_2n denotes the group of permutations of [± n].)
By using <cit.>, one can show that C(w_0), with the dual of the Bruhat order inherited from S_2n, is a pircon.
Consider the Coxeter group, and Bruhat order, automorphism ψ: S_2n → S_2n given by σ ↦ w_0σ w_0.
Observe that the fixed element subgroup S_2n^ψ = S_n^B.
Because ψ preserves C(w_0) (i.e., ψ(C(w_0))=C(w_0)), it follows from <ref> that C(w_0)^ψ, i.e., the set of fixed point free signed involutions, is a pircon (where we have identified ψ with its restriction to C(w_0)).
Since the partial order induced by the Bruhat order on S_2n coincides with the Bruhat order on S_n^B, we conclude that ℱ_n^B, ordered by the dual of the Bruhat order on S_n^B, is a pircon.
By <cit.>, we therefore have the following result.
It is a type B analogue of <cit.>, which asserts that (the proper part of) C(w_0), with the dual of the Bruhat order inherited from S_2n, is a PL ball.
The order complex of (the proper part of) the fixed point free signed involutions is a PL ball.
§ EL-SHELLABILITY OF THE FIXED POINT FREE SIGNED INVOLUTIONS
In this section, we establish a stronger property of the fixed point free signed involutions, namely, EL-shellability.
This is a type B analogue of <cit.>, which claims EL-shellability of the fixed point free involutions.
[It was as a consequence of this result, together with results from <cit.>,
that Can, Cherniavsky, and Twelbeck proved the type A version of <ref>.]
We follow the strategy in the proof of <cit.>, where this fact was reproved.
Both proofs rely heavily on Incitti's classification of the covering relation of the Bruhat order on the involutions <cit.>.
Similarly, our proof here makes heavy use of his classification for the signed involutions <cit.>.
We shall describe the parts of these classifications that are needed in order to understand the proof given here.
Let us first describe the possible coverings for the involutions, which all appear for the signed involutions.
For σ,τ ∈ ℐ_n, we write τ=ct_(i,j)(σ), where ct stands for covering transformation, if they agree everywhere but for the elements whose positions are marked with dots in Figure <ref>, which is taken from <cit.>.
Incitti characterised the covering relation for the involutions as follows.
[]
Let σ,τ ∈ ℐ_n. Then σ ⋖ τ in ℐ_n if and only if τ=ct_(i,j)(σ) for some (necessarily unique) pair (i,j).
For σ<τ in ℐ_n, define di=di(σ,τ)=min{i | σ(i) ≠ τ(i)} and ci=ci(σ,τ)=min{j ≥ di+1 | σ(j) ∈ [σ(di)+1,τ(di)]},
where di and ci are short for difference index and covering index, respectively.
It is clear from Figure <ref> that if τ=ct_(i,j)(σ) covers σ, then i=di(σ,τ) and j=ci(σ,τ).
If σ<τ, we sometimes write mct_τ(σ) instead of ct_(di,ci)(σ).
Here m stands for minimal: (di,ci) is lex-minimal among all pairs (i,j) such that ct_(i,j)(σ) ≤ τ.
Let us now consider the signed involutions.
Here we have the following characterisation of the covering relation.
It follows from <cit.>.
The necessary parts of the transformation will be explained below.
[<cit.>]
Let σ,τ ∈ ℐ_n^B. Then σ ⋖ τ in ℐ_n^B if and only if τ=ct_(i,j)(σ) for some (necessarily unique) pair (i,j).
If τ=ct_(i,j)(σ) for some pair (i,j), let λ(σ,τ)=(i,j).
By Lemma <ref>, this defines an edge-labelling λ of ℐ_n^B, with {(i,j) ∈ [± n]^2 | i<j} totally ordered by the lexicographic order.
We call (i,j) the label on a cover σ ⋖ τ if λ(σ,τ)=(i,j), and a label on a chain if it is the label on some cover of the chain.
In order to establish EL-shellability of the fixed point free signed involutions, we need the following results of Incitti.
[]
Let σ<τ in ℐ_n^B. Then there is exactly one increasing σ-τ-chain, and it is lex-minimal.
[]
Let σ<τ in ℐ_n^B. Then there is exactly one decreasing σ-τ-chain.
Since ct_(i,j)(σ)(i)>ct_(i,j)(σ)(j), there is also exactly one σ-τ-chain with weakly decreasing labels.
We now delve somewhat deeper into the covering relation for the signed involutions.
For σ<τ in ℐ_n^B, Incitti defines a new signed involution π=mct_τ(σ), where m stands for minimal (this is explained below), such that π ≤ τ.
This is done in <cit.>, corresponding to eight different cases, two of which are called easy, four normal, and two hard.
For example, N2.2 in Table 5 is normal of the second kind, and the first move performed to reach π from σ, in the sense of Figure <ref>, is of type 2.
Black dots are used for σ, white dots for π, and white squares for τ.
The light grey regions correspond to the first move that we perform.
Consider, e.g., N2.2, where the black dots move according to type 2 to form an intermediate involution (note that this is not a signed involution).
The medium and dark grey regions correspond to the second and third moves.
The top right cells of Tables 2–9 explain further how to interpret the permutation diagrams.
The number cv(σ,τ), where cv is short for covering value, is defined as π(di).
The exact definition of mct_τ(σ) is case dependent and not important here.
However, it turns out that if τ=ct_(i,j)(σ), then i=di(σ,τ) and j=σ^-1(cv(σ,τ)).
Moreover, if σ<τ, then (di,σ^-1(cv)) is lex-minimal among all pairs (i,j) such that ct_(i,j)(σ) ≤ τ.
The reader of this paper can safely ignore the quintuple associated with each diagram, as well as the information given in the top left and top middle cells.
We are now ready to prove EL-shellability of the fixed point free signed involutions.
As mentioned earlier, we follow the strategy in the proof of <cit.>.
That is, we prove that the decreasing σ-τ-chain in ℐ_n^B is contained in ℱ_n^B and that this chain is lex-maximal.
The major difference between the two proofs is that we now have to consider many more possible coverings.
Moreover, some arguments have been simplified.
The poset ℱ_n^B is EL-shellable.
Let σ<τ in ℱ_n^B, let C_D=(σ ⋖ τ_k ⋖ ⋯ ⋖ τ_1 ⋖ τ) be the decreasing σ-τ-chain in ℐ_n^B, where k ≥ 1,
and let (i_τ,j_τ) be the label on τ_1 ⋖ τ.
Observe that if we can prove that τ_1 ∈ ℱ_n^B, it follows that τ_1,…,τ_k ∈ ℱ_n^B, because the decreasing σ-τ_1-chain in ℐ_n^B is σ ⋖ τ_k ⋖ ⋯ ⋖ τ_2 ⋖ τ_1.
Let h=di(σ,τ).
Since σ has no fixed points, h is an exceedance of σ (i.e., σ(h)>h).
Indeed, if σ(i)<i, then σ(i) is an exceedance of σ, and σ(σ(i))=τ(σ(i)) ⟹ σ(i)=τ(i).
Consider any cover π ⋖ π' in [σ,τ].
Two applications of <ref> yield di(π,π') ≥ di(π,τ) ≥ h.
Since the label (i,j) on π ⋖ π' satisfies i=di(π,π'), it follows that h is in some label on C_D, and because C_D is decreasing, i_τ=h.
Suppose τ_1 ∉ ℱ_n^B, so that τ_1 ⋖ τ is a case where the number of fixed points decreases.
The only such cases are E1.1, N1.1, N4.2, and H2.1, and in all of them, h is a fixed point of τ_1.
Observe that τ_1 has fixed points also in E2.6a, N1.2a, N1.2b, N1.3a, N1.3b, N2.2, H1.2, H2.2a, H2.2b, and H2.3, and h is not always one of them, but the number of fixed points does not decrease in those cases.
Therefore, σ[h,h+1]>τ_1[h,h+1].
Since σ ≤ τ_1, this contradicts <ref>; hence τ_1 ∈ ℱ_n^B.
We have proved that the decreasing σ-τ-chain in ℐ_n^B is contained in ℱ_n^B.
The next step is to show that this chain is lex-maximal.
This is done in the same way as in the proof of <cit.>, where Incitti's type A version of Lemma <ref> is used.
We include the argument here in order to make the proof complete.
To obtain a contradiction, consider the lex-maximal σ-τ-chain in ℱ_n^B, and suppose it is not decreasing;
call it C=(π_1 ⋖ ⋯ ⋖ π_k+2) and say that λ(π_1,π_2) ≤ λ(π_2,π_3).
By Lemma <ref>, π_1 ⋖ π_2 ⋖ π_3 is lex-minimal among the π_1-π_3-chains in ℐ_n^B.
Thus, π_1 ⋖ π_2' ⋖ π_3 ⋖ ⋯ ⋖ π_k+2, where π_1 ⋖ π_2' ⋖ π_3 is the decreasing π_1-π_3-chain, is lex-larger than C, a contradiction.
So, the decreasing σ-τ-chain in ℐ_n^B is contained in ℱ_n^B and it is lex-maximal.
Now, by reversing the lexicographic order on the set {(i,j) ∈ [± n]^2 | i<j}, we obtain an edge-labelling of ℱ_n^B such that in each interval [σ,τ], there is an increasing σ-τ-chain which is lex-minimal.
By Lemma <ref> and the remark following it, this is an EL-labelling of ℱ_n^B.
The fact that the decreasing chain in ℐ_n^B is contained in ℱ_n^B allows us to compute the dimension of the PL ball Δ(ℱ_n^B-{0̂,1̂}) as ρ(1̂)-ρ(0̂)-2, where ρ is the rank function in ℐ_n^B.
In ℱ_n^B, 1̂ is the reverse permutation w_0 and 0̂ is the product (-n,-n+1)(-n+2,-n+3)⋯(n-1,n) of adjacent transpositions.
The length of σ in S_n^B is
ℓ(σ) = (inv(σ)+neg(σ))/2,
where
inv(σ) = |{(i,j) ∈ [± n]^2 | i<j and σ(i)>σ(j)}|
(the number of inversions of σ, i.e., the length of σ in S_2n) and
neg(σ) = |{i ∈ [n] | σ(i)<0}|.
By <cit.>,
ρ(σ) = (ℓ(σ)+d(σ))/2,
where
d(σ) = |{i ∈ [n] | -i ≤ σ(i)<i}|.
It follows that ρ(1̂) = (n^2+n)/2.
Furthermore, ρ(0̂) = n/2 if n is even and (n+1)/2 if n is odd, so the dimension is
n^2/2-2 if n is even, and (n^2-1)/2-2 if n is odd.
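These rank computations are easy to verify numerically; the sketch below (our own illustration, with d(σ) as defined above) checks ρ(1̂) = (n^2+n)/2 for a small n.

def rank_B(sigma, n):
    # rho(sigma) = (l(sigma) + d(sigma))/2, with l(sigma) = (inv(sigma) + neg(sigma))/2
    idx = [i for i in range(-n, n + 1) if i != 0]
    inv = sum(1 for a in range(len(idx)) for b in range(a + 1, len(idx))
              if sigma[idx[a]] > sigma[idx[b]])
    neg = sum(1 for i in range(1, n + 1) if sigma[i] < 0)
    d = sum(1 for i in range(1, n + 1) if -i <= sigma[i] < i)
    return ((inv + neg) // 2 + d) // 2

n = 3
w0 = {i: -i for i in range(-n, n + 1) if i != 0}   # the reverse permutation
assert rank_B(w0, n) == (n * n + n) // 2           # rho(1-hat) = (n^2 + n)/2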
|
http://arxiv.org/abs/2307.01199v1
|
20230703175920
|
NeuBTF: Neural fields for BTF encoding and transfer
|
[
"Carlos Rodriguez-Pardo",
"Konstantinos Kazatzis",
"Jorge Lopez-Moreno",
"Elena Garces"
] |
cs.CV
|
[
"cs.CV",
"cs.AI",
"cs.GR",
"cs.LG",
"68T07 (Primary) 68T45, 68U10, 68U05 (Secondary)",
"I.4.0; I.2.6; I.3.0"
] |
Preprint Submitted for CEIG'23
Carlos Rodriguez-Pardo (corresponding author, carlos.rodriguezpardo.jimenez@gmail.com), Konstantinos Kazatzis, Jorge Lopez-Moreno, Elena Garces
Affiliations: SEDDI and Universidad Rey Juan Carlos (Rodriguez-Pardo, Lopez-Moreno, Garces); SEDDI (Kazatzis)
August 1, 2023
Neural material representations are becoming a popular way to represent materials for rendering. They are more expressive than analytic models and occupy less memory than tabulated BTFs. However, existing neural materials are immutable, meaning that their output for a certain query of UVs, camera, and light vector is fixed once they are trained.
While this is practical when there is no need to edit the material, it can become very limiting when the fragment of the material used for training is too small or not tileable, which frequently happens when the material has been captured with a gonioreflectometer.
In this paper, we propose a novel neural material representation which jointly tackles the problems of BTF compression, tiling, and extrapolation.
At test time, our method uses a guidance image as input to condition the neural BTF to the structural features of this input image. Then, the neural BTF can be queried as a regular BTF using UVs, camera, and light vectors.
Every component in our framework is purposefully designed to maximize BTF encoding quality at minimal parameter count and computational complexity,
achieving competitive compression rates compared with previous work.
We demonstrate the results of our method on a variety of synthetic and captured materials, showing its generality and capacity to learn to represent many optical properties.
Keywords: neural fields, reflectance, rendering, BTF compression
§ INTRODUCTION
A common approach to modeling real-world spatially-varying materials in computer graphics is through the use of Bidirectional Texture Functions (BTFs). This type of representation models the dense optical response of the material, and is more general than analytic representations such as microfacet SVBRDF. However, BTFs can occupy large amounts of memory.
Recently, neural material representations are being proposed as a learning-based alternative to tabulated BTFs, providing a more compact solution while keeping the flexibility and generality of BTFs.
Creating digital representations of real material samples requires using an optical capture device, such as a gonioreflectometer,
a smartphone <cit.>, or a flatbed scanner <cit.>. During the process, several choices must be made. First, it is important to select a patch of the material that contains enough spatial variability. Second, a process –automatic or manual– must be found to produce a tileable material that can be used to create seamless 3D renders. Finally, resources must be allocated for storage as needed.
Making these choices when dealing with implicit or tabulated representations, such as in BTFs or neural materials, is particularly crucial. Once these representations are trained or captured, they cannot be easily modified and it is only possible to query them using the UVs, light, and camera vectors.
In this paper, we propose a novel neural material representation that addresses these issues. Unlike existing neural approaches, which are immutable once trained <cit.>, our model can be queried at test time with a guidance image that conditions the neural BTF on the structure of that image. Our approach resembles synthesis by example and procedural processes, and can be used to extrapolate BTFs to large material samples, as well as to easily create tileable ones. Furthermore, our method achieves better compression rates than previous work on neural BTF representations.
To achieve this, we present a novel method that works in two steps.
In the first step, we condition the neural BTF using a guidance image as input. To this end, we use an autoencoder that outputs a high-dimensional latent representation of the material, a neural texture, which jointly encodes reflectance and structural properties. In the second step, the feature at the queried UV position of the latent representation, along with the camera and light vectors, is decoded by a fully-convolutional sinusoidal decoder, a neural renderer, to obtain the RGB values.
Using a single BTF as input, we train the network end-to-end using a custom training procedure, loss function, and data augmentation policy. This policy, inspired by recent work on attribute transfer <cit.>, allows the autoencoder to encode the relationship between structural features and reflectance, enabling the propagation of the BTF to novel input guidances.
Once trained, the novel input guidances may come from the same material, a different material, or a structural pattern. An input guidance of the same material can be used to extrapolate the BTF to larger samples or create tileable BTFs, provided the input guidance is tileable. If the input guidance is a structural pattern, the local features can be used to synthesize novel materials.
In summary, we propose the following contributions:
* The first neural BTF representation with conditional input that can be used to extrapolate BTF measurements, easily create tileable BTFs, and synthesize novel materials.
* We show how to leverage our system for rendering large-scale and tileable neural BTF generation using measurements captured with small portions of the material.
* We demonstrate that our method works with synthetic and captured materials of diverse optical properties, including colored specular or anisotropy.
We provide additional results, supplementary materials, and implementation details at https://carlosrodriguezpardo.es/projects/NeuBTF/our project website.
§ RELATED WORK
An accurate method for representing the optical properties of materials is through Bidirectional Texture Functions (BTFs) <cit.>. BTFs are 6D functions that characterize all possible combinations of incoming and outgoing light and camera directions for the 2D spatial extent of a material.
Although they are successful in representing materials, they have a major drawback in terms of memory requirements. Therefore, BTF compression has been a major research topic <cit.>.
Non-neural approaches used dimensionality reduction techniques such as Principal Component Analysis (PCA) <cit.>, vector quantization <cit.>, or clustering <cit.>. However, these approaches were recently surpassed by neural models <cit.> due to their flexibility and superior capacity to learn non-linear functions.
*Neural BTFs
Rainer et al. <cit.> proposed the first method to use deep autoencoders to compress BTFs, surpassing PCA <cit.> on captured BTFs. However, this approach required training a single neural network per material. To address this limitation, a later work by the same authors <cit.> proposed a generalization of this idea in which a single network was able to generalize to a variety of materials.
Although these methods were very effective for compressing flat materials, they had some limitations when it came to modeling materials with volume. In their work, Kuznetsov et al. <cit.> improved the quality of neural materials by introducing a neural offset module that captures parallax effects. Further, they method also allowed for level-of-detail though MIP mapping by training a multi-resolution neural representation.
However, grazing angles and silhouette effects remained a challenge for this approach. In a subsequent work, Kuznetsov et al. <cit.> explicitly trained the network using queries that span surface curvatures, effectively handling these cases.
Representing fur, fabrics, and grass with neural reflectance fields was explored by Baatz et al. <cit.> who proposed a representation that jointly models reflectance and geometry.
All of these approaches share the idea of querying neural material using UVs, camera, and lighting vectors, but do not provide any functionality for modifying the material once the network is trained. In contrast, our approach can take a guidance image as input, which conditions the output to generate material variability.
*Material synthesis and tiling
Texture synthesis is a long-standing problem in the field of computer graphics. The goal is to reconstruct a larger image given a small sample, leveraging the structural content and internal statistics of the input image. This concept has been used for synthesizing single images, BTFs, and full material models. For images, the most common strategies include PatchMatch <cit.>, texture transport <cit.>, point processes <cit.>, or neural networks <cit.>.
BTF synthesis, however, has received less attention. Steinhausen et al. <cit.> extrapolated BTF captures to larger material samples using non-neural texture synthesis methods. For full materials, Li <cit.> captured the appearance of materials by first estimating their BRDF and then synthesizing the high-resolution micro-structure from a dataset of measured SVBRDFs. Nagano <cit.> measured microscopic patches of the skin and used a convolutional filter to propagate the measurements to a spatially-varying texture. Deschaintre <cit.> used an autoencoder to propagate SVBRDFs to large material samples. Also recently, Rodriguez-Pardo and Garces <cit.> propagated any kind of visual attribute having a single image as guidance. Their approach shares some similarity to ours, although they transfer 2D image attributes, while we transfer the full BTF.
Procedural models <cit.> are nowadays very successful for generating tileable materials. Thanks to the use of a tileable template, these methods adjust the generated image to the features available in the template. As we show, our approach can also work with a binary template as input. However, guaranteeing predictable outputs given this kind of input is out of the scope of our technique, which can transfer BTF measurements having as input a guidance image of the same material.
*Other neural representations in rendering
Limited to BRDFs, neural networks trained with adaptive angular sampling have been explored to enable importance sampling <cit.>, needed for Monte Carlo integration. Deep latent representations also allow for BRDF editions. For example, Hu et al. <cit.> demonstrate that autoencoders can outperform classic PCA for the purpose of editing.
Other applications of neural encodings in rendering are numerous. For instance, they have been used for scene prefiltering <cit.>, where geometry and materials are simplified to accommodate the LoD of the scene using a voxel-based representation and trained latent encodings. For anisotropic microfacets, Gauthier et al. <cit.> propose a cascaded architecture able to adjust the material parameters to the MIP mapping level. Encoding light transport using neural networks for real-time global illumination has also been explored <cit.>, showcasing promising results.
§ METHOD
We present an overview of our approach in Figure <ref>, where we show our inference and training pipelines. Our goal is twofold: First, find a compact representation for a BTF through the use of neural networks. Second, enable the extrapolation of the BTF according to guidance images used as input. In Section <ref> we describe our inference pipeline and neural network, and in Section <ref> our training process. Section <ref> contains specific implementation details and design of the neural networks.
§.§ Inference
Our neural network is composed of three modules: an autoencoder 𝒜, a neural texture 𝒯∈ℝ^H × W × D, and a renderer ℛ.
The renderer ℛ(𝒯(u,v), ω_o, ω_i) = RGB takes as input the feature vector at the (u,v) coordinates of the neural texture 𝒯, the view and light directions ω_o and ω_i, and returns an RGB value. ℛ acts as a conventional BTF and can be used as such in any render engine.
An input guidance image, 𝒢∈ℝ^H × W × 3, is used during inference to condition the generation of the neural texture 𝒯.
This conditioning allows us to propagate the learned reflectance to novel guidance images that can be: a larger sample of the same material, a different material, or a structural image. In the simpler case, the guidance image comes from the BTF used for training, and our process is equivalent to previous work <cit.>.
The autoencoder 𝒜 takes the guidance image 𝒢∈ℝ^H × W × 3 as input and outputs a neural texture 𝒯∈ℝ^H × W × D with the same size H × W as the input guidance image but with more latent dimensions D. As a result, each pixel in the guidance image has a higher-dimensional neural representation in 𝒯. Because it is trained without explicit supervision, this latent representation can capture the reflectance and structural patterns automatically.
Kuznetsov <cit.> also used a latent neural representation of the material; however, lacking the initial autoencoder, their approach cannot synthesize novel BTFs without retraining, while our conditioning module allows us to generate novel BTFs at test time.
The autoencoder and the renderer are neural networks trained jointly, using the end-to-end image-to-image approach described below.
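A minimal sketch of the two-step query follows. The layer sizes, the ReLU decoder, and the nearest-neighbour UV lookup are simplifying assumptions for illustration only; the actual renderer is a fully-convolutional sinusoidal decoder.

import torch
import torch.nn as nn

class NeuralBTF(nn.Module):
    def __init__(self, d_latent=16):
        super().__init__()
        # guidance (B, 3, H, W) -> neural texture (B, D, H, W)
        self.autoencoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_latent, 3, padding=1),
        )
        # per-query decoder: latent feature + view + light -> RGB
        self.renderer = nn.Sequential(
            nn.Linear(d_latent + 6, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, guidance, uv, wo, wi):
        tex = self.autoencoder(guidance)             # conditioning step
        B, D, H, W = tex.shape
        u = (uv[:, 0] * (W - 1)).long().clamp(0, W - 1)
        v = (uv[:, 1] * (H - 1)).long().clamp(0, H - 1)
        feat = tex[torch.arange(B), :, v, u]         # (B, D) latent per query
        return self.renderer(torch.cat([feat, wo, wi], dim=-1))

In a render engine, the neural texture would be computed once per material and cached, so that only the small renderer runs per shading query.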
§.§ Training
Figure <ref> (bottom) illustrates our training process. It has two objectives: First, equivalent to regular BTF encoding, we aim to find the mapping between camera direction ω_o, light direction ω_i, and output slices of the BTF: (ω_o, ω_i) → V_ω_o,ω_i ∈ BTF. Second, we aim to condition the synthesis process with an input guidance image, 𝒢.
To this end, for training, we feed the model with images V_ω̃_o,ω̃_i, which are randomly sampled from the BTF and subjected to additional data augmentation.
This extensive augmentation guarantees invariance to different input variations at test time, such as camera or illumination conditions, while keeping the outputs consistent.
*Loss Function Design
Our loss function, which compares ground truth slices V_ω_o,ω_i with generated ones ℛ(𝒜(f(V_ω̃_o,ω̃_i)), ω_o, ω_i), where f denotes the data augmentation applied to the input slice, is a weighted sum of three terms: a pixel-wise loss, a style loss, and a frequency loss,
ℒ = λ_ℒ_1ℒ_1 + λ_styleℒ_style + λ_freqℒ_freq
The main driver of our loss is the pixel-wise norm ℒ_1. ℒ_1 produces sharper results than higher-order alternatives, such as ℒ_2 <cit.>. Following <cit.>, we apply a log(x+1) compression to improve the model results on high dynamic range; this compression is only applied to the pixel-wise component of the loss. Inspired by recent work on texture synthesis, capture, and transfer <cit.>, we introduce a ℒ_style loss to help the model generate higher quality and sharper results. Further, to mitigate the spectral bias of convolutional neural networks, we also introduce a focal frequency loss into our learning framework <cit.>. This combination of loss functions proves effective for our problem, without the need for complex adversarial losses, which could reduce efficiency or destabilize training.
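A sketch of the combined objective is given below; the loss weights and the style backbone are assumptions, and a plain spectral L2 distance stands in for the focal frequency loss of <cit.>.

import torch
import torch.nn.functional as F

def btf_loss(pred, target, style_fn, lam_l1=1.0, lam_style=0.1, lam_freq=0.1):
    # style_fn maps an image to a list of feature maps (e.g. VGG activations).
    # pixel term on log(x+1)-compressed HDR values
    l1 = F.l1_loss(torch.log1p(pred.clamp(min=0)), torch.log1p(target.clamp(min=0)))
    # style term: Gram-matrix distance between feature maps
    def gram(f):
        b, c, h, w = f.shape
        f = f.reshape(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)
    style = sum(F.mse_loss(gram(fp), gram(ft))
                for fp, ft in zip(style_fn(pred), style_fn(target)))
    # frequency term: unweighted distance between 2D Fourier spectra
    freq = (torch.fft.fft2(pred) - torch.fft.fft2(target)).abs().pow(2).mean()
    return lam_l1 * l1 + lam_style * style + lam_freq * freq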
Our training objective is to minimize a loss function ℒ by learning to map from input V_c_i, l_i to target V_c_t, l_t images, as follows:
min_𝒜,ℛ ℒ(V_c_t, l_t, ℛ(𝒜(V_c_i, l_i), c_t, l_t)), ∀ c_i, c_t ∈ C, l_i, l_t ∈ L,
where C and L are the sets of available camera and lighting positions of the input BTF, respectively. Note that this optimization objective does not explicitly optimize the Neural Texture 𝒯: its values are learned without supervision. 𝒯 must have values that are both easy to transfer and which can effectively represent the reflectance of the material, decoded by the renderer ℛ. This double optimization objective shapes the type of properties which can be represented in 𝒯, which we explore qualitatively in Section <ref>. We design the loss function, training procedure, and the architecture of 𝒜 and ℛ to maximize the reconstruction quality of the input BTF, to enable high-quality propagations at test time, and to maintain a low parameter count, maximizing compression.
*Data Augmentation
We train our models using a comprehensive data augmentation policy aimed at achieving high-quality reflectance propagation, increasing performance and generalization, and allowing for the generation of materials at multiple resolutions at test time. We build upon recent work on material transfer <cit.> and use images of the material taken under different illumination and viewing conditions as inputs to our autoencoder. This helps it generalize to novel capture setups, which allows for multiple applications we describe in Section <ref>. In particular, we use every image available in the input BTF, selected uniformly at random for each element in each batch during training.
As in <cit.>, we also use random rescaling, which helps the model generalize to new scales and build neural materials of multiple resolutions at test time, as we describe in Section <ref>.
Inspired by recent work on image synthesis <cit.>, we use random cropping, which helps generalization by effectively increasing the dataset size. Finally, we extend the color augmentation policy in <cit.> with random hue changes across the entire color wheel, and introduce random Gaussian noise and blurs to the input images to help the model generalize further, as proposed in <cit.>.
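The following is a hedged Python sketch of such an augmentation policy. The scale range, crop size, noise level, and the channel-roll stand-in for a hue change are our assumptions, not the paper's exact parameters.

```python
# Sketch of the training-time augmentation policy described above; all
# magnitudes (scale range, noise level, crop size) are assumptions.
import random
import torch
import torch.nn.functional as F

def augment(img: torch.Tensor, crop: int = 128) -> torch.Tensor:
    """img: (3, H, W) BTF slice under a random view/light pair."""
    # Random rescaling (helps generalize to new scales / resolutions).
    scale = random.uniform(0.75, 1.33)
    img = F.interpolate(img[None], scale_factor=scale,
                        mode='bilinear', align_corners=False)[0]
    # Random cropping (effectively enlarges the dataset).
    _, h, w = img.shape
    y, x = random.randint(0, h - crop), random.randint(0, w - crop)
    img = img[:, y:y + crop, x:x + crop]
    # Crude stand-in for a full-color-wheel hue change: rotate RGB channels.
    img = img.roll(shifts=random.randint(0, 2), dims=0)
    # Random Gaussian noise (random Gaussian blur is omitted for brevity).
    img = img + 0.01 * torch.randn_like(img)
    return img.clamp(0.0, 1.0)

slice_ = torch.rand(3, 256, 256)   # a V slice sampled from the BTF
batch = torch.stack([augment(slice_) for _ in range(4)])
```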
*Loss Function Design
Our loss function, which compares ground-truth slices V_ω_o, ω_i with generated ones ℛ(𝒜(f(V_ω̃_o, ω̃_i)), ω_o, ω_i), is a weighted sum of three terms: a pixel-wise loss, a style loss, and a frequency loss:
ℒ = λ_ℒ_1ℒ_1 + λ_styleℒ_style + λ_freqℒ_freq
The main driver of our loss is the pixel-wise norm ℒ_1. ℒ_1 produces sharper results than higher-order alternatives, such as ℒ_2 <cit.>. Following <cit.>, we apply a log(x+1) compression to improve the model results on high dynamic range. This compression is only done to the pixel-wise component of the loss function. Inspired by recent work on texture synthesis, capture and transfer <cit.>, we introduce a ℒ_style loss to help the model generate higher quality and sharper results. Further, to mitigate the spectral bias of convolutional neural networks and help ameliorate the results further, we also introduce a focal frequency loss into our learning framework <cit.>. This combination of loss functions proves effective for our problem, without the need for complex adversarial losses which could reduce efficiency or destabilize training.
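As an illustration, here is a simplified PyTorch sketch of this three-term loss. The loss weights, the identity stand-in for the style feature extractor (in practice a pretrained network such as VGG), and the particular focal re-weighting are assumptions rather than the exact formulation.

```python
# Simplified sketch of the three-term loss; weights and the feature
# extractor are placeholders, not the paper's values.
import torch

def gram(feats: torch.Tensor) -> torch.Tensor:
    # Gram matrix of (B, C, H, W) feature maps, used by the style term.
    b, c, h, w = feats.shape
    f = feats.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def neubtf_loss(pred, target, feature_extractor,
                w_l1=1.0, w_style=0.1, w_freq=0.1):
    # Pixel-wise L1 on log(x+1)-compressed HDR values.
    l1 = (torch.log1p(pred) - torch.log1p(target)).abs().mean()
    # Style loss: Gram-matrix distance between feature maps.
    style = (gram(feature_extractor(pred)) -
             gram(feature_extractor(target))).abs().mean()
    # Focal frequency loss (simplified): spectrum error, re-weighted so
    # that frequencies with large errors dominate.
    diff = torch.fft.fft2(pred) - torch.fft.fft2(target)
    mag2 = diff.real ** 2 + diff.imag ** 2
    freq = (mag2.sqrt().detach() * mag2).mean()
    return w_l1 * l1 + w_style * style + w_freq * freq

pred = torch.rand(1, 3, 64, 64, requires_grad=True)
target = torch.rand(1, 3, 64, 64)
loss = neubtf_loss(pred, target, feature_extractor=lambda x: x)  # identity stand-in
loss.backward()
```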
§.§ Inference
Once the model has been trained, we can generate the final Neural BTF. To do so, we require a guidance image 𝒢 ∈ ℝ^H × W × 3 as input to the autoencoder 𝒜. 𝒢 can take many forms: it can be an image of the input BTF, a larger portion of the same material where more variations are represented, a tileable texture, or even a synthetic image to which we want to propagate reflectance values, as shown in Figure <ref>. From it, we generate the Neural Texture 𝒜(𝒢) = 𝒯_𝒢 ∈ ℝ^H × W × D.
𝒯_𝒢 and the decoder ℛ form the final Neural BTF, which can then be queried as a regular BTF, using input UVs, camera and light vectors. We illustrate the generation of 𝒯_𝒢 and the usage of the Neural BTF in rendering scenarios in Figure <ref>.
§ MODEL DESIGN AND IMPLEMENTATION
We provide extensive implementation details for model sizes, training, and data generation in the supplementary material.
*Autoencoder For the autoencoder, we use a lightweight U-Net <cit.> with a few modifications to tailor it to our problem. Inspired by recent work on CNN design, we leverage ConvNext <cit.> blocks across our model, with depth-wise convolutions using 5×5 kernels. We empirically observe that ConvNext blocks achieve higher-quality structural editions at a lower parameter count than vanilla U-Net blocks. To further help convergence and preserve details in the input images, we use residual connections <cit.> in every convolutional block of the model. We use 1×1 convolutions on the skip connections and residual scaling <cit.>. As in <cit.>, we use Layer Normalization <cit.> and GELU non-linearities <cit.>. In the bottleneck of the model, we introduce an attention module <cit.> to help the model learn longer-range dependencies. To avoid checkerboard artifacts <cit.>, we use nearest-neighbor interpolation for upsampling. Inspired by recent work on tileable material generation <cit.>, we use circular padding throughout the model. We initialize its weights using orthogonal initialization <cit.>, which helps avoid exploding gradients.
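For concreteness, the following is a hedged PyTorch sketch of one such ConvNext-style block, with a 5×5 depth-wise convolution, circular padding, Layer Normalization, GELU, and a residual connection. Channel sizes and the expansion factor are illustrative assumptions.

```python
# Hedged sketch of one ConvNext-style autoencoder block as described.
import torch
import torch.nn as nn

class ConvNextBlock(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # Depth-wise 5x5 conv with circular padding (for tileability).
        self.dw = nn.Conv2d(dim, dim, 5, padding=2,
                            padding_mode='circular', groups=dim)
        self.norm = nn.LayerNorm(dim)          # applied channel-last
        self.pw1 = nn.Linear(dim, 4 * dim)     # point-wise expansion
        self.act = nn.GELU()
        self.pw2 = nn.Linear(4 * dim, dim)

    def forward(self, x):                      # x: (B, C, H, W)
        h = self.dw(x).permute(0, 2, 3, 1)     # -> (B, H, W, C)
        h = self.pw2(self.act(self.pw1(self.norm(h))))
        return x + h.permute(0, 3, 1, 2)       # residual connection

block = ConvNextBlock()
out = block(torch.rand(1, 64, 32, 32))         # shape is preserved
```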
*Renderer For the renderer, we build upon SIREN <cit.> MLPs, with additional modifications to enhance its performance for our problem. We use 1×1 convolutions instead of vanilla linear layers, to allow for end-to-end training using 2D images. Further, we introduce Layer Normalization <cit.> before each sinusoidal non-linearity, which stabilizes training. Finally, inspired by <cit.>, we use residual connections <cit.>, to help preserve the information of the input vector across the decoder layers. Model weight initialization follows <cit.>. With sinusoidal activations, we observe significantly higher reconstruction quality and training dynamics than with ReLU <cit.> MLPs, which are common for BTF compression <cit.>.
Because the network is fully convolutional, it can take as input feature vectors of any size. This is very convenient for our use cases, where the input guidance image may have a size different from that of the original BTF used to train it. The renderer can be evaluated very efficiently on the GPU, at an average of 2.514×10^-4 ± 4.48×10^-5 ms per sample.
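Below is a hedged sketch of one such sinusoidal layer. The frequency scale ω_0 and the use of GroupNorm(1, ·) as a convolutional stand-in for Layer Normalization are our assumptions.

```python
# Sketch of one sinusoidal renderer layer as described: a 1x1 convolution
# (so the MLP runs on 2D feature images end-to-end), normalization before
# the sine, and a residual connection.
import torch
import torch.nn as nn

class SirenConvLayer(nn.Module):
    def __init__(self, dim: int = 64, omega0: float = 30.0):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, kernel_size=1)  # 1x1 "linear" layer
        self.norm = nn.GroupNorm(1, dim)  # per-sample stand-in for LayerNorm
        self.omega0 = omega0              # assumed SIREN frequency scale

    def forward(self, x):                 # x: (B, C, H, W)
        return x + torch.sin(self.omega0 * self.norm(self.conv(x)))

layer = SirenConvLayer()
y = layer(torch.rand(2, 64, 16, 16))
```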
§ EVALUATION
§.§ Ablation Study
Here:
* Qualitative study of the influence of SIREN, loss functions, RAdam, orthogonal initialization, autoencoder architecture, etc.
* Loss with respect to number of layers in renderer and autoencoder, number of latent parameters, width of renderer, etc.
* Quantitative study of these factors.
* Speed vs. accuracy.
* Use render-aware losses as in UMat.
* Quantitative comparison with Rainer2019 and Rainer2020 on Bonn and UTIA materials (try to obtain them).
* Qualitative comparison using images.
* BRDF slices using a BRDF viewer.
§.§ Qualitative and Quantitative Analysis
We evaluate our method on materials from different sources including acquired BTFs from <cit.>, and rendered BTFs from procedurally generated and scanned SVBRDFs. In Figure <ref>, we show examples of the results of NeuBTF for a variety of materials with highly complex structures and reflectance properties, like colored specular (first column) or anisotropy (last). We show some additional results in Figure <ref> for materials of different datasets. As shown, our model achieves high quality reconstructions regardless on the type of data source. In Table <ref>, we show the reconstruction error for the same materials, averaged across the full directional space, for a variety of pixel-wise and perceptual metrics.
We also evaluate whether the encoded materials preserve the geometric properties of their training datasets. We leverage Photometric Stereo <cit.> to compute surface normals of encoded and ground-truth materials. We provide results in Figure <ref>, where we show that the reconstructed surface geometry is accurate, albeit noisier, and preserves the structure of the material.
Finally, in Figure <ref>, we show a colored visualization for a few channels of the latent neural texture 𝒯 found for a variety of materials. Because the values of the neural texture are unbounded, we standardize each channel c ∈ 𝒯 to zero mean and unit variance and apply a sigmoid non-linearity, sigmoid(c) = 1/(1 + e^-c), to make the maps comparable. Without any explicit training, the models learn to separate distinct parts of the material. For example, the model finds distinct latent spaces for warp and weft yarns on woven fabrics, or separation between color and geometric patterns. This disentanglement provides clues on why the material propagation is possible, and suggests potential future research directions for fine-grained neural material edition.
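The per-channel normalization just described can be sketched as follows; the latent dimension is an arbitrary placeholder.

```python
# Sketch of the channel visualization described above: standardize each
# latent channel to zero mean / unit variance, then squash with a sigmoid
# so the channels become comparable.
import torch

def visualize_channels(neural_texture: torch.Tensor) -> torch.Tensor:
    """neural_texture: (D, H, W) -> per-channel maps in (0, 1)."""
    d = neural_texture.shape[0]
    flat = neural_texture.reshape(d, -1)
    mean = flat.mean(dim=1, keepdim=True)
    std = flat.std(dim=1, keepdim=True) + 1e-8
    standardized = (flat - mean) / std
    return torch.sigmoid(standardized).reshape_as(neural_texture)

maps = visualize_channels(torch.randn(16, 128, 128))
```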
§.§ Compression comparisons with previous work
In Table <ref>, we show the number of trainable parameters in the decoders of different neural BTF compression algorithms. As shown, our model is competitive with previous work in terms of trainable parameters. This is achieved because we use more complex loss functions than previous work, which help regularize the models, and because our sinusoidal MLP achieves higher-quality reconstructions for natural signals than ReLU MLPs, as shown in <cit.>. NeuMIP <cit.> uses smaller MLPs; however, it requires an additional decoder for its neural offset module, which helps it encode parallax effects (see Figure <ref>), with which our model struggles. Our decoder has one order of magnitude fewer parameters than <cit.>; however, the method in <cit.> provides the benefit of fast encoding of new materials, while ours requires a different model for each new material.
§.§ Limitations
As we show in Figure <ref>, our model struggles with materials with strong displacement. While our method provides accurate encodings for viewing angles close to the material surface, it cannot accurately encode grazing angles in such extreme cases. Displacement maps translate the geometric position of the points over the surface, breaking the underlying assumptions behind our neural texture. NeuMIP <cit.> solves this issue by explicitly modelling parallax effects with a neural offset module. While we did not observe that such an extension was needed for acquired BTF data, like the UBO2014 <cit.> dataset, introducing a similar module into our editable neural material framework is an interesting future research direction to increase its generality.
Our representation also shares limitations inherent to BTFs, such as materials with holes or strong transmittance effects. As discussed above, and in contrast to NeuMIP, our model struggles with strong parallax effects; extending the neural representation to mitigate these cases with a comparable parameter count is left for future work.
§ APPLICATIONS
§.§ Reflectance Propagation and Tileable Neural BTFs
Many material reflectance acquisition devices are limited in the surface dimensions they can digitize. This hinders their applicability to many real-world materials, which exhibit variations that cannot be captured at such small scales. Further, in many applications like SVBRDF acquisition, obtaining larger samples of the material improves realism and helps tileable texture synthesis. In this context, previous work on BTF reflectance compression inherits the surface area limitations of the capture devices used to generate its training data. Our method can easily be applied to reflectance propagation. We build upon the work of Rodriguez-Pardo and Garces <cit.> and leverage our encoder to propagate the neural texture optimized using a small portion of the material (e.g., a 1×1 cm capture) to a larger portion of the same material, represented with a guidance image captured using a commodity device like a flatbed scanner. Because our model is trained using a wide range of lighting conditions, as in <cit.>, the propagation is invariant to how the images are illuminated. We show results of such a pipeline in Figure <ref>. For instance, in the last row, we show an anisotropic and specular Silver Jacquard fabric, for which we generated a BTF by rendering a 1×1 cm SVBRDF. This small crop cannot represent the complex pattern in the fabric, shown in the guidance image, which covers a 10×10 cm area. Using our encoder, we propagate the neural material to this guidance image, generating a new, high-resolution latent space which we can render, enabling realistic material representations with a reduced digitization cost. This propagated neural material has a resolution of 2000×2000 texels and requires no retraining at test time.
Relatedly, our propagation framework can also be easily leveraged for generating tileable BTFs. Given any guidance image of the material, we can generate a tileable version of it, either using manual editions by artists or automatic algorithms <cit.>. With this tileable input guidance, we can use our autoencoder 𝒜 to propagate the neural texture 𝒯, effectively generating tileable BTFs, as we show in Figure <ref>. This propagation algorithm can leverage state-of-the-art algorithms for tileable texture synthesis without any modification to our material model or training framework. Tileable BTFs were not achievable with previous approaches, and this simple pipeline has the potential to enable novel applications of this type of material representation in rendering scenarios.
§.§ Structural Material Edition
Besides propagating BTF measurements to larger portions of the material, NeuBTF allows for generating novel materials using structural editions. Given a trained NeuBTF and a guidance image representing some particular target structure, we can propagate the neural texture to this guidance image, generating high-quality neural materials which preserve the structure of the guidance image and the reflectance properties of the trained neural material. This pipeline allows for easily generating multiple different neural materials without the need for retraining. As we show in Figure <ref>, this propagation method works for many types of input guidance, including vector black-and-white images, procedurally generated textures, and real photographs of materials and textures. We show results on acquired BTFs and on synthetic BTFs rendered from scanned and manually generated SVBRDFs. As shown, our propagation framework provides high-quality material editions, even for very challenging cases, like the circles pattern.
§.§ Multi Resolution Neural Materials
Another useful application enabled by our method is the generation of materials at different resolutions. Unlike previous work <cit.>, which explicitly optimizes a pyramid of levels of detail during training, we can generate materials at any resolution at test time without introducing any additional complexities to our material representation.
Because we train our models using random rescales as a data augmentation policy, they are equivariant to rescaling of the input guidance image: ℛ(𝒜(𝒢))↓ = ℛ(𝒜(𝒢↓)). As such, we can generate any continuous resolution for a particular BTF by downsampling the guidance image to the target resolution and propagating its neural texture, as we illustrate in Figure <ref>. Note that this algorithm only guarantees accurate results for the rescaling ranges that we use during data augmentation.
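A minimal sketch of this multi-resolution generation is shown below; the resolutions and the stand-in encoder are illustrative assumptions.

```python
# Sketch of multi-resolution generation: downsample the guidance image to
# the target resolution before encoding, exploiting the equivariance above.
import torch
import torch.nn.functional as F

def neural_texture_at(guidance: torch.Tensor, encoder, size: int):
    """guidance: (1, 3, H, W); returns a (D, size, size) neural texture."""
    g_small = F.interpolate(guidance, size=(size, size),
                            mode='bilinear', align_corners=False)
    return encoder(g_small)[0]  # R(A(G down)) approximates R(A(G)) down

guidance = torch.rand(1, 3, 512, 512)
encoder = lambda g: torch.rand(1, 16, g.shape[-2], g.shape[-1])  # stand-in
pyramid = {s: neural_texture_at(guidance, encoder, s) for s in (512, 256, 128)}
```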
§ CONCLUSIONS
We have presented a learning-based representation for material reflectance which provides efficient encoding and powerful propagation capabilities. Our method introduces input conditioning into neural BTF representations. This allows for multiple applications which were not possible with previous neural models, including BTF extrapolation, tiling, and novel material synthesis through structure propagation. Our method builds upon recent work on neural fields, network design, and data augmentation, showing compression capabilities competitive with previous work on neural BTF representation. Through multiple analyses, we have shown the capabilities of our method on a variety of materials with different reflectance properties, including anisotropy or specularity, as well as effectively handling both synthetic and acquired BTFs.
Our method can be extended in several ways. The most immediate extension is to allow for materials with strong parallax effects due to displacement mapping or curvature, as in <cit.>. Our representation is limited to opaque materials. Extending them to handle translucent or holed surfaces would increase their realism in materials like thin fabrics or meshes. Further, our method could be extended to allow for hyperspectral BTF data <cit.>, but captured data is scarce.
Besides, recent work on neural BRDF representations <cit.> and generative models <cit.> suggests a promising research direction: Learning to sample from neural BTFs, using invertible neural networks. While these may introduce challenging complexities to the models, they could provide efficient representations for Monte Carlo rendering using importance sampling.
Further, building upon recent work on SVBRDF capture <cit.>, BRDF sampling <cit.> and BTF compression <cit.>, it could be possible to learn a prior over neural BTFs with a generative model. This should help in capturing more efficiently the data needed for generating these assets, as well as generating new materials and interpolating between them.
Finally, editing semantic and reflectance properties in neural fields is an active area of research <cit.>. While our method introduces structural edition into neural BTF representations, it is not capable of editing particular semantic properties, such as albedo or specularity. Extending our edition capabilities to more fine-grained parameters is an interesting research avenue. We hope our method inspires future research on neural material representations.
*Acknowledgments
Elena Garces was partially supported by a Juan de la Cierva - Incorporacion Fellowship (IJC2020-044192-I). This publication is part of the project TaiLOR, CPP2021-008842 funded by MCIN/AEI/10.13039/501100011033 and the NextGenerationEU / PRTR programs.
Online Nearest Neighbor Classification
Sanjoy Dasgupta, Geelon So
======================================
We study an instance of online non-parametric classification in the realizable setting.
In particular, we consider the classical 1-<ref> algorithm, and show that it achieves sublinear regret—that is, a vanishing mistake rate—against dominated or smoothed adversaries in the realizable setting.
§ INTRODUCTION
In online classification, a learner observes a stream of data points x_t from an instance space 𝒳, and it is tasked with sequentially making predictions ŷ_t about their classes y_t coming from some label space 𝒴. At each point in time t = 1,2,…
- the learner is presented with an instance x_t ∈𝒳
- the learner makes a prediction ŷ_t ∈𝒴
- the label y_t is revealed, and the learner incurs some loss ℓ(x_t, y_t, ŷ_t),
where ℓ(x, y,ŷ) is a non-negative, bounded loss function satisfying ℓ(x,y,y) = 0 (there is no penalty for a correct prediction). The learner's performance is given by its regret at any time T, defined as the difference between the learner's cumulative loss and that of the best fixed classifier h : 𝒳→𝒴 that the learner would have chosen in hindsight from some comparator class ℋ,
regret_T := ∑_t=1^T ℓ(x_t, y_t, ŷ_t) - inf_h ∈ℋ ∑_t=1^T ℓ(x_t, y_t, h(x_t)).
Learning in the online setting means achieving sublinear regret, regret_T = o(T), for then the average loss of the online learner is asymptotically no worse than the average loss of the offline learner who had access to the data (x_1, y_1),…, (x_T, y_T) all at once.
While in the worst-case setting, this sequence of instances and labels may be completely arbitrary, we consider the more restrictive realizable setting, in which a concept c : 𝒳→𝒴 is fixed at the onset (though it may be chosen adversarially) and describes the labels y_t = c(x_t) for all time.
In this paper, we further let (𝒳, ρ) be a metric space, and we consider online classification through the 1-<ref> rule. This algorithm, first introduced by <cit.>, is a particularly appealing learning algorithm due to its simplicity: this learner memorizes everything it sees. Then, given some instance x, it searches for the nearest neighbor among previously seen instances x_1,…, x_t, returning the corresponding label as the prediction ŷ. We ask:
Question: What are general conditions under which the 1-<ref> rule achieves sublinear regret in the realizable <ref> setting?
In our setting, when ℋ is the family of all nearest-neighbor classifiers, the best hindsight classifier in ℋ makes no mistakes, and so the regret consists only of the cumulative loss term; we simply aim to understand when the average loss of the nearest neighbor rule converges to zero:
average loss_T := 1/T∑_t=1^T ℓ(x_t, y_t, ŷ_t) → 0.
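Before turning to the negative result, here is a minimal Python sketch of the 1-<ref> rule described above, for instances on the real line. The tie-breaking and the default prediction before any data are our assumptions.

```python
# Minimal sketch of the online 1-NN protocol: memorize every labeled
# instance and predict with the nearest stored neighbor.
from dataclasses import dataclass, field

@dataclass
class OnlineNearestNeighbor:
    xs: list = field(default_factory=list)
    ys: list = field(default_factory=list)

    def predict(self, x: float):
        if not self.xs:                       # no data yet: arbitrary guess
            return 0
        i = min(range(len(self.xs)), key=lambda j: abs(self.xs[j] - x))
        return self.ys[i]

    def update(self, x: float, y):
        self.xs.append(x)
        self.ys.append(y)

learner = OnlineNearestNeighbor()
for x, y in [(0.9, 1), (-0.4, 0), (0.35, 1)]:  # stream of (x_t, y_t = c(x_t))
    y_hat = learner.predict(x)                 # predict, then see the label
    learner.update(x, y)
```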
§.§ A negative result: the worst-case adversary
When the comparator class ℋ can interpolate the sequence of data, learning in the worst-case setting is generally intractable—even in the realizable setting. Unless the learner exactly recovers the underlying concept, a worst-case adversary (or indeed, a best-case teacher) can at each time step find test instances on which the learner errs; the average loss fails to converge to zero.
[Failing to learn the sign function]
Consider the sign function sign(x) := 𝟙{x ≥ 0} on 𝒳 = ℝ. The <ref> rule makes a mistake every round on the sequence of instances:
x_t = (-1/3)^t.
At time t + 1, the nearest neighbor for x_t+1 is x_t, which has the opposite sign (see <Ref>).
The above negative example relies on the worst-case adversary's ability to select instances with arbitrary precision in order to construct a hard sequence. For <ref>, the hardness of a point can be related to its separation from points of different classes—constructing a hard sequence like the one above is possible precisely whenever the classes are not separated:
Let (𝒳, ρ) be a totally bounded metric space and c be a concept. Let ℓ be the zero-one loss ℓ(x, y, ŷ) = 𝟙{y ≠ ŷ}. There is a sequence of instances (x_t)_t on which the <ref> rule fails to achieve sublinear regret on c if and only if there is no positive separation between classes:
inf_c(x) ≠ c(x') ρ(x,x') = 0.
This makes sense, since the inductive bias built into the nearest neighbor rule is that most points are surrounded by other points of the same class (though one might have to zoom in very close to a point before the labels of its surrounding neighbors become pure). Boundary points are not amenable to the nearest neighbor rule since their labels can't be learned from neighbors, nor do their labels consistently generalize to nearby points.
Intuitively, the nearest neighbor learner fares poorly if faced with an adversary that can take advantage of boundary points by selecting instances with arbitrary precision. However, it may be able to perform well if its adversary doesn't have unbounded power to find these hard points near the boundary. In this paper, we make this intuition precise through the smoothed analysis of nearest neighbors.
§.§ Smoothed analysis of online learning
While the nearest neighbor algorithm does not perform well in all worlds, we might reasonably expect to not live in the worst-case world. In that case, the worst-case analysis of nearest neighbor does not necessarily help elucidate the behavior of the algorithm in practice.
This motivates the smoothed analysis of online learning algorithms, in which the adversary does not directly select instances, but rather distributions μ_t from which the instances x_t are then drawn. If the distributions are fixed for all time, we recover the i.i.d. setting. If they may be point masses, we recover the worst-case setting. But somewhere in between, the smoothed online setting might also capture more tractable and realistic learning settings, and has been previously studied by <cit.>.
The following interaction protocol formalizes the <ref> setting:
One common smoothed setting is the Gaussian perturbation model <cit.>, where the adversary selects μ_t in the form of a Gaussian 𝒩(x̃_t, σ^2 I). Another natural setting is the σ-smoothed adversary model <cit.>, where there is some base distribution ν on the instance space 𝒳, and the adversary is constrained to not boost the probability mass of any region A ⊂𝒳 by more than a multiplicative factor σ^-1, so μ_t(A) ≤σ^-1·ν(A).
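To illustrate, the following self-contained simulation pits the 1-nearest-neighbor rule against one admissible σ-smoothed adversary for the sign function, with base measure ν = Uniform[-1, 1]. The choice μ_t = Uniform[-σ, σ], the most boundary-concentrated σ-smoothed distribution in this case, is our construction for illustration, not taken from the text.

```python
# 1-NN vs. a sigma-smoothed adversary concentrating near the boundary of
# sign(x). The empirical mistake rate still vanishes as T grows.
import bisect
import random

def mistake_rate(T: int = 20000, sigma: float = 0.05, seed: int = 0) -> float:
    rng = random.Random(seed)
    pts: list = []                        # sorted list of (x, label) pairs
    mistakes = 0
    for _ in range(T):
        x = rng.uniform(-sigma, sigma)    # smoothed adversarial instance
        y = 1 if x >= 0 else 0            # realizable labels: y_t = sign(x_t)
        if pts:
            i = bisect.bisect_left(pts, (x, -1))
            neighbors = pts[max(0, i - 1):i + 1]   # nearest is adjacent
            _, y_hat = min(neighbors, key=lambda p: abs(p[0] - x))
            mistakes += (y_hat != y)
        bisect.insort(pts, (x, y))
    return mistakes / T

print(mistake_rate(T=2000), mistake_rate(T=20000))  # rate shrinks with T
```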
We distill the key property of these smoothed adversaries through the notion of a dominated adversary on a measure space (𝒳, ν). The dominated adversary is simply one that cannot place a constant probability mass μ(A) on region A ⊂𝒳 with arbitrarily small ν-mass. We define:
Let (𝒳, ν) be a measure space. The measure ν uniformly dominates a family ℳ of probability distributions on 𝒳 if for all ϵ > 0 there exists δ > 0 such that:
ν(A) < δ ⟹ μ(A) < ϵ,
for all A ⊂𝒳 measurable and distribution μ∈ℳ. A <ref> adversary is ν-dominated if at all times t it selects μ_t from a family of distributions uniformly dominated by ν.
To see why this is helpful, let's say that A_t ⊂𝒳 is the set of points on which the learner makes mistakes at time t. For a learner's error rate to converge to zero against a dominated adversary, it suffices to prove that the sequence ν(A_t) converges to zero: the probability that the dominated adversary induces a mistake μ_t(A_t) must also converge to zero since the μ_t's are uniformly dominated by ν. Convergence of the average loss then follows from the law of large numbers for martingales.
Of course, this captures only a narrow set of scenarios where learning succeeds—in general, the convergence of the mistake region to a null set is a much stronger condition than the convergence of the mistake rate to zero. For example, if the adversary never tests on some region of the space, the average loss could still converge to zero even though the size of the mistake region might not. Instead, we shall argue that under mild boundary conditions, all but finitely many mistakes that a nearest neighbor learner makes must come from a very small set of `hard points' (small with respect to ν). But as the adversary is ν-dominated, those instances can come only very infrequently.
§.§ Main results
Let (𝒳, ρ, ν) be a metric measure space. Assume that ρ is a separable metric and ν is a finite Borel measure. We prove that under mild boundary conditions, the <ref> rule achieves sublinear regret in the <ref> setting against ν-dominated adversaries.
To state the boundary condition, let's formalize the notion of boundary points. Given a concept c : 𝒳→𝒴, define the margin m_c(x) of a point x ∈𝒳 as its distance to points of different classes:
m_c(x) := inf_c(x) ≠ c(x') ρ(x, x').
We say that x is a boundary point of c if m_c(x) = 0, which is to say that it is arbitrarily close to points of other classes. Denote the set of boundary points by ∂𝒳. The condition we require is this:
[Boundary condition]
The set of boundary points ∂𝒳 is essentially countable. That is, it is the union of a countable set and a ν-measure zero set.
This boundary condition is the same condition required by <cit.> to prove the consistency of 1-<ref> in the i.i.d. setting. We can now state our main result:
Let (𝒳, ρ, ν) be a metric measure space, where ρ is a separable metric and ν is a finite Borel measure. Let c : 𝒳→𝒴 satisfy Assumption <ref>. Then, the <ref> rule achieves sublinear regret when learning c against a ν-dominated adversary. In particular, the average loss converges to zero:
lim_T→∞ 1/T∑_t=1^T ℓ(x_t, y_t, ŷ_t) = 0 a.s.
To show this, we prove a general condition in <Ref> under which online learning is possible against a dominated adversary. <Ref> shows that <ref> satisfies this condition. We also derive rates of convergence in <Ref>. Here is a simple instantiation of more general rates:
Let 𝒳⊂ℝ^d be the unit ball. Assume d > 1. Let the set of boundary points of c : 𝒳→𝒴 have finite Minkowski content with respect to the Lesbegue measure and let the adversary be σ-smoothed. Let p > 0. With probability at least 1 - p, the <ref> rule satisfies the error rate bound simultaneously for all time T:
1/T∑_t=1^T ℓ(x_t,y_t, ŷ_t) ≤(T/σ)^(-1 + o(1))/(d+1).
§.§ Related works
The 1-nearest neighbor rule <cit.> was shown by <cit.> to be consistent when the instances come i.i.d. under <Ref>. On the other hand, in the online learning setting where the sequence of instances can be arbitrary <cit.>, there is no learning algorithm that can achieve sublinear regret in the worst-case even in the case of learning a threshold function. However, worst-case analyses of algorithms can fail to explain the observed behavior of algorithms, especially if hard instances are extremely rare in practice <cit.>. This motivates the smoothed analysis of algorithms, first introduced by <cit.>. The setting of smoothed online learning was first studied by <cit.>, and has recently been followed up by a series of work <cit.>. Our work fills in the gap between the i.i.d. and worst-case analysis of nearest neighbor, while also giving the first convergence result in smoothed non-parametric online learning. <Ref> further expands on related works.
§ PRELIMINARIES
In <ref>, the learner incrementally updates its prediction rule as it receives more data. It does so according to a prediction strategy 𝒜, which constructs each subsequent hypothesis h_t+1: 𝒳→𝒴 based on previously seen data:
𝒜 : {(x_τ, y_τ)}_τ=1^t ↦ h_t+1.
Suppose that c is the underlying concept to be learned. Then, every hypothesis h : 𝒳→𝒴 induces an error function ℰ: 𝒳→ℝ, which is the loss that h achieves at any particular instance x,
ℰ(x) := ℓ(x, c(x), h(x))
When the prediction strategy 𝒜 and concept c are clear from context, it will be fruitful to let ℰ_t be the error function associated with the hypothesis h_t generated by 𝒜. Rewriting <Ref>, we say that the strategy 𝒜 learns if it achieves a vanishing error rate:
average loss_T = 1/T∑_t=1^T ℰ_t(x_t) → 0.
§.§ Online local consistency
We introduce the online local consistency (OLC) condition for learning against dominated adversaries. This is a condition that depends on both the learning algorithm and the concept to be learned.
For intuition, let 𝒳 be composed of (countably many) known clusters, and suppose that we are guaranteed that points in the same cluster have the same label. A natural learning algorithm is to remember a single label from each cluster, and to return that label if a point from the same cluster is queried. In this setting, the learner makes at most one mistake per cluster. If ν is a finite measure over 𝒳, then over time, a ν-dominated adversary will find it increasingly harder to pick points from previously unseen clusters; the mistake rate will eventually converge to zero.
We generalize these easily-learned clusters through the notion of locally-learned sets for a learner. In the following, if U ⊂𝒳 is a locally-learned set, we can think of the online learning problem restricted to U as easy for the learner: no matter what sequence of points an adversary chooses, the learner will eventually incur arbitrarily small loss from U.
Let c : 𝒳→𝒴 be a concept. We say that c is locally learned on a subset U ⊂𝒳 by the prediction strategy 𝒜 when, for any sequence of instances (x_t)_t, either:
(i) x_t falls into U finitely often, or
(ii) the error function restricted to U converges uniformly to zero: ℰ_t|_U → 0.
In this case, we say that U is a locally-learned set for c.
For example, singleton sets are locally-learned by consistent learners, which are learners that exactly interpolate past data. But in general, if 𝒳 is uncountable, this family of locally-learned sets is too granular to work with, as the family also becomes uncountably large. The OLC condition ensures that there is a way to cut up the problem into a countable collection of `easy' problems.
A prediction strategy 𝒜 is online locally consistent (OLC) for a concept c if there exists a countable collection 𝒰_c := {U_n}_n of locally learned sets for c that covers all but a ν-negligible subset of 𝒳.
The argument for why an OLC learner can perform well against a dominated adversary is not unlike the earlier example of learning labels for pure clusters. We can restrict the learning problem to a finite collection of locally-learned sets that covers all but a small part of 𝒳. Because the part of 𝒳 we covered consists only of finitely many easy learning problems, the learner's error rate will eventually converge to zero here. The uncovered portion of 𝒳 can be made sufficiently small so that its contribution to the error rate is made arbitrarily small—the adversary cannot test the learner with instances from this region very frequently because it is ν-dominated.
§.§ Mutually-labeling sets
For the analysis of nearest neighbor, we introduce the notion of a mutually-labeling set. It is a set defined so that, upon receiving a label for any point within the set, the nearest neighbor learner will never make a subsequent mistake on any other point in that set (see <Ref>).
A set U ⊂𝒳 is a mutually-labeling set for a concept c if:
ρ(x,x') < m_c(x), ∀ x, x' ∈ U.
Naturally, mutually-labeling sets are locally learned (<Ref>). The proof of convergence for OLC learners using locally-learned sets generalizes the following proof sketch for <ref>:
Proof Sketch of <Ref> For simplicity, let's assume a stronger boundary condition: the set of boundary points of c has ν-measure zero. It turns out that if x is not a boundary point, then sufficiently small open balls centered at x are mutually-labeling sets (see Lemma <ref>). Thus, 𝒳 is covered almost everywhere by open mutually-labeling sets. By separability of ρ and finiteness of ν, all but an arbitrarily small region of 𝒳 can be covered by a finite number of such sets.
Because the <ref> learner makes at most one mistake on each mutually-labeling set, eventually all mistakes must come from the uncovered hard region. The average rate at which a ν-dominated adversary can test the learner with these hard instances can almost surely be bounded above by any ϵ > 0, by selecting a sufficiently small hard region for our analysis. Thus, the average loss converges to zero almost surely, by the law of large numbers for martingales. ▪
§ CONVERGENCE OF OLC LEARNERS
Given an <ref> problem on the measure space (𝒳, ν) where ν is a finite measure. Suppose the learner is online locally consistent with respect to c and that the adversary is ν-dominated. Then, the learner's error rate converges:
lim_T→∞ 1/T∑_t=1^T ℓ(x_t, y_t, ŷ_t) = 0 a.s.
Before commencing the proof, recall that ℰ_t(x_t) is the error incurred by the learner at time t. Given error function ℰ and test distribution μ, let's also define the notation ℰ(μ) to be the expected error,
ℰ(μ) := _x ∼μ[ℰ(x)].
If A ⊂𝒳 is measurable, let ℰ_A denote the pointwise product of ℰ and the indicator on A.
Proof of <Ref>
We show that for any ϵ > 0, the following error rate bound holds:
lim_T →∞ 1/T∑_t=1^T ℰ_t(x_t) < 2ϵ a.s.
If so, then this statement holds simultaneously for any countable sequence of ϵ converging to zero, implying that the error rate converges to zero almost surely.
To prove <Ref>, fix ϵ > 0. Because the loss function is bounded above, say by C > 0, we have for any error function ℰ and any measurable A ⊂𝒳,
(ℰ_A)(μ) ≤ (C 𝟙_A)(μ) = C · μ(A).
The right-hand side can be bounded in terms of ν(A) whenever μ is chosen by a ν-dominated adversary. In particular, we may select δ > 0 such that:
ν(A) < δ ⟹ (ℰ_A)(μ) < ϵ.
Let us do so: any region whose ν-mass is less than δ contributes no more than ϵ to the error rate.
We claim that there exists a subset V ⊂𝒳 with the properties that (a) there exists a random time T_ϵ such that the learner incurs less than ϵ error for any further instance x_t that lands in V,
ℰ_t(x_t) < ϵ, ∀ t > T_ϵ with x_t ∈ V,
and that (b) V covers all but a δ-mass of 𝒳, so that ν(V^c) < δ. Assume this for now—we decompose ℰ_t into its pieces on V and V^c, with ℰ_t = ℰ_t 𝟙_V + ℰ_t 𝟙_V^c. We have:
- By property (a) of V, the sequence (ℰ_t 𝟙_V)(x_t) eventually remains less than ϵ, in particular when t > T_ϵ. Because T_ϵ is almost surely finite, we have that:
lim_T→∞ 1/T∑_t=1^T (ℰ_t 𝟙_V)(x_t) < ϵ.
- By property (b) of V, the mass of V^c is less than δ. <Ref> implies:
(ℰ_t 𝟙_V^c)(μ_t) < ϵ.
By the law of large numbers for martingales (<Ref>), this implies that almost surely:
lim_T→∞ 1/T∑_t=1^T (ℰ_t 𝟙_V^c)(x_t) = lim_T→∞ 1/T∑_t=1^T (ℰ_t 𝟙_V^c)(μ_t) < ϵ.
Because the loss function is bounded, the error rates within the limits in <Ref> are also bounded. Thus, we can sum the two equations and apply dominated convergence, interchanging limits and sum, to yield <Ref>.
To finish the proof, we show that V exists. The learner is OLC, so there is a countable cover {U_n}_n ∈ℕ of locally learned sets for 𝒳 almost everywhere. Let V satisfying ν(V^c) < δ be chosen as a finite union:
V := ⋃_n=1^N U_n.
Such an N < ∞ exists by the continuity of measure, since ⋃_n=1^∞ U_n is essentially all of 𝒳.
By now, we have constructed V in such a way that property (b) holds. To show property (a), we use the fact that each U_n is locally learned: either (i) (x_t)_t eventually never returns to U_n, which is to say that 𝟙{x_t ∈ U_n} converges to zero over time, or (ii) for sufficiently large t, ℰ_t|_U_n < ϵ. Thus, almost surely, there exists some T_n such that for all t > T_n,
ℰ_t(x_t) · 𝟙{x_t ∈ U_n} < ϵ.
Property (a) follows by defining T_ϵ := max{T_1,…, T_N}.
▪
§ NEAREST NEIGHBOR IS AN OLC LEARNER
Let (𝒳, ρ, ν) be a metric measure space, where ρ is a separable metric and ν is a finite Borel measure. If c is a concept whose boundary points satisfy Assumption <ref>, then <ref> is OLC with respect to c.
To show that <ref> is OLC, we need to prove that any concept c with essentially countable boundary also has a countable family of locally-learned sets.
We define two types of locally-learned sets for <ref>: singleton sets for the boundary points and mutually-labeling sets for everything else. Recall that mutually-labeling sets U satisfy:
ρ(x,x') < m_c(x), ∀ x, x' ∈ U,
where m_c(x) is the margin between x and the boundary of c. Note that all points in U share the same label. If this weren't the case, then there would exist x, x' ∈ U with different labels such that:
ρ(x,x') < m_c(x) = inf_c(x) ≠ c(x̃) ρ(x, x̃) ≤ ρ(x,x'),
a contradiction. The following lemma further shows that these are locally learned sets:
Consider learning the concept c via the <ref> rule. If U is a mutually-labeling set for c and x_t ∈ U, then for all time τ > t, the predictor h_τ is correct on all of U. Thus, U is locally learned.
Let x ∈ U so that c(x) = c(x_t). When τ > t, the nearest neighbor classifier errs on x only if the closest point to x among x_1,…, x_τ is of the opposite class. But this is impossible since the closest point must be no more than a distance of ρ(x,x_t) and U is mutually labeling.
Sufficiently small balls around any non-boundary point x are mutually-labeling sets.
Let c : 𝒳→𝒴 be a concept, and suppose that x has positive margin m_c(x) > 0. Then, the open ball B(x, m_c(x)/3) is mutually labeling.
Let x_1, x_2 ∈ B(x, m_c(x)/3). By the triangle inequality,
ρ(x_1, x_2) ≤ρ(x_1, x) + ρ(x, x_2) < 2m_c(x) /3.
We also know for i ∈ {1,2} and for all x̃ that ρ(x_i, x̃) ≥ ρ(x, x̃) - ρ(x_i, x), by the reverse triangle inequality. Since c(x_i) = c(x), we take infimums on both sides over x̃ with c(x̃) ≠ c(x), so:
m_c(x_i) = inf_c(x_i) ≠ c(x̃) ρ(x_i, x̃) ≥ inf_c(x) ≠ c(x̃) ρ(x, x̃) - ρ(x_i, x) = m_c(x) - ρ(x_i, x) ≥ 2m_c(x)/3.
This implies that ρ(x_1, x_2) < m_c(x_1), so that B(x, m_c(x)/3) is mutually labeling.
Proof of <Ref>
Given a concept c with essentially countable boundary, we construct a countable cover of 𝒳 except for a ν-measure zero set by locally-learned sets of c.
Let us denote by ∂𝒳 the set of boundary points {x : m_c(x) = 0}. By Lemma <ref>, non-boundary points 𝒳∖∂𝒳 can be covered by the family of open mutually-labeling sets,
{B(x, m_c(x)/3) : x ∈𝒳∖∂𝒳}.
By the separability of 𝒳, there is a countable subcover of 𝒳∖∂𝒳 by mutually-labeling sets. These are locally-learned sets, by Lemma <ref>.
As for the boundary points, the set ∂𝒳 is essentially countable: ∂𝒳 = 𝒩 ∪ 𝒵, where 𝒩 is countable and 𝒵 has ν-measure zero. Then, each {x} for x ∈ 𝒩 is a locally-learned set because nearest neighbor is a consistent learner. Together, these two collections of locally-learned sets form a countable cover of all of 𝒳 except for a measure zero set; thus, the <ref> rule is OLC.
▪
§ RATES OF CONVERGENCE FOR NEAREST NEIGHBOR
Rates of convergence for <ref> arise almost immediately out of the proof technique for asymptotic convergence. Recall that the proof technique consisted of decomposing 𝒳 into V and V^c, where (i) V can be covered by finitely many mutually-labeling sets and (ii) V^c has small ν-mass.
The proof can be adapted to yield rates by quantifying (i) the number of mutually-labeling sets required to cover V, and (ii) the rate at which a ν-dominated adversary can boost the probability of selecting points from V^c. To bound these, we respectively define the following:
Let V⊂𝒳. The mutually-labeling covering number 𝒩_ML(V) given a concept c is the size of a minimal covering of V by mutually-labeling sets.
An adversary has smoothness rate ϵ : ℝ_≥ 0→ [0,1] whenever all distributions μ it can select satisfy:
μ(A) ≤ ϵ(ν(A)), ∀ A ⊂ 𝒳 measurable.
An adversary is ν-dominated if lim_δ→ 0 ϵ(δ) = 0. It is σ-smooth if ϵ is further 1/σ-Lipschitz.
For simplicity, let us assume that the boundary ∂𝒳 has ν-measure zero. Then, the following mistake rate is obtained by separately counting mistakes on V and V^c:
𝔼[#mistakes by time T] ≤ min{T, inf_V ⊂ 𝒳 𝒩_ML(V) + Tϵ(ν(V^c))}.
By a standard application of Azuma-Hoeffding's, we can convert this into a high-probability bound:
Let (𝒳, ρ, ν) be a metric measure space with separable metric ρ and finite Borel measure ν. Let c be a concept with measure zero boundary. Let the ν-dominated adversary have smoothness rate ϵ. Fix p > 0. Then, with probability at least 1 - p, the following mistake bound holds for <ref> simultaneously for all T ∈ℕ :
#mistakes_T ≤min{T , inf_V ⊂𝒳 𝒩_ML(V) + T ϵ(ν(V^c)) + √(2T log2T/p)}.
§.§ Convergence rate for length metric spaces
In this section, we instantiate the convergence rate when 𝒳 is a length metric space. The appealing property of length spaces is that the margin of a point x is simply its distance to boundary points:
Let (𝒳,ρ) be a length space. Let c be a classifier. Then,
m_c(x) = ρ(x, ∂𝒳).
In this case, it is natural to restrict V ⊂𝒳 in <Ref> to the sets of the form:
V_r := {x ∈𝒳 : m_c(x) ≥ r}.
These are the set of points whose margin is at least r. Then, we need to control the mutual-labeling covering number of V_r and the ν-masses of V_r^c. When 𝒳 is a length space, these can be bounded in terms of the geometry of the boundary ∂𝒳. The reason is that in length spaces, points with small margins are also close to boundary points: here, V_r^c precisely coincides with the r-expansion ∂𝒳^r of the boundary. And when 𝒳 is a doubling space, we can quantify the bounds in terms of the box-counting dimension d(∂𝒳) and the Minkowski content 𝔪(∂𝒳) of the boundary.
In particular, <Ref> shows that for small r,
𝒩_ML(V_r) ≲ r^-d and ν(V_r^c) ≲𝔪· r,
where the hand-waving inequality can be made rigorous by replacing d = d + o(1) and 𝔪 = 𝔪 + o(1). For example, this yields convergence rates of <ref> against σ-smoothed adversaries, by plugging <Ref> into <Ref>. After optimizing r, we obtain the following result:
#mistakes_T ≲(𝔪T/σ)^d/(d + 1).
Let (𝒳, ρ, ν) be a bounded length space with finite doubling dimension and Borel measure. Suppose the concept c satisfies ν(∂𝒳) = 0. Let the adversary be σ-smooth for σ > 0. Denote the box-counting dimension and Minkowski content of ∂𝒳 by d := d(∂𝒳) and 𝔪 := 𝔪(∂𝒳) respectively. Assume d > 1.
The following holds for <ref>: given c_1, c_2, p > 0, there exist constants C_0, C_1 > 0 such that with probability at least 1 - p, the mistake bound holds simultaneously for all T:
#mistakes_T ≤ C_0 + C_1 ((𝔪 + c_2)T/σ)^(d + c_1) / (d + 1).
See <Ref> for proofs.
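The optimization of r behind this corollary can be sketched numerically. The constants C and 𝔪 below are placeholders, and the closed-form minimizer follows from balancing the two terms of the mistake bound under the simplified estimates 𝒩_ML(V_r) ≈ C r^-d and ν(V_r^c) ≈ 𝔪 r, with ϵ(δ) = δ/σ for a σ-smooth adversary.

```python
# Sketch of the r-optimization: minimize C * r**(-d) + (T * m / sigma) * r.
def mistake_bound(T: float, sigma: float, d: float,
                  m: float = 1.0, C: float = 1.0) -> float:
    # Setting the derivative to zero gives the optimal radius
    # r* = (C * d * sigma / (T * m)) ** (1 / (d + 1)).
    r = (C * d * sigma / (T * m)) ** (1.0 / (d + 1))
    return C * r ** (-d) + T * m * r / sigma  # ~ (m * T / sigma)**(d/(d+1))

for T in (1e3, 1e5, 1e7):
    print(int(T), round(mistake_bound(T, sigma=0.1, d=2.0), 1))
```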
§ RELATED WORK
Non-parametric online learning We consider non-parametric online classification in the realizable setting with bounded loss. Without further conditions imposed on the problem, existing work shows that online learning as a rule is not possible in this setting. Consider the setting with an unrestricted adversary and the zero-one loss ℓ(x,y,ŷ) = 𝟙{y ≠ ŷ}. For binary classification in this case, <cit.> has characterized online learnability of a concept class 𝒞 by the non-existence of infinite Littlestone trees associated to 𝒞, a weaker condition than that of having finite Littlestone dimension <cit.>. However, as any reasonably non-parametric setting will have infinite Littlestone trees, there is not much more to be said about online non-parametric classification with the worst-case adversary under the zero-one loss.
And so, because of the difficulty of online non-parametric learning, conditions are often imposed that (i) restrict the concept class, (ii) relax the notion of regret, or (iii) constrain the adversary.
In the first instance, the difficulty of making inferences can be reduced by imposing regularity conditions such as Lipschitzness or smoothness on the underlying concept class. This is especially natural in the regression setting where the label space 𝒴 is continuous. For example, <cit.> consider the noisy setting where the label y associated to an instance x is drawn from the conditional distribution P_Y|X=x, where the conditional mean 𝔼[Y|X = x] is Lipschitz continuous in x. In realizable classification, this constraint guarantees that points of different classes have positive separation.
In the second instance, the notion of what it means to learn online can be relaxed by changing the definition of regret. For example, much of the existing work in online non-parametric learning assumes universal Lipschitz or Hölder constants constraining the family of comparator functions ℋ, while also considering a convex or Lipschitz loss function <cit.>. We make no such assumptions in this work.
In the last instance, the hardness of the online sequence of points is limited, as in the smoothed online setting of <cit.>. In particular, <cit.> show that any concept class is online learnable against smoothed adversaries if it has finite VC dimension <cit.>. But because any reasonably non-parametric setting will also have infinite VC dimension, it was an open question whether learning is possible in the non-parametric setting under the smoothed online setting. We demonstrate that the nearest neighbor is indeed able to learn in this setting, while generalizing the notion of the smoothed adversary studied by <cit.>.
Nearest neighbor methods The 1-nearest neighbor rule was initially introduced and studied by <cit.>. <cit.> showed that when the sequence (x_t)_t is drawn i.i.d. from some data distribution ν over 𝒳, nearest neighbor is consistent under the same boundary conditions as our Assumption <ref>. There is much work extending the algorithm to other nearest neighbor methods and analyses in the i.i.d. setting <cit.>. See also survey work <cit.> and references therein.
There has been limited work on nearest neighbor methods in the non-i.i.d. setting. As noted above, <cit.> studied the online learning setting where the sequence of instances can be arbitrary, but with the Lipschitz constraint on the underlying regression function. While not in the online setting, both <cit.> and <cit.> considered the consistency of nearest neighbor classifiers where the training and test data distributions differ. In particular, <cit.> studied nearest neighbor under selective sampling. Here, instances are drawn i.i.d. but only some of the labels are selectively revealed. And <cit.> studied the covariate-shift transfer learning setting where the train ν and test μ distributions are related by μ(A) < Cν(A) when A comes from some family of measurable sets 𝒜.
§ LEARNING WITH SEPARATION GUARANTEE IN THE WORST-CASE
Proof of Proposition <ref>
Suppose that there is a positive separation between classes, so that the margin m_c(x) is lower bounded by some m> 0 for all x ∈𝒳. By Lemma <ref>, the collection of open balls B(x, m/3) for all x ∈𝒳 forms a cover of 𝒳 by mutually labeling sets. Because 𝒳 is totally bounded, there is a finite subcover of 𝒳 by these mutually labeling balls. As each of these sets admits at most one mistake by the <ref> learner, it makes at most finitely many mistakes, achieving sublinear regret.
On the other hand, suppose that there is no positive separation between classes. Then, we can find a sequence of pairs (x_2t - 1, x_2t) such that:
* x_2t-1 is the nearest neighbor of x_2t out of all previous instances x_1,…, x_2t-1, and
* x_2t-1 and x_2t are of different classes, c(x_2t-1) ≠ c(x_2t).
Thus, <ref> makes a mistake ŷ_2t = c(x_2t-1) at every even-numbered time, and so:
lim inf_T →∞ 1/T∑_t=1^T 𝟙{y_t ≠ ŷ_t} ≥ 1/2.
That is, it fails to achieve sublinear regret.▪
§ THE LAW OF LARGE NUMBERS FOR MARTINGALES
For completeness, we include a version of the strong law of large numbers (SLLN).
Let (M_t)_t ≥ 0 be a martingale and let ξ_t = M_t - M_t-1 for t > 0. If Eξ_t^2 < K < ∞, then:
M_t / t → 0 a.s.
For example, we can use it to formally prove our remark right after Definition <ref>, reproduced below. Of course, the remark requires too stringent a condition to be useful. But it is a good demonstration of how to formally define the martingale on which <Ref> can be applied.
Let A_t be the mistake set of an online learner at time t against a ν-dominated adversary. Suppose that (A_t)_t=1^∞ converges to a ν-measure zero set almost surely. Then, the mistake rate converges to zero almost surely as well:
lim_T →∞ 1/T∑_t=1^T 𝟙{y_t ≠ ŷ_t} = 0.
Define {ℱ_t}_t to be the natural filtration for the stochastic process {(x_t, A_t+1, μ_t+1)}_t. The following is a martingale difference sequence:
ξ_t = 𝟙{x_t ∈ A_t} - E[𝟙{x_t ∈ A_t} | ℱ_t-1] = 𝟙{y_t ≠ ŷ_t} - μ_t(A_t).
Since Eξ_t^2 ≤ 1, we can apply <Ref>, which implies:
lim_T→∞ 1/T∑_t=1^T (𝟙{y_t ≠ ŷ_t} - μ_t(A_t)) = 0.
Notice that because the adversary is ν-dominated, the almost-sure convergence of ν(A_t) to zero implies that of μ_t(A_t) to zero. Thus, the time-averaged expected mistake rate also goes to zero:
lim_T→∞ 1/T∑_t=1^T μ_t(A_t) = 0.
By dominated convergence, we can sum the previous two equations, proving this remark.
§ ON THE BOUNDARY CONDITION
Assumption <ref> requires that the boundary points to be essentially countable. While this condition does not come for free, counterexamples tend to be fairly pathological. An example where this fails is when the concept is the indicator on the fat Cantor set.
Recall that the fat Cantor set is obtained as the limit of subsets of the unit interval. Each subset looks like a finite union of closed intervals, and at each iteration n, the middle 2^-(n+1)-fraction of each interval is removed, leaving behind two smaller closed intervals. The limit is a set with Lebesgue measure 1/2 with no interior; each point in the fat Cantor set is a boundary point. However, all countable sets have Lebesgue measure zero, so the fat Cantor set is not essentially countable.
The Osgood curve gives another counterexample. Recall that a Jordan curve is a closed curve in ℝ^2 that is homeomorphic to the circle, splitting the plane into interior and exterior regions. An Osgood curve is a Jordan curve whose image, the boundary between these two regions, has positive measure <cit.>.
§ PROOFS FOR RATES OF CONVERGENCE
In this section, we provide the background and proofs for <Ref>.
§.§ Analysis on length spaces
Recall that length spaces are spaces where distances between points are given by the infimum of lengths over continuous paths between those points. For reference, see also <cit.>.
A metric space (𝒳,ρ) is a length space if for all x, x' ∈𝒳,
ρ(x,x') = inf_γ ℓ(γ),
where γ : [0,1] → 𝒳 ranges over all continuous paths from x to x' and ℓ(γ) is the length of the path γ.
Proof of <Ref> To show that m_c(x) = ρ(x, ∂𝒳), we prove left and right inequalities.
First, the margin is upper bounded by m_c(x) ≤ρ(x, ∂𝒳). To see this, fix δ > 0. By the definition of the distance between x and the set ∂𝒳, there is a boundary point z ∈∂𝒳 such that:
ρ(x, z) < ρ(x, ∂𝒳) + δ/2.
And as boundary points are arbitrarily close to at least two classes, there exists x' ∈𝒳 close to z:
ρ(z, x') < δ/2,
while also belonging to a different class than x. By the definition of m_c(x) and by triangle inequality, we obtain that for all δ > 0, there exists some x' satisfying:
m_c(x) ≤ρ(x,x') < ρ(x, ∂𝒳) + δ.
Letting δ go to zero yields the first inequality.
For the other, we claim that if γ : [0,1] →𝒳 is a continuous path from x to x' with c(x) c(x'), then there exists a point γ(t) contained in ∂𝒳. If the claim is true, then the other inequality holds:
ρ(x, ∂𝒳) (i)≤ inf_c(x) ≠ c(x') inf_γ ℓ(γ) (ii)= inf_c(x) ≠ c(x') ρ(x,x') (iii)= m_c(x),
where (i) the infimum above is taken over all continuous paths γ from x to x', (ii) applies the definition of a length space, and (iii) applies the definition of the margin.
To prove the claim, let t be the first time a point on the path has a different label than x. Formally,
t := inf{ s ∈ [0,1] : c(γ(s)) ≠ c(x) }.
To show that γ(t) ∈∂𝒳, we need to exhibit a point γ(s) that is δ-close to γ(t) with a different label, given any δ > 0. Indeed, such a s exists by the definition of t and the continuity of γ.
§.§ Analysis on metric measure spaces
To obtain bounds on the mutually-labeling covering number 𝒩_ML(V_r), we need to introduce the notion of the box-counting dimension of a set A ⊂ 𝒳 and the doubling dimension of a metric space 𝒳. Let us first recall the following definitions and results from analysis and measure theory.
Given r > 0 and A ⊂ 𝒳, the r-covering number 𝒩_r(A) of A is the size of a minimal covering of A by balls with radius r.
The (upper) box-counting dimension of A ⊂𝒳 is:
d(A) := lim sup_r → 0 log𝒩_r(A)/log 1/r.
The box-counting dimension implies a bound on the covering number 𝒩_r(A) of r^-d(A) + o(1). The following lemma is a straightforward conversion of the asymptotic limit into a quantitative bound.
Let 𝒳 be bounded with diameter R. Let A ⊂𝒳 have box-counting dimension d(A). Then, for all c > 0, there exists a constant C > 0 such that:
𝒩_r(A) < Cr^-(d(A) + c).
Fix c > 0. By the definition of d(A), there exists r_0 > 0 such that whenever 0 < r < r_0,
log𝒩_r(A)/log 1/r < d(A) + c.
Because 𝒩_r(A) is non-increasing in r, we can extend the bound to all 0 < r < R,
log𝒩_r (A)/log 1/(r ∧ r_0) < d(A) + c,
where r ∧ r_0 := min{r, r_0}. In fact, we have min{r, r_0} > r · r_0 / R, and so:
𝒩_r(A) < (r_0/R· r)^-(d(A) + c).
To finish the proof, it suffices to let C = (r_0/R)^-(d(A) + c).
A metric space (𝒳, ρ) has doubling dimension Γ if there is a constant C > 0 such that for all radii r > 0 and centers x ∈ 𝒳, the covering number is bounded:
𝒩_r/2(B(x,r)) ≤ C 2^Γ.
We say that 𝒳 is doubling if it has finite doubling dimension Γ < ∞.
To obtain bounds on the mass ν(V_r^c), we need to introduce the Minkowski content of a set A ⊂𝒳. First, recall that the r-expansion of a set A fattens the set to all points of distance within r of A:
Let A ⊂𝒳 be a set and r > 0. The r-expansion A^r of A is:
A^r := ⋃_x ∈ A B(x,r).
The Minkowski content of A is the rate at which an infinitesimal fattening of A increases its mass:
Let (upper) Minkowski content of A ⊂𝒳 is:
𝔪(A) := lim sup_r → 0ν(A^r) - ν(A)/r.
The following lemma bounding the covering number of the r-expansion of a set in terms of the doubling dimension will also be helpful:
Let (𝒳,ρ) have finite doubling dimension Γ. There exists a constant C > 0 such that for all A ⊂𝒳, we have:
𝒩_r(A^r) ≤ C2^Γ𝒩_r(A).
Let A be covered by the balls B(x_1,r),…, B(x_n, r) where n = 𝒩_r(A). Then, by the triangle inequality, the r-expansion A^r is covered by the r-expanded balls B(x_1,2r),…, B(x_n, 2r). Now, by the definition of the doubling dimension, each expanded ball B(x_i, 2r) can be covered by C2^Γ balls with radius r. It follows that covering A^r needs at most C2^Γ n balls with radius r.
§.§ Bounding geometric quantities of ∂𝒳
Let (𝒳, ρ, ν) be a bounded length space with finite doubling dimension and Borel measure. Suppose the concept c satisfies ν(∂𝒳) = 0. Then, for any c_1, c_2 > 0, there is a constant C > 0 and r_0 > 0 so that for all 0 < r < r_0,
𝒩_ML(V_r) ≤ C r^- (d(∂𝒳) + c_1) and ν(V_r^c) ≤(𝔪(∂𝒳) + c_2) · r.
Proof of <Ref>
Recall that V_r and ∂𝒳 are defined in terms of the margin:
V_r := {x ∈𝒳 : m_c(x) ≥ r} and ∂𝒳 = {x ∈𝒳 : m_c(x) = 0}.
While the complement V_r^c always contains the expansion ∂𝒳^r, generally V_r^c can be much larger. But when 𝒳 is a length space, equality holds:
Let (𝒳, ρ) be a length space. Then, for all r > 0:
V_r^c = ∂𝒳^r.
<Ref> shows that when 𝒳 is a length space, m_c(x) = ρ(x, ∂𝒳). Thus:
x ∈ V_r^c ⟺ m_c(x) < r ⟺ ρ(x, ∂𝒳) < r ⟺ x ∈∂𝒳^r.
Now, the question of bounding 𝒩_ML(V_r) and ν(V_r^c) becomes that of 𝒩_ML(𝒳∖∂𝒳^r) and ν(∂𝒳^r).
Let 𝒳 be a bounded length space with finite doubling dimension Γ and diameter R. Given a concept c, let d be the box-counting dimension of ∂𝒳. Then, for any c > 0, there exists a constant C > 0 such that for all r > 0:
𝒩_ML(𝒳∖∂𝒳^r) ≤ CR^4Γ r^-(d + c).
We can write 𝒳∖∂𝒳^r as a union of layers of the form L_k := ∂𝒳^2^k+1r∖∂𝒳^2^kr,
𝒳∖∂𝒳^r = ⋃_k=0^⌈ R/r⌉ L_k.
Then, we can upper bound the mutually-labeling covering number by the sum:
𝒩_ML(𝒳∖∂𝒳^r) ≤∑_k=0^⌈ R/r⌉𝒩_ML(L_k).
To upper bound 𝒩_ML(L_k), first note that by <Ref>,
L_k ⊂𝒳∖∂𝒳^2^k r = V_2^k r.
Thus, the margin of any point x ∈ L_k is at least 2^k r. By <Ref>, the ball B(x, 2^kr/3) is a mutually-labeling set, so that 𝒩_ML(L_k) ≤𝒩_2^k r/3(L_k). In fact, we obtain the following:
𝒩_ML(L_k) ≤𝒩_2^k r/3(L_k) (i)≤𝒩_2^k-2 r(L_k)
(ii)≤𝒩_2^k-2 r(∂𝒳^2^k+1r)
(iii)≤ C_1 2^3Γ𝒩_2^k+1 r(∂𝒳^2^k+1r)
(iv)≤ C_2 2^4Γ𝒩_2^k+1r(∂𝒳)
(v)≤ C_3 2^4Γ (2^k+1r)^-(d+c)
where (i) holds because the radius 2^k-2r is less than 2^k r/3, (ii) follows because ∂𝒳^2^k+1r contains L_k and so has larger covering number, (iii) makes use of the definition of the doubling dimension three times to convert the 2^k-2r-covering number to a 2^k+1r-covering number, (iv) applies <Ref> to convert the covering number of the expansion to that of the boundary set, and (v) upper bounds the covering number in terms of the box-dimension of ∂𝒳 by <Ref>.
By combining <Ref>, we obtain:
𝒩_ML(𝒳∖∂𝒳^r) ≤ C_3 2^4Γ r^-(d + c)∑_k = 0^∞ 2^- (d+c)(k+1),
where the geometric series converges to the constant 2^-(d+c)/(1 - 2^-(d+c)). We finish by relabeling the constants.
This shows that 𝒩_ML(V_r) = r^-(d(∂𝒳) + o(1)). Next we show that ν(V_r^c) = (𝔪(∂𝒳) + o(1) ) · r, which is immediate from the definition of the Minkowski content 𝔪(∂𝒳).
Let (𝒳, ρ, ν) be a metric Borel space. Suppose c is a concept satisfying ν(∂𝒳) = 0 whose boundary ∂𝒳 has Minkowski content 𝔪. Then, for any c > 0, there exists some r_0 such that for all 0 < r < r_0:
ν(∂𝒳^r) < (𝔪 + c) · r.
Since the boundary has measure zero, the definition of Minkowski content states that there exists r_0 > 0 so that for all 0 < r < r_0,
ν(∂𝒳^r)/r < 𝔪(∂𝒳) + c.
The result follows by multiplying through by r.
Together, <Ref> prove <Ref>.▪
§.§ Proofs of convergence rates
Proof of <Ref>
Fix V ⊂𝒳. Let (x_t)_t be the sequence of test instances. Denote by A_t ⊂𝒳 the region on which <ref> makes a mistake at time t. We can count the total number of mistakes separately on V and V^c:
#mistakes_T := ∑_t=1^T {x_t ∈ A_t} = ∑_t=1^T {x_t ∈ A_t ∩ V}_mistakes made in V + ∑_t=1^T {x_t ∈ A_t ∩ V^c}_mistakes made in V^c
Because at most one mistake can be made per mutually-labeling set on V, the first summation can be bounded by 𝒩_ML(V). The second term can be bounded by the number of times x_t comes from V^c:
{x_t ∈ A_t ∩ V^c} ≤{x_t ∈ V^c}
= {x_t ∈ V^c} - μ_t(V^c)_martingale difference + μ_t(V^c).
By the Azuma–Hoeffding inequality, we have that with probability at least 1 - p/2T^2:
∑_t=1^T {x_t ∈ V^c} - μ_t(V^c)_martingale difference≤√(T log2T^2/p)≤√(2 T log2T/p).
Because the adversary is ν-dominated, we also have μ_t(V^c) < ϵ(ν(V^c)). By taking a union bound over all T ∈ℕ, we obtain that with probability at least 1 - p,
#mistakes_T ≤𝒩_ML(V) + T ϵ(ν(V^c)) + √(2T log2T/p).
The result follows from optimizing V, and by noting that at most T mistakes can be made in T rounds.
▪
Proof of <Ref>
Given c_1, c_2 > 0, <Ref> yields C, r_0 > 0 so that when 0 < r < r_0,
𝒩_ML(V_r) ≤ C r^- (d + c_1) and ν(V_r^c) ≤(𝔪 + c_2) · r.
From <Ref>, it follows that with probability at least 1 - p, we have for all T:
#mistakes_T ≤inf_0 < r < r_0 𝒩_ML(V_r) + T ϵ(ν(V_r^c)) + √(2T log2T/p)
≤inf_0 < r < r_0 C r^- (d + c_1) + T σ^-1·(𝔪 + c_2) · r + √(2T log2T/p),
where r is optimized at:
r_T^* = (C (d + c_1) σ/T (𝔪 + c_2))^1 / (d + c_1 + 1),
provided that r_T^* < r_0. This will eventually hold for sufficiently large T > T_0. For T ≤ T_0, we can use the coarser mistake bound T_0. Thus, for all T ∈ℕ:
#mistakes_T ≤ T_0 + C_1 (T(𝔪 + c_2)/σ)^(d + c_1)/(d + c_1 + 1) + √(2T log2T/p),
where C_1 is a constant, defined below.
Because we assumed d > 1, the √(T log T) term is eventually dominated by the T^(d + o(1))/(d + 1) term when T > T_0' is sufficiently large. We obtain the result by setting C_0 as below, and noting that we can simplify the exponent because (d + c_1)/ (d + c_1 + 1) < (d + c_1)/(d + 1).
* C_0 = T_0 + 2√(2 T_0' log2T_0'/p).
* C_1 = 2C(d + c_1).
▪
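As a quick sanity check on the optimization step above, the following sympy sketch (ours, with c_1 absorbed into d and 𝔪 + c_2 absorbed into m for brevity) verifies that the stated r_T^* satisfies the first-order condition for the objective C r^-d + T σ^-1 m r.

import sympy as sp

r, C, T, sigma, m, d = sp.symbols('r C T sigma m d', positive=True)

# Objective from the proof: covering term plus mass term.
f = C * r**(-d) + (T / sigma) * m * r

# The claimed minimizer.
r_star = (C * d * sigma / (T * m)) ** (1 / (d + 1))

# The derivative vanishes at r_star, confirming the first-order condition.
print(sp.simplify(sp.diff(f, r).subs(r, r_star)))  # -> 0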
|
http://arxiv.org/abs/2307.02498v1
|
20230704091838
|
On possible wormhole solutions supported by non-commutative geometry within $f(R, L_m)$ gravity
|
[
"N. S. Kavya",
"V. Venkatesha",
"G. Mustafa",
"P. K. Sahoo"
] |
gr-qc
|
[
"gr-qc"
] |
kavya.samak.10@gmail.com
Department of P.G. Studies and Research in Mathematics,
Kuvempu University, Shankaraghatta, Shivamogga 577451, Karnataka, INDIA
vensmath@gmail.com
Department of P.G. Studies and Research in Mathematics,
Kuvempu University, Shankaraghatta, Shivamogga 577451, Karnataka, INDIA
gmustafa3828@gmail.com
Department of Physics, Zhejiang Normal University, Jinhua, 321004, People's Republic of China.
pksahoo@hyderabad.bits-pilani.ac.in
Department of Mathematics, Birla Institute of Technology and Science-Pilani,
Hyderabad Campus, Hyderabad 500078, INDIA
Non-commutativity is a key feature of spacetime geometry. The current article explores traversable wormhole solutions in the framework of f(ℛ,ℒ_m) gravity within non-commutative geometry. By using the Gaussian and Lorentzian distributions, we construct tideless wormholes for the nonlinear f(ℛ,ℒ_m) model f(ℛ,ℒ_m)=ℛ/2+ℒ_m^α. For both cases, we derive shape functions and discuss the different required properties, which show satisfying behavior. For the required wormhole properties, we develop some new constraints. The influence of the involved model parameter on the energy conditions is analyzed graphically, which provides a discussion about the nature of exotic matter. Further, we check the physical behavior regarding the stability of the wormhole solutions through the TOV equation. An interesting feature regarding the stability of the obtained solutions via the speed of sound parameters within the scope of average pressure is discussed. Finally, we conclude our results.
Keywords
Traversable wormhole, f(ℛ,ℒ_m) gravity, energy conditions, non-commutative geometry,
equilibrium condition.
On possible wormhole solutions supported by non-commutative geometry within f(ℛ,ℒ_m) gravity
P.K. Sahoo0000-0003-2130-8832
August 1, 2023
============================================================================================
§ INTRODUCTION
Wormholes are tube-like structures whose mouths at both ends connect distinct regions positioned in the same universe or in different universes. Flamm first proposed the existence of these hypothetical connections in the universe <cit.>; using an isometric embedding, he probed the Schwarzschild solution of the governing gravity equations. Einstein and Rosen gave the mathematical vision of a bridge-like structure having an event horizon <cit.>. A traversable wormhole solution (a horizon-less scenario) to the classical Einstein field equation was put forth by Morris and Thorne <cit.>. These wormholes violate the energy conditions, specifically the Null Energy Condition (NEC). Therefore, to accomplish the traversability of a wormhole in the realm of GR, a certain type of hypothetical fluid disobeying the NEC is required, as ordinary matter agrees with the known laws of physics. In the framework of GR, a wormhole supported by a non-hypothetical fluid cannot be formulated. Accordingly, numerous endeavors took place to reduce the usage of exotic matter <cit.>. Modified theories, on the other hand, gave a satisfactory solution to the exotic matter problem. Böhmer et al. <cit.> constructed wormhole structures, with specific redshift and shape functions, in modified teleparallel gravity that obey the energy conditions. In the background of f(ℛ) gravity, Lobo and Oliveira <cit.> studied traversable wormhole geometries. They imposed that the matter threading the wormhole satisfies the energy conditions. In <cit.>, the authors formulated wormholes without exotic matter in the Einstein-Gauss-Bonnet theory. Capozziello et al. <cit.> explored various aspects of wormholes in alternative gravity theories. Recently, Capozziello and Godani <cit.> considered non-local gravity in view of obtaining stable and traversable wormhole solutions.
In recent years, the investigation of wormhole geometry has piqued the interest of many astrophysicists. Rahaman et al. <cit.> presented a solution for a wormhole with phantom energy in spherically symmetric spacetime. Their model suggests the presence of a wormhole supported by an arbitrarily small amount of phantom energy. Dynamic thin-shell traversable wormholes have been examined based on the black-bounce spacetimes <cit.>. In <cit.>, Zubair et al. investigated static spherically symmetric wormhole geometry with anisotropic, isotropic, and barotropic matter content. A four-dimensional wormhole solution with 'Casimir-like' energy was put forth by Maldacena and collaborators <cit.>, which does not lead to causality violation in the ambient space. Gravitational lensing by rotating and non-rotating Damour-Solodukhin wormholes has been studied using the Gauss-Bonnet theorem and Bozza's method <cit.>. The effects of repulsive gravity in gravitational contexts have been studied in <cit.>. Numerous endeavors have taken place to probe wormhole spacetime in the context of metric affine theories <cit.>, geometry-matter couplings <cit.>, braneworld <cit.>, and non-commutative geometry <cit.>.
The concept of non-commutative geometry is phenomenal in exploring the features of manifolds in different lights. In the analysis of spacetime structure, non-commutativity can be presented via modified matter sources. Primarily, it unifies the weak and strong forces with the gravitational force. P. Aschieri et al. <cit.> have constructed classical governing equations of gravity on non-commutative geometry. Both the deformation of the spacetime geometry and the quantization can be effectively dealt with through non-commutativity. In D-brane theory <cit.>, the coordinates of spacetime can be treated as non-commutative operators satisfying [y^a,y^b]=𝑖θ^ab. Such operators lead to spacetime discretization, encoded by the second-order antisymmetric matrix θ^ab <cit.>. One can remark that non-commutativity substitutes smeared objects for point-like structures. The smearing phenomenon can be implemented by replacing the Dirac delta function with a Gaussian or a Lorentzian distribution of minimal length √(θ). In <cit.>, Schneider et al. have discussed the Gaussian and Lorentzian distributions via simple smearing of a matter distribution within the black hole. With the Gaussian distribution, Sushkov examined wormholes supported by phantom energy <cit.>. Kuhfittig, in <cit.>, shows that certain thin-shell wormholes that are unstable in GR remain stable as a consequence of non-commutativity. The physical impact of the short separation of non-commutative coordinates can be seen in <cit.>. Rahaman et al. investigated wormhole geometry with the Gaussian distribution and proved the feasibility of solutions in four and five dimensions <cit.>.
For a static spherically symmetric point-like gravitational source having total mass M, the Gaussian and Lorentzian distributions of energy density are given by <cit.>,
ρ= M e^-r^2/4 θ/8 π ^3/2θ ^3/2,
ρ=√(θ) M/π ^2 (θ +r^2)^2.
These choices reflect the notion that the source is spread out or smeared rather than being concentrated at a single point. This is mainly because of the intrinsic uncertainty in the coordinate commutator. Further, the noncommutative correction becomes significant in a region near the origin, specifically when r ≲√(θ). Within this neighborhood, the effects of noncommutativity regularize both the radial and tangential pressures, as well as the matter density. From the particular choices (<ref>) and (<ref>), the physical parameters (especially the energy density) are finite and asymptotically vanish, supporting the vacuum solution at points far away from the origin.
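For concreteness, the following small Python sketch (our own illustration, using parameter values that appear later in the text) evaluates the two smeared densities and shows that both are finite at the origin and vanish asymptotically.

import numpy as np

M, theta = 1.2, 4.0  # parameter choices used later in the text

def rho_gaussian(r):
    # Gaussian-smeared energy density
    return M * np.exp(-r**2 / (4 * theta)) / (8 * np.pi**1.5 * theta**1.5)

def rho_lorentzian(r):
    # Lorentzian-smeared energy density
    return np.sqrt(theta) * M / (np.pi**2 * (theta + r**2) ** 2)

r = np.array([0.0, 1.0, 5.0, 20.0])
print(rho_gaussian(r))    # finite at r = 0, Gaussian decay
print(rho_lorentzian(r))  # finite at r = 0, decays like r**-4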
In the present manuscript, we attempt to study static spherical symmetric Morris-Thorne wormhole structure in the paradigm of the newly proposed f(ℛ,ℒ_m) gravity. This article is organized as follows: Section <ref> provides a mathematical outline of the modified theory in which we discuss governing equations, wormhole solutions in f(ℛ,ℒ_m) gravity, and energy conditions. In section <ref>, we examine the wormhole model with Gaussian and Lorentzian distribution and derive the corresponding shape functions. Also, we analyze the influence of model parameter on the shape functions and energy conditions. Section <ref> assesses the stability of wormholes using the TOV equation. In section <ref> we interpret the physical aspects of the wormhole by examining average pressure and speed of sound. Finally, section <ref> gives the discussion of results and concluding remarks.
§ MATHEMATICAL FORMULATION OF THE MODIFIED THEORY
§.§ Initiation
Observational constraints have drawn attention to some of the shortcomings of GR, which affect it on both smaller and larger scales, such as the quantum scale and galactic systems, where modifications to the standard action are required to maintain GR as the fundamental theory of gravity. In other words, it is plausible that the gravitational aspect of the standard GR model needs further examination to address these observable issues. This notion could come in the form of generalizations beyond GR that could serve as an alternative to the formulation. For instance, f(ℛ) gravity is one of the most prominent modified geometry theories whose models can agree with observational data. It can reasonably explain the late-time acceleration <cit.> and cosmic inflation <cit.> due to the replacement of the Ricci scalar with its arbitrary function. A generalized version of f(ℛ) theory is presented in <cit.>. Here, along with the modified geometry section, an explicit form of the matter source is coupled to achieve an extension to the matter sector of the standard model of particle physics. The modified action for the theory is described as,
S=∫f(ℛ,ℒ_m) √(-g) d^4x,
where f represents an arbitrary function of the scalar curvature ℛ and the matter Lagrangian ℒ_m. For f=ℛ/2+ℒ_m, one can retain the governing equations of GR. The explicit coupling between geometry and the matter sector results in a non-vanishing covariant derivative of the Energy-Momentum Tensor (EMT), i.e., ∇_a 𝒯^ab≠0. Due to this, the motion of test particles takes a non-geodesic path, which implies a violation of the equivalence principle. Various forms of ℒ_m, representing matter sources, lead to an extra force orthogonal to the four-velocity <cit.>. Recent studies suggest that this theory can be regarded as a possible explanation for cosmic acceleration and dark energy <cit.>. The primary purpose of this manuscript is to assess the wormhole geometry with non-commutativity. In the next section, we shall discuss the governing equations of f(ℛ,ℒ_m) gravity.
§.§ Governing Equations in f(ℛ,ℒ_m) Gravity
Action describes the governing equation of a gravity theory. With the help of (<ref>) we can derive the field equations of f(ℛ,ℒ_m) gravity. By varying (<ref>) with respect to g^ab, the field equation is obtained as,
f_ℛℛ_ab+(g_ab∇_a∇^a-∇_a∇_b)f_ℛ-(1/2)[f- f_ℒ_mℒ_m]g_ab=(1/2)f_ℒ_m𝒯_ab.
Here, f_ℒ_m and f_ℛ represent the partial derivatives of f with respect to the matter Lagrangian ℒ_m and the Ricci scalar ℛ, respectively. The Energy-Momentum Tensor (EMT) 𝒯_ab is defined as,
𝒯_ab=-(2/√(-g))δ(√(-g)ℒ_m)/δ g^ab=g_abℒ_m-2∂ℒ_m/∂ g^ab.
By taking the covariant divergence of EMT we get,
∇^a 𝒯_ab=2{∇^a ln[f_ℒ_m]}∂ℒ_m/∂ g^ab.
Now contracting the governing equation (<ref>) we obtain the following correspondence between matter Lagrangian and the trace of EMT:
3∇_a∇^af_ℛ+f_ℛℛ-2[f -f_ℒ_mℒ_m]=(1/2)f_ℒ_m𝒯.
Using the above equation, one can get another form of the field equation,
f_ℛ( ℛ_ab-(1/3)ℛg_ab) + (g_ab/6)[f -f_ℒ_mℒ_m]=(1/2)(𝒯_ab -(1/3)𝒯g_ab)f_ℒ_m(ℛ,ℒ_m)+∇_a∇_bf_ℛ.
The effective EMT is given by
𝒯_ab^eff=1/f_ℛ[(1/2)(f- ℛf_ℛ)g_ab-(g_ab∇_a∇^a-∇_a∇_b)f_ℛ+(1/2)f_ℒ_mℒ_m g_ab+(1/2)f_ℒ_m𝒯_ab].
The EMT (<ref>) for anisotropic matter becomes,
𝒯_ab=(ρ+p_τ)η_a η_b-p_τ g_ab+(p_r-p_τ)ξ_aξ_b,
where the 4-velocities η^a and ξ^a satisfy η^aη_a=-1=-ξ^aξ_a.
§.§ Wormhole Solution in f(ℛ,ℒ_m) Gravity
The Morris-Thorne metric for the traversable wormhole is described as,
ds^2=e^2R_f(r)dt^2-dr^2/(1-S_f(r)/r) - r^2(dθ^2+sin^2θ dϕ^2),
where R_f(r) and S_f(r) are respectively the redshift and shape functions. The redshift function R_f takes a finite value in the entire spacetime to avoid the presence of a horizon. Here, to reduce the complexity of the problem, we take R_f as a constant, i.e., we are investigating a wormhole in the zero-tidal-force scenario <cit.>. The radial coordinate r takes values ranging from r_0 to ∞. The minimum value r_0 is called the throat radius and is the fixed point of the shape function S_f(r), i.e., S_f(r_0)=r_0. The shape function is significant in achieving the traversability of a wormhole. It is a monotonic function and makes the spacetime asymptotically flat, i.e., S_f(r)/r tends to vanish for infinitely large values of the radial coordinate. Further, the shape function satisfies the flaring-out condition (S_f(r)-rS_f'(r))/S_f(r)^2>0. This, at the throat, becomes S_f'(r_0)<1. Another significant function in describing the geometry of a traversable wormhole is the proper radial distance function:
L_f(r)=±∫_r_0^r √(r/(r-S_f(r))) dr.
The quantity in equation (<ref>) should be finite everywhere in the domain; therefore, S_f(r)<r must hold for all r. The signs ± indicate the upper and lower universes, respectively.
The gravitational interaction of the wormhole geometry with anisotropic matter distribution in f(ℛ,ℒ_m) gravity can be described using the field equations given by,
4f_ℛS_f'/r^2-(f-f_ℒ_mℒ_m)=(2ρ+p_r+2p_τ)f_ℒ_m,
6f_ℛ”(1-S_f/r)+3f_ℛ'((S_f-rS_f')/r^2)+2f_ℛ((3S_f-rS_f')/r^3)
-(f-f_ℒ_mℒ_m)=(-ρ-2p_r+2p_τ)f_ℒ_m,
(6f_ℛ”/r)(1-S_f/r)-f_ℛ((3S_f-rS_f')/r^3)-(f-f_ℒ_mℒ_m)=(-ρ+p_r-p_τ)f_ℒ_m.
§.§ Energy Conditions
As a result of the Raychaudhuri equation, energy conditions govern the physical behavior of matter and energy in motion. One may examine studies on the energy conditions in f(ℛ,ℒ_m) gravity in <cit.>. We will take into account the criteria for the various energy conditions in order to assess the geodesic behavior; a small numerical checker is sketched after the list below. For the EMT (<ref>), with ρ, p_r and p_τ respectively being the energy density, radial pressure, and tangential pressure, we have:
* Null Energy Conditions (NECs): ρ+p_τ≥0 and ρ+p_r≥0.
* Weak Energy Conditions (WECs): ρ≥0, ρ+p_τ≥0, and ρ+p_r≥0.
* Strong Energy Conditions (SECs): ρ+p_j≥0, ∀ j, and ρ+∑_j p_j≥0.
* Dominant Energy Conditions (DECs): ρ≥0, ρ-|p_r|≥0, and ρ-|p_τ|≥0.
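A minimal numerical checker for these conditions, given sampled radial profiles of ρ, p_r, and p_τ, might look as follows (a sketch of our own; the function and argument names are illustrative, not from the paper):

import numpy as np

def energy_conditions(rho, p_r, p_t):
    # Pointwise evaluation of the conditions listed above for an
    # anisotropic fluid; inputs are arrays sampled on a radial grid.
    return {
        "NEC": (rho + p_r >= 0) & (rho + p_t >= 0),
        "WEC": (rho >= 0) & (rho + p_r >= 0) & (rho + p_t >= 0),
        "SEC": (rho + p_r >= 0) & (rho + p_t >= 0)
               & (rho + p_r + 2 * p_t >= 0),
        "DEC": (rho >= 0) & (rho - np.abs(p_r) >= 0)
               & (rho - np.abs(p_t) >= 0),
    }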
§ WORMHOLE MODELS IN F(ℛ,ℒ_M) GRAVITY
In this section, we shall consider a viable wormhole model to study the characteristics of wormhole geometry. In particular, we suppose the non-linear form given by,
f(ℛ,ℒ_m)=ℛ/2+ℒ_m^α,
where α is a free parameter. For α=1 the case reduces to GR. We presume that the matter Lagrangian density ℒ_m depends on the energy density ρ, i.e., ℒ_m=ρ <cit.>. Now, comparing the equations (<ref>) and (<ref>) for the f(ℛ,ℒ_m) model (<ref>), we can get the expressions for the radial and tangential pressures as,
p_r =-(ρ/α)[(α-1)+S_f/(r^3ρ^α)],
p_τ =(r S_f'+S_f)/(2α r^3 ρ^(α-1)) -ρ.
§.§ Gaussian energy density:
The equation (<ref>) describes the energy density for Gaussian distribution. With the physical parameters ρ (<ref>), p_r (<ref>) and p_τ (<ref>) the field equation (<ref>) reduces to,
S_f'(r)/r=8^-απ ^-3 α/2 r (M e^-r^2/4 θ/θ ^3/2)^α.
On solving the above ordinary differential equation, the shape function of the wormhole with Gaussian distribution can be obtained. This is given by,
S_f(r)=2^1-3 απ ^-3 α/2θ[√(πθ) e^α r^2/4 θerf(√(α) r/2 √(θ))-√(α) r] (M e^-r^2/4 θ/θ ^3/2)^α/α ^3/2+k,
where, erf (z)=2/√(π)∫_0^z e^-t^2 dt is the Gauss error function and k is the integrating constant. Now, to obtain the particular solution, we find the value of k by imposing the throat condition S_f(r_0)=r_0. Then we have,
k={√(α) r_0 [(8 π ^3/2)^αα +2 θ(M e^-r_0^2/4 θ/θ ^3/2)^α]-2 √(π) θ ^3/2 e^α r_0^2/4 θ erf(√(α) r_0/2 √(θ)) (M e^-r_0^2/4 θ/θ ^3/2)^α}/((8 π ^3/2)^αα ^3/2).
In order to achieve the traversability of a wormhole the shape function should satisfy the flaring-out condition. For the present scenario, S_f(r) satisfies S_f'(r_0)<1 at the throat if the following inequality holds:
(M e^-r_0^2/4 θ/(8π^3/2θ^3/2))^α<1/r_0^2.
The above inequality is significant in determining the relation between M, θ, α, and r_0. For the GR scenario, with r_0=1 and θ=4, we can get the constraining relation on the total mass M as M<64π^3/2 e^1/16. By plotting the shape function S_f(r) with respect to r, we examined its behavior for M=1.2, θ=4, and r_0=1. One can refer to TABLE<ref> for a detailed analysis. For different values of α, FIG.<ref> shows the characteristics of S_f(r). It is obvious from our choice of k that S_f(r) satisfies the throat condition. From FIG.<ref>, it can be seen that S_f(r)>0 is a monotonically increasing function. Moreover, S_f(r)<r, implying the finiteness of the proper radial distance function. Also, the flaring-out condition is satisfied [FIG.<ref>, <ref>]. The Lorentzian manifold becomes asymptotically flat as the value S_f(r)/r→0 for large r, and this can be interpreted from FIG.<ref>.
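This analysis can be reproduced numerically. The hedged sketch below (our own code, not the paper's) rebuilds the Gaussian-case shape function from the closed-form antiderivative of r^2 ρ^α, fixes the constant by the throat condition, and spot-checks the throat, flaring-out, and asymptotic-flatness conditions.

import numpy as np
from scipy.special import erf

M, theta, r0, alpha = 1.2, 4.0, 1.0, 0.65  # values explored in the text
rho0_a = (M / (8 * np.pi**1.5 * theta**1.5)) ** alpha  # rho(0)**alpha
a = alpha / (4 * theta)

def antiderivative(x):
    # Closed-form antiderivative of x**2 * rho(x)**alpha.
    return rho0_a * (np.sqrt(np.pi) * erf(np.sqrt(a) * x) / (4 * a**1.5)
                     - x * np.exp(-a * x**2) / (2 * a))

def S_f(r):
    # Shape function with the constant fixed by S_f(r0) = r0.
    return r0 + antiderivative(r) - antiderivative(r0)

r = np.linspace(r0, 50.0, 2000)
print(np.isclose(S_f(r0), r0))             # throat condition
print((np.gradient(S_f(r), r) < 1).all())  # flaring-out: S_f' < 1
print(S_f(r[-1]) / r[-1])                  # small: asymptotic flatness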
Substituting equations (<ref>), (<ref>), (<ref>) in (<ref>) and (<ref>), the pressure elements take the form,
p_r=((M e^-r^2/4 θ/θ ^3/2)^(1-α)/(8 π ^3/2α ^5/2 r^3))[-2 √(π)θ ^3/2 e^α r^2/4 θerf(√(α) r/2 √(θ)) (M e^-r^2/4 θ/θ ^3/2)^α+2 θ(√(π)√(θ) e^α r_0^2/4 θerf(√(α) r_0/2 √(θ))-√(α) r_0) (M e^-r_0^2/4 θ/θ ^3/2)^α.
.+√(α)(-r ((α -1) α r^2-2 θ) (M e^-r^2/4 θ/θ ^3/2)^α-8^απ ^3 α/2α r_0)],
p_τ=((M e^-r^2/4 θ/θ ^3/2)^(1-α)/(16 π ^3/2α ^5/2 r^3))[2 √(π)θ ^3/2 e^α r^2/4 θerf(√(α) r/2 √(θ)) (M e^-r^2/4 θ/θ ^3/2)^α+2 θ(√(α) r_0-√(π)√(θ) e^α r_0^2/4 θerf(√(α) r_0/2 √(θ))) (M e^-r_0^2/4 θ/θ ^3/2)^α.
.+√(α)(8^απ ^3 α/2α r_0-r (2 θ +α (2 α -1) r^2) (M e^-r^2/4 θ/θ ^3/2)^α)].
Furthermore, with the above pressure elements, we examined the energy conditions NEC, DEC, and SEC for the Gaussian distribution [ref FIG.<ref>]. In this case, the NEC is not satisfied for the radial pressure [FIG.<ref>], but for the tangential pressure it holds [FIG.<ref>]. Also, both DECs are violated and the SEC is satisfied [FIG.<ref>,<ref>,<ref>]. In addition, NEC_eff≡(rS_f'-S_f)/r^3 is violated as a consequence of the satisfied flaring-out condition.
Wormhole Solutions: In the context of the Gaussian distribution, we have verified the criteria satisfied by the wormhole, such as finite redshift, the throat condition, the asymptotic condition, the flaring-out condition, and the violation of the effective null energy condition. Based on the constraining relation (<ref>), we chose M=1.2, θ = 4 and r_0=1 and analyzed the influence of the model parameter α on the wormhole solution. For different values of α (say, α_i=0.645, 0.650, 0.655, 0.660, 0.665), the corresponding wormhole metrics read:
ds^2=e^cdt^2-ψ_i dr^2 - r^2(dθ^2+sin^2θ dϕ^2),
where c is some constant. For shape functions S_f_i corresponding to α_i, ψ_i≡(1-S_f_i/r)^-1, with
ψ_1= r/(e^-r^2/16)^0.645(0.315227 r-1.39139 e^0.0403125 r^2erf(0.20078 r))+r-0.99173,
ψ_2= r/(e^-r^2/16)^0.65(0.304023 r-1.33676 e^0.040625 r^2erf(0.201556 r))+r-0.991964,
ψ_3= r/(e^-r^2/16)^0.655(0.293234 r-1.2844 e^0.0409375 r^2erf(0.20233 r))+r-0.992191,
ψ_4= r/(e^-r^2/16)^0.66(0.282845 r-1.23419 e^0.04125 r^2erf(0.203101 r))+r-0.992411,
ψ_5= r/(e^-r^2/16)^0.665(0.272839 r-1.18605 e^0.0415625 r^2erf(0.203869 r))+r-0.992626.
§.§ Lorentzian energy density:
The non-commutative geometric distribution is an intrinsic aspect of a Lorentzian manifold <cit.>. It is independent of the spacetime properties such as curvature. In this section, we study the scenario of the traversable wormhole with Lorentzian energy density distribution (<ref>). Substituting (<ref>), (<ref>) and (<ref>), the field equation (<ref>) becomes,
S_f'(r)/r=π ^-2 α r (√(θ) M/(θ +r^2)^2)^α.
The aforementioned equation is significant in determining the desired shape function. It is known that the shape function S_f(r) at the throat should have a fixed point i.e., S_f(r_0)=r_0. Therefore, the ordinary differential equation (<ref>) is an initial value problem. The particular solution of this equation is obtained as,
S_f(r)=(r^3/(3π^(2α)))( Mθ ^-3/2)^α _2F_1(3/2,2α ;5/2;-r^2/θ) +k,
where, _2F_1(a,b;c;z) is the hypergeometric function and k is the constant of integration given by,
k=r_0-(r_0^3/(3π^(2α)))( Mθ ^-3/2)^α _2F_1(3/2,2α ;5/2 ;-r_0^2/θ).
Additionally, we consider a constraining relation
(M√(θ)/(π^2(θ+r_0^2)^2))^α<1/r_0^2,
in order to satisfy the flaring-out condition at the throat. For α=1 we can retain the inequality for GR. With r_0=1 and θ=4 the inequality (<ref>) reads, M<25π^2/2.
Now we have to choose the range of values of the model parameter for which the obtained shape function satisfies all the necessary requirements. To this end, with the help of the plot of the shape function versus the radial coordinate, we studied the behavior of S_f(r). The value of α is constrained to get a viable form of the shape function with the Lorentzian distribution [refer to TABLE<ref>]. The effect of the model parameter α on S_f(r) for M=1.2, θ=4 and r_0=1 is depicted in FIG.<ref>. It can be observed that S_f(r) is a non-negative monotonically increasing function in the domain of the radial coordinate r [FIG.<ref>] and satisfies the condition S_f(r)<r. Further, FIG.<ref>, <ref> reveal that the shape function obeys the flaring-out condition. For an infinitely large value of the radial coordinate, S_f(r)/r approaches zero [FIG.<ref>]. Thus, we can say that the shape function so obtained for the Lorentzian distribution satisfies all the essential conditions.
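Analogously to the Gaussian case, the Lorentzian shape function can be spot-checked numerically with scipy's hypergeometric function (again a sketch of our own, not code from the paper):

import numpy as np
from scipy.special import hyp2f1

M, theta, r0, alpha = 1.2, 4.0, 1.0, 0.65  # values explored in the text

def S_f(r):
    # Shape function with the integration constant fixed by S_f(r0) = r0.
    pref = (M * theta**-1.5) ** alpha / (3 * np.pi ** (2 * alpha))
    def term(x):
        return pref * x**3 * hyp2f1(1.5, 2 * alpha, 2.5, -x**2 / theta)
    return r0 + term(r) - term(r0)

r = np.linspace(r0, 100.0, 2000)
print(np.isclose(S_f(r0), r0))             # throat condition
print((np.gradient(S_f(r), r) < 1).all())  # flaring-out: S_f' < 1
print(S_f(r[-1]) / r[-1])                  # small: asymptotic flatness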
Further, with the shape function (<ref>) and energy density (<ref>), the radial and tangential pressures can be rewritten as,
p_r=((√(θ) M/(θ +r^2)^2)^(1-α)/(3 π^(2α)θ r^3))[-r^3 (θ +r^2) _2F_1(1,5/2-2 α ;5/2;-r^2/θ) (√(θ) M/(θ +r^2)^2)^α-3 (α -1) θ r^3 (√(θ) M/(θ +r^2)^2)^α.
.-3 π^(2α)θ r_0+r_0^3 (θ +r_0^2) _2F_1(1,5/2-2 α ;5/2;-r_0^2/θ) (√(θ) M/(θ +r_0^2)^2)^α].
p_τ=((√(θ) M/(θ +r^2)^2)^(1-α)/(6 π^(2α)θ r^3))[r^3 (θ +r^2) _2F_1(1,5/2-2 α ;5/2;-r^2/θ) (√(θ) M/(θ +r^2)^2)^α-3 (2 α -1) θ r^3 (√(θ) M/(θ +r^2)^2)^α.
.-r_0^3 (θ +r_0^2) _2F_1(1,5/2-2 α ;5/2;-r_0^2/θ) (√(θ) M/(θ +r_0^2)^2)^α+3 π^(2α)θ r_0].
In addition, energy conditions interpret the characteristics of motion of energy and matter. Here, we studied the behavior of various energy conditions for Lorentz distribution with M=1.2, θ=4, and r_0=1. The NEC is violated for radial pressure, supporting the requirement of the exotic fluid. Further, SEC and tangential NEC are obeyed. There is a violation of both the DECs.
In the next section, we shall analyze the physical aspects of the Gaussian and Lorentzian wormholes.
Wormhole solutions: Within the framework of the Lorentzian distribution, we have examined the criteria that a traversable wormhole should satisfy. These include finite redshift, the throat condition, the asymptotic condition, and the flaring-out condition, as well as the violation of the effective null energy condition. By utilizing the constraining relation denoted by (<ref>), we have selected specific values for the parameters M=1.2, θ=4, and r_0=1, enabling us to investigate the impact of the model parameter α on the wormhole solution. For various values of α (i.e., α_i=0.645, 0.650, 0.655, 0.660, 0.665), the corresponding metrics describing the wormhole are as follows:
ds^2=e^cdt^2-ψ_i dr^2 - r^2(dθ^2+sin^2θ dϕ^2),
where ψ_i's are given by,
ψ_1= -44.6546 r/1. (r^2+4)^1.29(1/(r^2+4)^2)^0.645 r^3 _2F_1(1.29,3/2;5/2;-r^2/4)-44.6546 r+43.8154,
ψ_2= -45.5992 r/1. (r^2+4)^1.3(1/(r^2+4)^2)^0.65 r^3 _2F_1(1.3,3/2;5/2;-r^2/4)-45.5992 r+44.7611,
ψ_3= -46.5637 r/1. (r^2+4)^1.31(1/(r^2+4)^2)^0.655 r^3 _2F_1(1.31,3/2;5/2;-r^2/4)-46.5637 r+45.7268,
ψ_4= -47.5487 r/1. (r^2+4)^1.32(1/(r^2+4)^2)^0.66 r^3 _2F_1(1.32,3/2;5/2;-r^2/4)-47.5487 r+46.7129,
ψ_5= -48.5545 r/1. (r^2+4)^1.33(1/(r^2+4)^2)^0.665 r^3 _2F_1(1.33,3/2;5/2;-r^2/4)-48.5545 r+47.7199.
§ EQUILIBRIUM CONDITION
In this section, we shall analyze the stability of Gaussian and Lorentzian wormhole models. For this purpose, we use the Tolman-Oppenheimer-Volkov (TOV) equation <cit.>:
p_r'+(ϖ'/2)(ρ+p_r)+(2/r)(p_r-p_τ)=0,
where primes (') represent the derivative with respect to the radial coordinate r and ϖ=2R_f. The aforesaid equation describes the equilibrium phase of a wormhole with gravitational F_g, hydro-static F_h, and anisotropic F_a forces. These forces are defined by,
F_g =-R_f'(ρ+p_r),
F_h =-p_r',
F_a =(2/r)(p_τ-p_r).
Thus, (<ref>) can be rewritten as F_g+F_h+F_a=0. Since we have considered the tideless scenario, R_f is constant, so F_g=0, implying F_h+F_a=0. FIG.<ref> and FIG.<ref> illustrate the behavior of the hydrostatic and anisotropic forces for the Gaussian distribution and the Lorentzian distribution. From these plots, one can assess the influence of the model parameter on the equilibrium condition. It can be interpreted from FIG.<ref> that the natures of the two forces are similar but opposite to one another.
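Given sampled radial profiles of the two pressures, the equilibrium condition can be checked numerically by a simple residual, as in the hedged sketch below, where a finite difference stands in for the analytic derivative:

import numpy as np

def equilibrium_residual(r, p_r, p_t):
    # Zero-tidal-force case: F_g = 0, so equilibrium demands F_h + F_a = 0.
    F_h = -np.gradient(p_r, r)       # hydrostatic force
    F_a = (2.0 / r) * (p_t - p_r)    # anisotropic force
    return F_h + F_a                 # should vanish up to grid error

Evaluating this residual with the closed-form pressures derived above should return values close to zero on a sufficiently fine grid.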
§ INTERPRETATION OF WORMHOLE
In this section, we shall discuss the physical aspects of the wormhole models.
§.§ Average Pressure
The average pressure p can be described as,
p=(1/3)(p_r+2p_τ).
For Gaussian distribution, the expression for average pressure reads,
p=(2-3 α ) M e^-r^2/4 θ/(24 π ^3/2αθ ^3/2),
and for the Lorentzian distribution, it is given by,
p=-(3 α -2) √(θ) M/(3 π ^2 α(θ +r^2)^2).
§.§ Speed of Sound
The speed of sound parameter v_s^2 determines the stability of a wormhole. A wormhole is said to be stable if 0<v_s^2<1 <cit.>. The speed of sound parameter is expressed as,
v_s^2=dp/dρ.
For both the Gaussian and Lorentzian distributions, this physical quantity is given by,
dp/dρ=2/(3α)-1.
One can note that 0≤ dp/dρ<1 is satisfied for 1/3<α≤2/3. Therefore, the non-commutative wormhole models are stable in this range of α.
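A short scan over α makes the stability window explicit (our own check):

import numpy as np

alphas = np.linspace(0.2, 0.8, 7)
v_s2 = 2.0 / (3.0 * alphas) - 1.0  # dp/drho for both distributions
for a, v in zip(alphas, v_s2):
    print(f"alpha={a:.2f}  v_s^2={v:+.3f}  stable={0 <= v < 1}")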
§ RESULTS AND CONCLUDING REMARKS
A wormhole describes a path connecting two points of the universe. Recent research interest in this astrophysical entity is growing due to its unique geometric features. In our current study, we used an explicit coupling of matter with geometry, which replaces the Ricci scalar in the Einstein-Hilbert action with an arbitrary function of the Ricci scalar and the matter Lagrangian. A geometry-matter coupling theory such as f(ℛ,ℒ_m) gravity can remarkably address the issue of exotic matter <cit.>. On the other hand, non-commutative geometry with modified matter sources provides a mathematical approach to dealing with physical phenomena.
* With the anisotropic matter distribution, we studied wormholes with zero tidal force.
* Further, we presumed a non-linear f(ℛ,ℒ_m) model f(ℛ,ℒ_m)=ℛ/2+ℒ_m^α with α being a free parameter. From <cit.>, we can see that the model is capable of explaining the present scenario of the universe.
* In the first case, we examined the wormhole scenario with Gaussian distribution. Here, we derived a shape function satisfying the throat condition. We verified the range of values of the model parameter α for which the obtained shape function obeys the flaring-out and asymptotic flatness conditions [ref TABLE<ref>].
* In the second case, we considered Lorentzian distribution to analyze wormhole properties. The obtained shape function obeys all the necessary conditions for a traversable wormhole. The parameter values of α for which the shape function fulfills the criteria are represented in the TABLE<ref>.
* We verified the flaring-out condition at the throat for both non-commutative geometries. The obtained inequalities (M e^-r_0^2/4θ/(8π^3/2θ^3/2))^α<1/r_0^2 and (M√(θ)/(π^2(θ+r_0^2)^2))^α<1/r_0^2 constrain the parameter values M, r_0, and θ. In addition, with the aid of the speed of sound parameter, we checked the stability of the wormhole. It is inferred from equation (<ref>) that these non-commutative wormholes are stable for 1/3<α≤2/3. Further, the equilibrium condition is verified through the TOV equation.
* In FIG.<ref> and <ref>, we analyzed the impact of model parameter α on the behavior of shape functions. It is necessary to note that a minute variation in the value of α can impact the nature of shape functions. The nature of both shape functions is similar to that of the results obtained by Shamir and his collaborators <cit.>, in the context of exponential gravity coupled with the matter.
* In addition, throughout the manuscript, we analyzed the influence of this parameter α on the physical properties of the wormhole. Also, in all figures, we plotted the profile for those values of α for which we obtain a physically plausible wormhole solution. Further, in our analysis, we ignored those values of the model parameter leading to the negatively defined energy density. FIG.<ref> and <ref> show the density profile for both Gaussian and Lorentzian distribution.
* In both cases, there is a violation of the NEC, implying the existence of an exotic matter source. This agrees with the result obtained in GR and various modified theories with non-commutativity. In <cit.>, for the linear f(Q) model with charge, NEC is violated. Also, in <cit.>, the result is obtained in the absence of charge. A similar instance can be seen in <cit.> with matter coupling in teleparallel gravity, which indicates the presence of exotic matter. In the context of Rastall gravity, Mustafa et al. <cit.> obtained the wormhole solutions violating NEC.
In all, this letter has presented stable, viable wormhole models in the framework of f(ℛ,ℒ_m) gravity with non-commutative distributions.
§ DATA AVAILABILITY STATEMENT
There are no new data associated with this article.
N.S.K. and V.V. acknowledge DST, New Delhi, India, for its financial support for research facilities under DST-FIST-2019.
70
flamm L. Flamm, Phys. Z. 17, 448 (1916).
erbridge A. Einstein and N. Rosen, Phys. Rev. 48, 73 (1935).
morrisandthorne M. S. Morris and K. S. Thorne, Am. J. Phys. 6, 395 (1988).
em1 M. Visser, Lorentzian Wormholes: From Einstein to Hawking (American Inst. of Physics, 1995).
em2 P. Gao, D. L. Jafferis and A. C.Wall, J. High Energy Phys. 2017, 1 (2016).
em3 J. Maldacena and X.-L. Qi, arXiv:1804.00491 (2018).
em4 E. Caceres, A. Kundu, A. K. Patra, et al., J. High Energy Phys. 02, 149 (2020).
em5 C. Armendáriz-Picón, Phys. Rev. D 65, 104010 (2002).
em6 A. Nicolis, R. Rattazzi and E. Trincherini, J. High Energy Phys. 2010, 95 (2010).
em7 M. Visser, Nuclear Phys. B 328, 203 (1989).
em8 P. K. F. Kuhfittig, Amer. J. Phys. 67, 125 (1999).
ec1 C.G. Böhmer, T. Harko and F.S.N. Lobo, Phys. Rev.D 85, 044033 (2012).
ec2 F.S.N. Lobo and M.A. Oliveira, Phys. Rev. D 80, 104012 (2009).
ec3 P. Kanti, B. Kleihaus and J. Kunz, Phys. Rev. Lett. 107, 271101 (2011).
CZ1 S. Capozziello, R. Pincak and E. N. Saridakis, Annals Phys. 390, 303 (2018).
CZ2 S. Capozziello, R. Pincak and E. Bartos, Symmetry 12, 774 (2020).
CZ3 S. Capozziello and M. Francaviglia, Gen. Relativ. Grav. 40, 357 (2008).
CZ4 S. Capozziello, S. Nojiri, S. D. Odintsov, et al., Phys. Lett. B 639, 135 (2006).
CZ5 S. Capozziello, V. F. Cardone and A. Troisi, Phys. Rev. D 71, 043503 (2005).
CZ6 S. Capozziello, A. Stabile and A. Troisi, Class. Quantum Grav. 24, 2153 (2007).
CZ7 S. Capozziello, M. De Laurentis, S. D. Odintsov, et al., Phys. Rev. 83, 064004 (2011).
CZ8 S. Capozziello, O. Luongo and L. Mauro, Eur. Phys. J. Plus 136, 167 (2021).
Luongo3
S. Capozziello, R. D`Agostino, O. Luongo, Int. J. Mod. Phys. D 28, 1930016 (2019).
CZ9 S. Capozziello and Nisha Godani, Phys. Lett. B 835, 137572 (2022).
ref1 F. Rahaman, M Kalam, M Sarker, et al., Phys. Lett. B 633, 2-3 (2006).
ref2 F. S. N. Lobo, A. Simpson and M. Visser, Phys. Rev. D 101, 124035 (2020).
ref3 M. Zubair, S. Waheed and Y. Ahmad, Eur. Phys. J. C 76, 444 (2016).
ref4 J. Maldacena, A. Milekhin and F. Popov, arXiv:1807.04726v3 (2020).
ref5 A. Övgün, Phys. Rev. D 98, 044033 (2018).
Luongo1
O. Luongo, H. Quevedo, Phys. Rev. D 90, 084032 (2014).
Luongo2
O. Luongo, H. Quevedo, arXiv:1005.4532 (2010).
mat G. Mustafa, M. Ahmad, A. Övgün, et al., Fortschr. Phys. 69 2100048 (2021).
mat1Z. Hassan, S. Mandal and P. K. Sahoo, Fortschr. Phys. 69, 2100023 (2021);
R. C. Tefo, P. H. Logbo, M. J. S. Houndjo, et al., Int. J. Mod. Phys. D 28, 1950065 (2019).
gmc1 P. K. Sahoo, P. H. R. S. Moraes and P. Sahoo, Eur. Phys. J. C 78, 46 (2018).
gmc2 E. Elizalde and M. Khurshudyan, Int. J. Mod. Phys. D 28, 1950172 (2019).
gmc3 U. K. Sharma and A. M. Kumar, Found. Phys. 51, 50 (2021).
gmc4 N.M. Garcia and F.S.N. Lobo, Class. Quant. Grav. 28, 085018 (2011).
bw L. A. Anchordoqui and S. E. P. Bergliaffa, Phys. Rev. D 62, 067502 (2000).
ncg G. Mustafa, Z. Hassan and P. K. Sahoo, Ann. Phys. 437, 168751 (2022).
ncg1 F. Rahaman, S. Islam, P. K. F. Kuhfittig, et al., Phys. Rev. D 86, 106010 (2012);
M. Sharif and S. Rani, Phys. Rev. D 88, 123501 (2013).
nc1 P. Aschieri, M. Dimitrijević, F. Meyer, et al, Class. Quant. Grav. 23, 1883 (2006).
nc2N. Seiberg and E. Witten, J. High Energy Phys. 9, 32 (1999).
nc3 S. Doplicher, K. Fredenhagen and J.E. Roberts, Phys. Lett. B 331, 39 (1994).
nc4 H. Kase, K. Morita, Y. Okumura, et al., Prog. Theor. Phys. 109, 663 (2003).
nc5 A. Smailagic and E. Spallucci, J. Phys. A 37, 1 (2004).
nc6 P. Nicolini, Int. J. Mod. Phys. A 24, 1229 (2009).
nc7 M. Schneider and A. DeBenedictis, Phys. Rev. D 102, 024030 (2020).
nc8 S. Sushkov, Phys. Rev. D 71, 043520 (2005).
nc9 P. K. F. Kuhfittig, Adv. High Energy Phys. 2012, 462493 (2012).
nc10 P. Nicolini and E. Spalluci, Class. Quant. Grav. 27, 015010 (2010).
nc11 F. Rahaman, S. Islam, P. K. F. Kuhfittig, et al., Phys. Rev. D 86, 106010 (2012).
intrinsic P Nicolini, A Smailagic and E Spallucci, Phy. Lett. B 632, 547-551 (2006).
nc A. Smailagic and E. Spalluci, J. Phys. A Math. Gen. 36, L467 (2003).
fr1 S. M. Carroll, V. Duvvuri, M. Trodden, et al., Phys. Rev. D 70, 043528 (2004).
Luongo4
S. Capozziello, M. De Laurentis, O. Luongo, Int. J. Mod. Phys. D 24, 1541002 (2015).
fr2 S. Capozziello, V. F. Cardone and A. Troisi, Mon. Not. R. Astron. Soc. 375, 1423 (2007); S. Nojiri and S.D. Odintsov, Phys. Lett.B 657, 238(2007).
frlm T. Harko and F. S. N. Lobo, Eur. Phys. J. C 70, 373 (2010).
lmrho2 O. Bertolami, F. S. N. Lobo and J. Pàramos, Phys. Rev. D 78, 064036 (2008).
extraforce O. Bertolami, C. G. Boehmer, T. Harko, et al., Phys. Rev. D 75, 104016 (2007).
lmp1 T. P. Sotiriou and V. Faraoni, Class. Quant. Grav. 25, 205002 (2008).
lmp2 B. F. Schutz, Phys. Rev. D 2, 2762 (1970).
frlm1 M. Bañados and P.G. Ferreira, Phys. Rev. Lett. 105, 011101 (2010).
frlm2 J. Wang and K. Liao, Class. Quant. Grav. 29, 215016 (2012).
frlm3 L. V. Jaybhaye, R. Solanki, S. Mandal et al., Phys. Lett. B 831, 137148 (2022).
frlm4 N.S. Kavya, V.Venkatesha, S. Mandal, et al., Phys. Dark Universe 38, 101126 (2022).
frlm5 A. Pradhan, D. C. Maurya, G. K. Goswami, et al., arXiv:2209.14269 (2022).
tidal R. A. Konoplya, Phys. Lett. B 784, 43-49 (2018); D. J. Gogoi and U. D. Goswami, J. Cosmol. Astropart. Phys. 02, 027 (2023)
ecfrlm J. Wang and K. Liao, Class. Quantum Grav. 29, 215016 (2012).
whfrlm1 N. M. Garcia and F. S. N. Lobo, Phys. Rev. D 82, 104018 (2010).
lmrho1 J. D. Brown, Class. Quant. Grav. 10, 1579 (1993).
lmrho3 S.W. Hawking and G. F. R. Ellis, The Large Scale Structure of Spacetime (Cambridge University Press, Cambridge, England, 1973).
lmrho4 V. Faraoni, Phys. Rev. D 80, 124040 (2009).
lmrho5 N. M. Garcia and F. S. N. Lobo, Phys. Rev. D 82, 104018 (2010).
tov P. K. F. Khufittig, Fund. J. Mod. Phys. 14, 23-31 (2020); F. Rahaman, P. K. F. Kuhfittig, S. Ray, et al., Eur. Phys. J. C 74, 2750 (2014).
sound1 E. Poisson and M. Visser, Phys. Rev. D 52 7318 (1995); M. Sharif and M. Azam, J. Phys. Soc. Japan 81, 124006 (2012).
sound2 A.A. Usmani, Z. Hasan, F. Rahaman, et al., Gen. Rel. Gravity 42, 2901 (2010); F. Rahaman, Sk. A. Rahman, S.A. Rakib, et al., Internat. J. Theoret. Phys. 49, 2364 (2010).
sound3 S. Mandal, G. Mustafa, Z. Hassan, et al., Phys. Dark Universe. 35, 100934 (2022).
sound4 K. Boshkayev, T. Konysbayev, E. Kurmanov, O. Luongo et al., Galaxies 8, 74 (2020).
shape M. F. Shamir, G. Mustafa and A. Fazal, New Astron. 83, 101459 (2021).
res O. Sokoliuk, Z. Hassan, P. K. Sahoo, et al., Ann. Phys. 443, 168968 (2022).
res1 G. Mustafa, S. Waheed, M. Zubair, et al, Chin. J. Phys. 65, 163-176 (2020).
|
http://arxiv.org/abs/2307.03296v1
|
20230706211050
|
Gammatonegram Representation for End-to-End Dysarthric Speech Processing Tasks: Speech Recognition, Speaker Identification, and Intelligibility Assessment
|
[
"Aref Farhadipour",
"Hadi Veisi"
] |
eess.AS
|
[
"eess.AS",
"cs.CL",
"cs.SD"
] |
Gammatonegram Representation for End-to-End Dysarthric Speech Processing Tasks: Speech Recognition, Speaker Identification, and Intelligibility Assessment
==========================================================================================================================================================
Dysarthria is a disability that causes a disturbance in the human speech system and reduces the quality and intelligibility of a person's speech. Because of this effect, normal speech processing systems cannot work properly on impaired speech. This disability is usually associated with physical disabilities. Therefore, designing a system that can perform some tasks in a smart home by receiving voice commands can be a significant achievement. In this work, we introduce the gammatonegram as an effective method to represent audio files with discriminative details, which is used as input for a convolutional neural network. In other words, we convert each speech file into an image and propose an image recognition system to classify speech in different scenarios. The proposed CNN is based on transfer learning with the pre-trained Alexnet. In this research, the efficiency of the proposed system for speech recognition, speaker identification, and intelligibility assessment is evaluated. According to the results on the UA dataset, the proposed speech recognition system achieved 91.29% accuracy in speaker-dependent mode, the speaker identification system acquired 87.74% accuracy in text-dependent mode, and the intelligibility assessment system achieved 96.47% accuracy in two-class mode. Finally, we propose a multi-network speech recognition system that works fully automatically. This system is located in a cascade arrangement with the two-class intelligibility assessment system, and the output of that system activates one of the speech recognition networks. This architecture achieves a word recognition rate (WRR) of 92.3%. The source code of this paper is available [<https://github.com/areffarhadi/Gammatonegram_CNN_Dysarthric_speech>].
Disordered Speech, dysarthric Speech, Gammatonegram, CNN, Speech Recognition, Speaker Identification, Intelligibility Assessment.
§ INTRODUCTION
Speech is the ability to express emotions and thoughts through vocal sounds to communicate with others. However, due to factors such as illness or physical disability, speech may take an incomprehensible form, causing an interruption in the communication process. People who suffer from dysarthria cannot produce natural speech due to a lack of control of the articulatory part of their brain. In addition to the speech disability, such people usually have physical disabilities, which prevent them from performing simple daily tasks properly.
Automatic systems based on Artificial Intelligence (AI) can come to the aid of humans, and helping disabled people has always been one of these areas. For example, when a person cannot do something for any reason, AI systems can deliver a constant, pre-defined performance without being affected by environmental or mental parameters. For people with disordered speech, it is advantageous to have a system that can automatically process their speech to improve their life. For instance, in smart home scenarios for disabled people, basic tasks such as operating the TV, turning the lamp off and on, and working with the computer can be handled with the help of Automatic Speech Recognition (ASR) systems that receive and recognize voice commands and react appropriately. Designing a system that can perform ASR correctly for impaired and sometimes highly variable speech is challenging. Indeed, a typical ASR system for normal speech does not perform suitably on impaired speech <cit.>, so a particular ASR system should be designed for this kind of speech that can learn the characteristics of impaired speech and achieve acceptable performance.
In this study, we try to design efficient systems for dysarthric speech processing. Most conventional speech processing systems use short-term speech features <cit.>. The idea in this work originates from the fact that, to design a system that works appropriately for dysarthric isolated word recognition, an overall view of an audio file can be helpful, because the speech of dysarthric individuals sometimes has unwanted interruptions in the middle of a word, especially in plosive sounds. Moreover, a syllable may be repeated several times in a row periodically, and this event can be short or long, depending on mental and physical conditions. Therefore, analysis at the word level or over a long duration can be helpful, in which the system decides by observing a general representation of a voice command.
On the other hand, using an efficient method for a discriminative and detailed representation of audio files improves these systems' efficiency in the real world. In our approach, every audio signal is represented as an efficient and integrated image. The gammatonegram is a weighted version of the traditional spectrogram. Human speech has an inherent characteristic that a large part of its information occurs in the low-frequency range. The gammatone filter-bank, in turn, works non-linearly across low and high frequencies in such a way that it provides higher resolution for low frequencies and lower resolution for high frequencies. Because of this behavior, the gammatonegram can represent speech efficiently.
In recent years, deep learning has achieved significant results in many fields of signal processing <cit.>, especially image processing, in which convolutional neural networks (CNNs) have an essential role <cit.>. Therefore, many studies introduced the idea of using CNNs in speech processing <cit.>. In this study, we use CNNs to build the proposed systems and the transfer learning technique, which is suited to data-limited conditions, to train the networks <cit.>. Three scenarios, speech recognition, speaker identification, and intelligibility assessment, are defined to evaluate the proposed algorithms. Moreover, we introduce a cascade multi-network ASR system based on speakers' intelligibility levels.
The remainder of the article is organized as follows: Section <ref> analyzes the related works in dysarthric speech processing. Section <ref> explains the methodology that yields the objective of this research. Section <ref> reports the system parameters and experimental results, and section <ref> presents the discussion and conclusions.
§ RELATED WORK
This study involves systems for three tasks: speech recognition, speaker recognition, and intelligibility assessment. This subsection reports some of the newest related works in these categories.
Dysarthric speech recognition is one of the most interesting tasks in impaired speech processing. Most conventional dysarthric speech recognition systems used Hidden Markov Models (HMMs) with several states to model the sequential structure of the speech signal and Gaussian Mixture Models (GMMs) to model the distribution of the features in each state <cit.>. In dysarthric speech recognition, the limited training data is challenging, whereas HMM-based systems need a large amount of data. Therefore, this method faces a significant obstacle in the context of dysarthric speech.
In recent years, impaired speech processing performance has grown thanks to the development of deep neural network (DNN) algorithms. Kim et al. <cit.> adopted convolutional long short-term memory recurrent neural networks to model dysarthric speech in a speaker-independent situation. The authors in <cit.> attempted to use a gated neural network to explore the robust integration of pitch features to improve disordered speech recognition performance. The study conducted in <cit.> proposed a denoising autoencoder to enhance dysarthric speech and improve feature extraction. Shahamiri <cit.> proposed a speech vision system based on the UA speech dataset for dysarthric speech recognition. It generates synthetic voicegrams for all vocabulary words and speakers and uses them in addition to the original data for training. This method delivered an average word recognition accuracy of 64.71%. Some works focused on applying meta-learning to find an end-to-end model initialization for dysarthric speech recognition <cit.>. They introduced a base model pre-trained from large-scale normal speech data and proposed methods to meta-update the base model by incorporating across-dysarthric-speakers' knowledge into the re-initialized model. Speaker adaptation results on the UASpeech dataset achieved a 7.6% relative word error rate reduction.
In <cit.>, a set of novel modeling techniques, including neural architecture search, data augmentation, model-based speaker adaptation, and cross-domain generation of visual features within an audio-visual speech recognition framework, was employed. The combination of these techniques produced a word error rate of 25.21% on the UA Speech dataset. The multi-stream model introduced in <cit.> consists of convolutional and recurrent layers. It allows for fusing the vocal tract and excitation components. Moreover, the authors proposed a system with various features, studied the training dynamics, explored the usefulness of data augmentation, and provided an interpretation of the learned convolutional filters. Compared with their baseline system, the proposed approach results in up to 1.7% absolute WER reduction for the TORGO dysarthric speech corpus. Their best model reaches 40.6% and 11.8% WER for dysarthric and typical speech, respectively. The work in <cit.> proposed an end-to-end ASR framework trained not only on the speech data of a Japanese person with an articulation disorder but also on the speech data of a physically unimpaired Japanese person and a non-Japanese person with an articulation disorder, to relieve the lack of training data for a target speaker. In that work, an acoustic model portion was shared between persons with dysarthria, and a language model portion was assigned to each language regardless of dysarthria. Experimental results reported the merit of the proposed approach of using multiple databases for speech recognition.
Few studies have been published on dysarthric speaker recognition tasks. One of our previous works <cit.> described the performance of a typical ANN-based system with deep belief network-based features. This system was implemented in single- and multi-network modes. The best results on the UA speech dataset were yielded in the multi-network condition, with 97.3% speaker identification accuracy for 16 dysarthric speakers. In another work, <cit.> presented new approaches to improve the analysis and classification of disordered speech. For this purpose, an ear model was introduced. This ear model provided relevant auditory-based cues that were combined with the usual Mel-Frequency Cepstral Coefficients to represent atypical speech utterances. The experiments were carried out using data from both the Nemours and Torgo databases of dysarthric speech. Gaussian Mixture Models, Support Vector Machines, and hybrid systems were tested and compared in the context of dysarthric speaker identification. The experimental results achieved a correct speaker identification rate of 97.2%.
Some researchers work on speech intelligibility assessment or severity level measurement. In <cit.>, a new technique to detect dysarthric severity levels was proposed. The authors presented time-domain, frequency-domain, and Teager Energy Operator analyses of dysarthric speech to justify the spectrogram as a feature representation particularly capable of capturing unstructured spectral energy density distributions. Quantifying dysarthria severity based on a residual neural network with short speech segments reported 98.9% accuracy on the UA speech dataset.
Al-Qatab et al. <cit.> examined acoustic features and feature selection methods to improve the classification of dysarthric speech. Four acoustic feature sets, covering prosody, spectral, cepstral, and voice quality features, were used for feature extraction. Furthermore, six classification algorithms were evaluated. The best classification accuracy was 95.80%. A comparative study on the classification of dysarthria severity levels using different deep learning techniques and speech-disorder-specific features computed from prosody, articulation, phonation, and glottal functioning was carried out on DNN models <cit.>. In the best situation, the proposed system gave an accuracy of 93.97% under the speaker-dependent scenario and 49.22% under the speaker-independent scenario for the UA-Speech database. Hall et al. in <cit.> reported the optimal setup of deep learning-based dysarthric intelligibility assessment and explained different evaluation strategies. The results indicate an average of 78.2% classification accuracy for unforeseen low-intelligibility speakers, 40.6% for moderate-intelligibility speakers, and 40.4% for high-intelligibility speakers.
§ METHODOLOGY
This section introduces the methods and algorithms used in this study, including transfer learning description, gammatonegram introduction, dysarthria speech dataset, and Voiced Activity Detector (VAD) algorithm presentation.
§.§ Transfer Learning
Convolutional neural networks are the most commonly used algorithm in image processing. The term convolutional means the network has one or several layers that use the convolution operator. Generally, this network has two parts. The first part, which performs convolution, is responsible for feature extraction and input information processing. In fact, during the learning process, this part of the network should acquire a correct understanding of vision, which is achieved by multilayer convolutional processing. The second part of the network is a classifier that uses the features extracted in the first part to build a model for each class, so that for a given speech file it determines the associated class based on the extracted features.
A CNN needs a lot of data to train correctly, but CNNs trained for a specific task can be modified and reused in a new scenario under low-data conditions. Such a pre-trained convolutional neural network contains information about the dimension and content of the input data, which determines the type and size of the input image. Moreover, the model's parameters are predetermined, such as the number and type of layers, the architecture, and the connections between the layers. Transfer learning uses the weights and parameters of a pre-trained CNN for a new scenario. This technique reduces the need for extensive training data because the network already possesses an understanding of vision from its former training.
The Alexnet is a classic convolutional neural network with five convolutional layers to extract more valuable features in deeper layers <cit.>. The last convolutional layer connects to three fully connected layers. The outputs of these layers use the ReLU activation function. The last layers are the softmax and classifier, which determine the output based on the 1000 trained classes. The input of this network is a colored image with dimensions of 227*227*3. The architecture of this network includes about 60 million parameters and more than 650,000 neurons. This network is trained with more than one million images from the Imagenet dataset. Therefore, according to the classical structure of this network, we used it as the primary network for transfer learning. The structure and parameters of the Alexnet are shown in Fig <ref>. To create a network for a new task, we use the feature extraction part of alexnet and replace new fully connected, softmax, and classifier layers in the classification part to learn the new classes.
Based on the transfer learning technique, a new AlexNet-based CNN is built to recognize dysarthric isolated words. This network extracts features from the input images through five convolutional layers, starting with an 11×11 convolution filter in the first layer. Max-pooling layers are placed after the convolutional layers to reduce training time, prevent overfitting, and reduce the number of parameters. The fully connected layer placed before the output layer has 4096 neurons; the softmax layer follows it, and a classification layer assigns the final class.
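As an illustration, the following minimal PyTorch sketch reproduces this transfer-learning setup: AlexNet pre-trained on ImageNet is loaded, its convolutional feature extractor is optionally frozen, and the last fully connected layer is replaced to output the 30 isolated-word classes. The freezing choice and the example call are illustrative assumptions, not a prescription from the original work.

import torch
import torch.nn as nn
from torchvision import models

def build_word_recognizer(num_classes: int = 30) -> nn.Module:
    # Load AlexNet with ImageNet weights; keep the convolutional part.
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    # Optionally freeze the feature extractor so only the new
    # classification head adapts to the small dysarthric dataset.
    for param in net.features.parameters():
        param.requires_grad = False
    # Replace the final 4096 -> 1000 layer with a 4096 -> num_classes one;
    # softmax is applied inside the cross-entropy loss during training.
    net.classifier[6] = nn.Linear(4096, num_classes)
    return net

model = build_word_recognizer(30)
x = torch.randn(1, 3, 227, 227)  # a batch with one gammatonegram image
logits = model(x)                # shape: (1, 30)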
§.§ Gammatonegram
It was mentioned earlier that the input of a convolutional neural network is an image. This study uses the gammatonegram to visually represent audio signals as input for the CNN. The gammatonegram representation of a given speech file is an image that depicts the amplitude or energy of the speech signal at different frequencies, along with the time of occurrence of audio events <cit.>.
The block diagram of the gammatonegram extraction steps is shown in Fig <ref>. The algorithm is similar to the spectrogram <cit.> but yields a more effective representation. In gammatonegram extraction, pre-emphasis uses a first-order filter. The human speech production system inherently produces high frequencies with a lower amplitude than low frequencies; this filter boosts the energy at high frequencies, making the speech more intelligible. Speech is a non-stationary signal; as a result, it cannot be modeled as a sum of sine and cosine functions, i.e., it cannot be transformed to the frequency domain with a single Fourier transform. However, the signal behaves quasi-stationarily over short periods of 20 to 30 milliseconds. For this reason, we divide the speech signal into rectangular frames with a length of 25 milliseconds.
In general, unwanted side lobes appear if the Fourier transform is taken over a rectangular window. For this reason, Hamming windowing is applied before the Fourier transform. This window is a bell-shaped function that tapers the edges of each frame and suppresses the side lobes. To compensate for the information lost at the frame edges, frames overlap by 10 milliseconds. The Fourier transform is then applied to each frame and its magnitude is extracted. Finally, the speech signal is weighted with the gammatone filter-bank.
The gammatone filter-bank, shown in Fig <ref>, has high resolution at low frequencies and low resolution at high frequencies. Multiplying the speech spectrum by the filter-bank and summing the outputs of all the filters yields the proposed gammatonegram.
The gammatonegram is an RGB color image and can serve as input to the convolutional neural network. As mentioned, this representation has higher resolution at low frequencies than the traditional spectrogram. An example of these images is shown in Fig <ref>. Compared with the spectrogram, the speech signal at low frequencies clearly has higher resolution in the gammatonegram, which can increase the discriminability between classes. The final image is saved at a size of 227×227×3 to match the input layer of AlexNet.
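A compact sketch of this pipeline is given below, assuming a 16 kHz sampling rate, 64 ERB-spaced channels, and a frequency-domain approximation of the 4th-order gammatone magnitude response; the filter count, the minimum centre frequency, and the pre-emphasis coefficient are our own illustrative choices.

import numpy as np
from scipy.signal import lfilter, stft

def erb(fc):
    """Equivalent rectangular bandwidth (Glasberg-Moore) in Hz."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatonegram(x, fs=16000, n_filters=64, fmin=50.0):
    # First-order pre-emphasis boosting the high-frequency energy.
    x = lfilter([1.0, -0.97], [1.0], x)
    # 25 ms Hamming-windowed frames with 10 ms overlap, magnitude STFT.
    f, t, spec = stft(x, fs=fs, window="hamming",
                      nperseg=int(0.025 * fs), noverlap=int(0.010 * fs))
    mag = np.abs(spec)
    # ERB-spaced centre frequencies; each row is the magnitude response of
    # a 4th-order gammatone filter, applied as a frequency-domain weighting
    # (an approximation to true time-domain gammatone filtering).
    centers = np.geomspace(fmin, 0.9 * fs / 2, n_filters)
    weights = np.stack([(1.0 + ((f - fc) / (1.019 * erb(fc))) ** 2) ** -2.0
                        for fc in centers])
    return weights @ mag  # (n_filters, n_frames), rendered as a color image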
§.§ Dataset
A dataset including 16 dysarthric speakers was collected and published by researchers at the University of Illinois <cit.>. These speakers have different severities and speak with intelligibility levels varying from 2% to 95%. The speaker information is reported in Table <ref>. The dataset includes 255 isolated words, comprising uncommon words, the radio alphabet, digits, computer commands, and common words. It was collected in three sessions, B1, B2, and B3, with eight microphones, at a sampling frequency of 16 kHz.
In this work, the speech files of the 16 dysarthric speakers for 30 isolated words are used. These 30 words include 9 digits, 19 computer commands, and 2 radio-alphabet words. The eight recordings of an utterance within a session are almost identical; as a result, the most reliable evaluation method is 3-fold cross-validation in which one session is separated from the other two, so that the excessive similarity between utterances of the same session does not cause an unnatural similarity between test and training data. In almost all experiments, the data of one session is used for training and the data of the other two sessions for testing. Note that the dataset also contains speech files of 12 normal speakers, which remain unused.
§.§ Voice Activity Detector
Silence negatively affects speech processing systems; therefore, VAD algorithms are used in almost all of them. For dysarthric people, the inability to pronounce some syllables, even in the middle of a word, causes several pauses during an utterance, so applying VAD can improve the performance of speech processing systems for these speakers. We use the GMMVAD algorithm <cit.> before representing the speech signal as a gammatonegram or spectrogram. This process reduces intra-class variability and can improve the system's efficiency. An example of the GMMVAD process on an audio file, with the gammatonegram representation before and after applying VAD, is shown in Fig <ref>.
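As a rough stand-in for the GMM-based VAD (whose exact features follow <cit.>), the sketch below fits a two-component Gaussian mixture to frame log-energies and keeps only the frames assigned to the higher-energy component; it illustrates the silence-dropping step, not the published algorithm itself.

import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_vad(x, fs=16000, frame_ms=25):
    frame = int(fs * frame_ms / 1000)
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame)
    # Log-energy per non-overlapping frame, with a floor for silence.
    log_e = np.log(np.sum(frames ** 2, axis=1) + 1e-10).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(log_e)
    # The component with the larger mean energy is taken as "speech".
    speech = gmm.predict(log_e) == np.argmax(gmm.means_.ravel())
    return frames[speech].ravel()  # speech-only signal, silences removed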
§.§ Multi-Network in Cascade Structure
Multi-network architectures have been used in many dysarthric speech processing works <cit.>, but in none of them is the assignment of the audio file to the appropriate network done automatically: the user must determine which network, or which category, applies. In the proposed multi-network cascade architecture, the proposed two-class automatic intelligibility assessment system automatically activates one of the networks for speech recognition.
This architecture is shown in Fig <ref>. In the first step, the intelligibility assessment system classifies the incoming speech into two categories, high and low. In the second step, two ASR systems are trained, one for each intelligibility category. In this structure, the intelligibility assessment network activates the corresponding CNN for speech recognition based on the intelligibility of the input speech file, as sketched below.
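A minimal sketch of the cascade inference, assuming all three models take the same 227×227×3 gammatonegram tensor and that class index 0 denotes the high-intelligibility group (our own convention):

import torch

@torch.no_grad()
def cascade_recognize(gammatonegram, intel_net, asr_high, asr_low):
    # Stage 1: the two-class intelligibility CNN routes the utterance.
    intel_class = intel_net(gammatonegram).argmax(dim=1).item()
    asr_net = asr_high if intel_class == 0 else asr_low
    # Stage 2: the selected ASR CNN recognizes the isolated word.
    return asr_net(gammatonegram).argmax(dim=1).item()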
§.§ Evaluation Criteria
Different evaluation criteria are used to assess the performance of speech recognition systems. In this work, the Word Recognition Rate (WRR) is used, which is the number of isolated words correctly recognized divided by the total number of test utterances.
In the proposed speaker identification systems, the network's decision is made for each utterance of an isolated word. For these systems we therefore compute the number of correct decisions relative to the total number of audio files.
In the intelligibility assessment task, the proposed system classifies each audio file into the predetermined categories, with no dependence on the speaker's identity or the speech content. The decision of this system is also made per utterance.
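All three evaluations therefore reduce to per-utterance accuracy; a one-line implementation:

def word_recognition_rate(predictions, labels):
    """Correct per-utterance decisions over all test utterances, in %."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)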
§ EXPERIMENTAL RESULTS
In these experiments, we evaluate the performance of the proposed system based on the gammatonegram representation and the pre-trained CNN in three modes: recognition of 30 dysarthric isolated words, dysarthric speaker identification for 16 speakers, and speech intelligibility assessment in two-class and three-class modes, together with the corresponding speech recognition in a cascade structure.
Having enough data is important for training convolutional neural networks. Therefore, we first re-train the base AlexNet to recognize dysarthric speech over all 255 word classes. The goal is not to achieve high performance per se; rather, we expose the network to a large amount of data so that its feature extraction part is trained appropriately with gammatonegram and spectrogram images. This new CNN is the pre-trained network used to build the systems in all the proposed scenarios.
Before evaluating the proposed systems, we answer two questions about the proposed method: 1) How does this system perform compared to a traditional HMM-based system? 2) Does the proposed gammatonegram perform better than the classical spectrogram?
§.§ Initial Experiments
Before the era of deep neural networks, the HMM was one of the most popular methods for speech recognition <cit.>. Therefore, in an initial dysarthric speech recognition scenario, we compare the overall performance of a system based on the proposed CNN and gammatonegram with a system based on HMM-GMM and conventional Mel Frequency Cepstral Coefficient (MFCC) features. In this comparison, the training and test data are identical so that the comparison is a fair benchmark.
In addition to the classification method, we need to examine the efficiency of the proposed representation. Therefore, the proposed gammatonegram is compared with the conventional spectrogram. To this end, two systems are built separately under the same conditions, with identical numbers of classes, amounts of training and test data, network structures, and learning parameters.
All three systems are trained on the 30 dysarthric isolated words. The HMM-GMM system has three states and four Gaussians per state. The MFCC features, energy, and first- and second-order derivatives are extracted from the audio signal, totaling 39 features per frame. The proposed CNNs for gammatonegram and spectrogram are trained starting from the introduced pre-trained network. In all three systems reported in Table <ref>, B3 is used for training and B1 and B2 for testing in speaker-dependent mode.
The HMM system is implemented using the Murphy Toolbox <cit.>. As can be seen from Table <ref>, the HMM-based system achieved 66.23% overall recognition accuracy, a poor performance compared to the other two systems. The CNN-based systems show acceptable performance despite the limited training data, and the system using the gammatonegram representation performs best, reaching a 91.29% recognition rate. These results confirm that the proposed gammatonegram representation and end-to-end CNN classification are the right choices for dysarthric speech processing.
§.§ Automatic Speech Recognition
For disabled people, a smart home operated by voice commands can be very helpful, and the speech signal is one of the most natural ways to issue such commands. In this case, by analyzing the content of the speech file, the ASR system identifies the command word defined for the system. Here the information related to the speech content matters, not the speaker's identity. Such a system can generally operate in two modes: Speaker Independent (SI) and Speaker Dependent (SD). In SD mode, information from specific speakers is used in the training phase and the network adapts to these speakers; the system is more accurate because it is familiar with speaker-related characteristics. In SI mode, no information about the test speakers is available in the training phase. The performance of ASR systems usually decreases in SI mode because unseen speaker characteristics affect recognition.
In this section, the dysarthric speech recognition systems are evaluated in both modes. In the SD simulation, a single CNN is trained for all speakers, and the speech files of all speakers are tested with this network. In SI mode, each test speaker's speech files are left out and the system is trained on the speech of the other speakers; the simulation is repeated for all 16 speakers, training a specific SI network for each. Table <ref> reports the results of the proposed speech recognition systems per speaker.
In these experiments, the CNNs are trained for 20 iterations with a batch size of 32. The ASR system in SD mode achieved an average WRR of 91.29%, about 6.8 percentage points better than the SI mode with 84.50% WRR. Analyzing the per-speaker results shows that the system performs worst for speakers with high severity.
§.§ Automatic Speaker Identification
In scenarios such as smart homes, a voice key is beneficial for disabled people: for tasks such as unlocking the house's front door or granting permission to a set of voice commands, speaker identification can control access. Designing an efficient system in this area can therefore be helpful. Two modes, Text Dependent (TD) and Text Independent (TI), are evaluated. The CNNs are trained with about 5 minutes of speech per speaker, and the output layer has 16 classes, each representing one speaker.
In TD mode, the texts uttered at test and training time are the same; in other words, the dysarthric person repeats a specific password in both phases. The system is tested with sessions B1 and B2 for the 30 isolated words. In TI mode, the speech content used for training and testing must differ; a person can use any word as a voice password, and the system recognizes the person's identity from speech content outside the training data. For the TI test, the words CW1 to CW50 of the UA dataset were used, which were not part of the system training. The systems are trained for 30 iterations with a batch size of 32. The results for both modes are reported in Table <ref>: the systems reach 87.74% accuracy in TD mode and 80.70% in TI mode.
§.§ Cascade System for Speech Recognition
Automatically processing disabled people's speech to determine their intelligibility level is useful for many purposes, among them automatic diagnosis of the disease level and tracking the progression of the disability by periodically examining their speech. Moreover, automatic intelligibility assessment can improve the efficiency of automatic speech recognition and speaker identification systems in multi-network mode. In this scenario, dysarthric speakers utter speech commands without any knowledge of the multi-network structure or even the severity level of their disability; the automatic intelligibility assessment examines the speech and assigns it to the corresponding network according to the intelligibility level.
For this purpose, categories are usually defined according to the intelligibility percentage. In this study, given the efficiency of the system and the amount of available data, the speakers are divided into three-class and two-class groupings based on intelligibility level, and two separate networks are trained for intelligibility recognition. An interesting point in this scenario is that the speech of dysarthric individuals is sometimes accompanied by unusual silences, even in the middle of a word; this phenomenon can play an essential role in determining the intelligibility level of a dysarthric person's speech. For this reason, the systems are trained and evaluated without VAD, processing all speech files in their original form. The CNNs are trained for 20 iterations with a batch size of 32.
The results of the three-class and two-class networks are reported in Table <ref>. In the three-class mode, speakers are classified into three categories (high, mid, and low), whose intelligibility ranges are shown in Table <ref>. In the two-class mode, the high and mid categories are combined, because the data of these two classes are highly correlated, while the low category remains unchanged. Both systems are trained in SD mode, in which session B3 is used for training and sessions B1 and B2 for testing. According to the results, performance improves in the two-class mode: the average intelligibility recognition accuracy using CNN and gammatonegram reaches 96.47% in the two-class mode and 92.74% in the three-class mode.
Part 2 of Table <ref> provides the results of the multi-network ASR in a cascade structure with the intelligibility assessment system, reported for the two-class and three-class modes. According to these results, the performance of the speech recognition system in the dual-network configuration improves over the single-network mode, reaching 92.3% accuracy in SD mode. This gain arises because each network focuses on a narrow range of speech intelligibility, i.e., on less intra-class variation.
§ CONCLUSION
In this work, we introduced the gammatonegram as an effective representation method and utilized transfer learning to build CNN-based dysarthric speech processing systems. The proposed systems were evaluated in three scenarios: speech recognition, speaker identification, and intelligibility assessment. Before these experiments, two hypotheses were checked: 1) CNNs outperform the traditional HMM, and 2) the gammatonegram provides a more detailed representation than the typical spectrogram. Comparing the three systems under identical conditions showed that the system based on CNN and gammatonegram performs best, which justified using the proposed method in all subsequent simulations.
The UA dysarthric speech dataset and the GMMVAD algorithm for silence removal were used for system training and evaluation. The CNN used in this work is based on the well-known AlexNet as the initial network, re-trained on 255 audio commands so that the feature extraction part of the network is trained with a large volume of gammatonegram images. This network served as the pre-trained model from which the systems in all scenarios were built using the transfer learning technique. Throughout, only B3 was used for system training, and B1 and B2 were used for evaluation.
The performance of the proposed systems in the different modes is shown per speaker in Figs <ref> and <ref> for easier analysis. In these charts, the speakers are sorted from the lowest to the highest speech intelligibility, as reported in the dataset. The speech recognition system was developed and evaluated in the SD and SI modes and achieved 91.29% accuracy in SD mode. The second system was built for speaker identification, using about 5 minutes of speech per speaker in the training phase; in the best case, it achieves 87.74% accuracy in TD mode. The third system was trained for intelligibility assessment and evaluated in the two-class and three-class modes, reaching 96.47% and 92.74% accuracy, respectively.
Finally, we designed an automatic multi-network system for speech recognition, in which the input speech is automatically routed to a speech recognition network according to the intelligibility percentage. In this case, the system achieved an accuracy of 92.3% using a cascade architecture with the two-class intelligibility assessment, an improvement over the single-network mode.
Future studies could improve these results through a cascade implementation for the speaker identification task and data augmentation techniques such as adding different noises and music to the speech files.
Diffractive lensing of nano-Hertz gravitational waves emitted from supermassive binary black holes by intervening galaxies

Hao Ma, Youjun Lu, Zhiwei Chen, Yunfeng Chen
Pulsar timing array (PTA) experiments are expected to detect nano-Hertz gravitational waves (GWs) emitted from individual inspiralling supermassive binary black holes (SMBBHs). The GW signals from a small fraction of these SMBBHs may be diffractively lensed by intervening galaxies. In this paper, we investigate the diffractive lensing effects on the continuous GW signals from lensed SMBBHs and estimate the number of such signals detectable by PTAs, such as the Chinese PTA (CPTA) and the Square Kilometer Array (SKA) PTA. We find that the amplitude of the lensed GW signals may be amplified only by a factor of ∼ 1.01-1.14 (16%-84% range) and the phase of the signals may shift somewhat due to lensing, significantly different from the strongly lensed high-frequency GW signals from compact binary mergers in the geometric optics regime. We estimate that ∼ 0.01% of all nano-Hertz GW signals from individual SMBBHs detected by future PTA experiments are lensed by foreground galaxies (i.e., up to ∼ 106 for CPTA and up to ∼ 289 for SKA-PTA). However, the lensed nano-Hertz GW signals are difficult to distinguish from those without lensing by PTA observations alone. We further discuss the possibility of identifying the lensed nano-Hertz GW signals from SMBBHs via electromagnetic detection of their host galaxies or active galactic nuclei.
gravitational lensing: strong – gravitational wave – galaxies: nuclei – (galaxies:) quasars: supermassive black holes – relativistic process – (stars:) pulsars: general
§ INTRODUCTION
Supermassive binary black holes (SMBBHs) can form in galactic centers as a natural consequence of frequent galaxy mergers <cit.>. At the late stages of their inspiral (e.g., with periods around a year or so), vast amounts of gravitational wave (GW) radiation are emitted in the nano-Hertz (nHz) frequency band (10^-9∼ 10^-7 Hz), making them one of the main targets of Pulsar Timing Arrays (PTAs). The ongoing International PTA <cit.>, composed of the Parkes PTA <cit.>, the European PTA <cit.>, and the North-American Nanohertz Observatory for GWs <cit.>, is regularly updating its timing data set by monitoring a set of millisecond pulsars (MSPs) <cit.>. Recently, new PTA experiments, including the Chinese PTA <cit.> and the Indian PTA <cit.>, have joined the campaign for the detection of GW signals, and the MeerKAT PTA has released its first 2.5-yr data <cit.>. Furthermore, the next-generation Very Large Array <cit.> and the future PTA based on the Square Kilometer Array <cit.> telescope will greatly improve the sensitivity of PTA observations by monitoring hundreds to a thousand MSPs with very high timing precision.
The nHz GW signals from SMBBH inspirals can be strongly gravitationally lensed by foreground galaxies, like any other GW sources, including mergers of stellar binary black holes and binary neutron stars <cit.>. Since nHz GW signals have long wavelengths, comparable to the Schwarzschild radii of the lens galaxies, the wave effect may be important in the gravitational lensing of these signals. As pointed out by <cit.>, the geometrical optics approximation breaks down and the wave effect must be taken into account if the lens mass M_L ≲ 10^8 M_⊙/(f_GW/mHz). For typical PTA sources (10^-9∼ 10^-7 Hz) lensed by foreground galaxies, the wave effect plays a significant role when the lens mass M_L ≲ (10^12∼ 10^14) M_⊙. In this case, a lensed event has only one image rather than the multiple images expected under the geometrical optics approximation, and the path of the lensed GW signal from the source to Earth differs from the paths of the multiple lensed electromagnetic (EM) images of the same event <cit.>. Understanding this multi-messenger difference in propagation paths (and hence in time delays) may help reconstruct the mass distribution of the lens galaxy <cit.>. In addition, the first detection of lensed nHz GWs along with their EM counterparts would at the same time provide another test of Einstein's general relativity.
In this paper, we investigate the lensed GW signals from inspiralling cosmic SMBBHs in the wave optics regime, as well as their lensed host galaxies, based on the singular isothermal sphere (SIS) lens model. We estimate the number of SMBBHs whose GW signals can be detected by future PTA experiments with different configurations, and the detectability of their associated lensed host galaxies by the future sky survey of the Nancy Grace Roman Space Telescope <cit.>. We note that <cit.> recently studied strongly lensed GW signals from SMBBH inspirals under the geometrical optics approximation, without considering the wave effect, and found that the nHz GW signals from about 9-26 lensed SMBBHs may be detected, with high chances of resolving their EM counterparts.
This paper is organized as follows. In Section <ref>, we describe the mock SMBBH sample generated from a model of the cosmic formation and evolution of SMBBHs in galaxy merger remnants <cit.>. We then briefly introduce the wave effect in gravitational lensing of the nHz GW signals from inspiralling SMBBHs and investigate how it affects the GW waveforms of lensed cosmic SMBBHs in Section <ref>. Estimates of the detectable number of lensed nHz GW signals are given in Section <ref>. The detectability of the corresponding lensed host galaxies and/or EM counterparts is discussed in Section <ref>. Finally, conclusions and discussion are given in Section <ref>. Throughout the paper, we adopt the ΛCDM cosmology with (h, Ω_ m, Ω_Λ) = (0.7, 0.3, 0.7).
§ GW EVENTS FROM SMBBH INSPIRALS
§.§ GW samples
To generate a mock sample of cosmic SMBBHs, we adopt the formation and evolution model of cosmic SMBBHs from <cit.>, which is in turn based on the study of the dynamical evolution of SMBBHs in galaxy merger remnants by <cit.>. In the model, the cosmic distribution of SMBBHs is described by Φ_BBH(M_BH, q_BH, f_GW, z), defined so that Φ_BBH(M_BH, q_BH, f_GW, z) dM_BH dq_BH df_GW represents the comoving number density of SMBBHs at redshift z with total mass in the range M_BH→ M_BH+dM_BH, mass ratio in the range q_BH→ q_BH+dq_BH, and GW frequency in the range f_GW→ f_GW+df_GW; the distribution function can be obtained through
Φ_BBH (M_BH, q_BH, f_GW, z)
= ∫_0^t d t^'∫ d M_* ∫ d q_* n_*(M_*, z^') ℛ_*(q_*, z^'| M_*)
× p_BH(M_BH, q_BH| M_*, q_*, z^')
× p_f(f_GW, t-t^'| M_*, q_*, M_BH, q_BH, z^')
× P_intact (z, z^'| M_*).
In the above equation, n_*(M_*,z') is the stellar mass function of the spheroidal components of the merger remnants (hereafter denoted as `bulges' for simplicity), so that n_*(M_*,z')dM_* represents the comoving number density of bulges at redshift z' with mass in the range M_*→ M_*+dM_*; ℛ_*(q_*,z'|M_*) is the merger rate per bulge, so that ℛ_*(q_*,z'|M_*)dq_* represents the average number of mergers with mass ratio in the range q_*→ q_*+dq_* within time t'→ t'+dt' for a descendant bulge with mass M_*, where t' is the cosmic time at redshift z'. p_BH(M_BH,q_BH|M_*,q_*,z') is a probability distribution defined so that p_BH(M_BH,q_BH|M_*,q_*,z') dM_BH dq_BH represents the probability that a host merger at redshift z' characterized by (M_*,q_*) leads to an SMBBH merger characterized by (M_BH, q_BH), with M_BH the total mass of the SMBBH, determined by the MBH–host galaxy scaling relation. p_f(f_GW,τ| M_*, q_*, M_BH, q_BH, z') is a probability distribution defined so that p_f(f_GW,τ|M_*,q_*, M_BH, q_BH, z')df_GW represents the probability that a host merger characterized by parameters (M_*,q_*,M_BH,q_BH) at redshift z' leads to an SMBBH emitting GW in the frequency range f_GW→ f_GW+df_GW at a time τ after the host merger[Note that p_a(a, τ | M_*, q_*, M_BH, q_BH, z') was used by <cit.> since they pursued the distribution of the semimajor axis a instead of the GW frequency f_GW as we do here. The two quantities a and f_GW can be converted to each other through f_GW = π (1+z) (G M_BH/a^3)^1/2 assuming circular orbits.]; it encodes the different dynamical processes the SMBBHs undergo from galaxy mergers to their own mergers (detailed considerations can be found in <cit.>). P_intact(z,z'|M_*) represents the probability that the host merger remnant does not experience a major merger with another galaxy and is not accreted by a bigger galaxy to become its satellite between redshifts z and z'.
In Equation (<ref>), the mass function and merger rate of the bulges can be derived from quantities of their host galaxies, i.e.,
n_* (M_*, z^') ℛ_*(q_*, z^'| M_*)
= ∫ d M_gal∫ d q_gal n_gal(M_gal, z^') ℛ_gal(q_gal, z^'| M_gal)
× p_*(M_*, q_* | M_gal, q_gal, z^')
where n_ gal(M_ gal,z) and ℛ_ gal(q_ gal,z|M_ gal) are the galaxy stellar mass function and the merger rate per galaxy, respectively; p_*(M_*,q_*|M_ gal,q_ gal,z) is the probability distribution of the bulge total mass and mass ratio given the host galaxy total mass and mass ratio. All these galaxy quantities are defined in a similar way as the bulges.
We generate a mock sample of cosmic SMBBHs with source parameters (z, M_BH, q_BH, f_GW) based on Equation (<ref>). We restrict the sample SMBBHs to the redshift range 0.2 ≤ z ≤ 3, the total mass range 10^7 ≤ M_BH/M_⊙ ≤ 10^11, the mass ratio range q_BH ≥ 0.01, and the GW frequency range 10^-9 ≤ f_GW/Hz ≤ 10^-7. We discuss the parameter distributions of these mock SMBBH systems and of the lensed ones in Section <ref>. Throughout this study, we ignore the orbital eccentricity of the SMBBHs for simplicity, since SMBBHs generally do not form with high eccentricities at the end of the dynamical friction stage <cit.> and the eccentricity decays rapidly due to GW radiation <cit.>. Nevertheless, the eccentricity of SMBBHs is still debated: the scattering of ambient stars and interactions with circumbinary disks may also lead to some eccentric SMBBHs <cit.>.
§.§ SNR
When a GW signal passes across the propagation path of the pulses from a millisecond pulsar (MSP) to the Earth, fluctuations in the pulse times of arrival (ToAs) are induced, and the GW signal can therefore be extracted from the pulsar timing residuals. Considering a single continuous GW source located in a direction Ω̂, the induced timing residual on the ToAs can be calculated as <cit.>
s(t, Ω̂)=F^+(Ω̂) Δ A_+(t)+F^×(Ω̂) Δ A_×(t).
In the above equation, the antenna pattern functions F^+(Ω̂) and F^×(Ω̂) are given by
F^+(Ω̂)= 1/4(1-cosθ){(1+sin ^2δ) cos ^2δ_pcos[2(α-α_p)].
. -sin 2 δsin 2 δ_pcos(α-α_p)+cos ^2δ(2-3 cos ^2δ_p)},
F^×(Ω̂)= 1/2(1-cosθ){cosδsin 2 δ_psin(α-α_p).
. -sinδcos^2δ_psin[2(α-α_p)]},
with (α,δ) and (α_ p,δ_ p) denoting the sky locations of the SMBBH and pulsar, respectively. The angle between the source and pulsar directions with respect to the observer is denoted as θ, and cosθ=cosδcosδ_pcos(α-α_p)+sinδsinδ_p.
The GW-induced timing residual s(t, Ω̂) receives contributions from both the Earth term A_{+,×}(t) and the pulsar term A_{+,×}(t_p), with Δ A_{+,×}(t)=A_{+,×}(t)-A_{+,×}(t_p), where t_p is the time when the GW signal passes the pulsar, given by t_p = t-D_p(1-cosθ)/c, with D_p the distance to the pulsar.
A_{+,×}(t) is defined as A_{+,×}(t)= h_{+,×}(t) / 2 π f_ GW(t), and h_{+,×}(t) can be described as
h_+ = h_0{(1+cos^2 ι) cos 2ψsin[ϕ (t)+ϕ_0]
+ 2cosιsin 2ψcos[ϕ(t)+ϕ_0] },
h_× = h_0{(1+cos^2 ι) sin 2ψsin[ϕ (t)+ϕ_0]
- 2cosιcos 2ψcos[ϕ(t)+ϕ_0] },
with ι representing the inclination angle of the normal of the binary orbit with respect to the line of sight. For simplicity, we set the polarization angle of GW ψ=0 and the initial phase angle ϕ_0=0 in the following calculations, which does not affect the main results. The GW amplitude h_0 here is determined by the redshifted chirp mass ℳ_ c^z = (1+z)ℳ_ c = (1+z)M_ BH q_ BH^3/5/(1+q_ BH)^6/5, the GW frequency in the observer rest frame f_ GW, and the luminosity distance of GW source D_ L as
h_0 = 2 (G ℳ_c^z)^5 / 3/c^4(π f_ GW)^2 / 3/D_L.
Once the initial GW observational frequency f_ GW,0 and chirp mass ℳ_c^z are given, the GW frequency f_ GW(t) and phase ϕ(t) at time t can be expressed as
f_ GW(t)=[f_ GW,0^-8 / 3-256/5π^8 / 3(G ℳ_c^z/c^3)^5 / 3 t]^-3 / 8,
ϕ(t)=1/16(G ℳ_c^z/c^3)^-5 / 3{(π f_ GW,0)^-5 / 3-[π f_ GW(t)]^-5 / 3}.
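As a concreteness check, the sketch below transcribes the amplitude, frequency-evolution, and phase expressions above directly into Python (SI units); the example values (an equal-mass 10^9 M_⊙ binary at z = 1 with f_GW,0 = 10^-8 Hz and D_L ≈ 6.6 Gpc) are illustrative only.

import numpy as np
from scipy.constants import G, c, parsec

M_SUN = 1.989e30  # kg

def chirp_mass_z(m1_sun, m2_sun, z):
    """Redshifted chirp mass in kg (component masses in solar masses)."""
    mc = (m1_sun * m2_sun) ** 0.6 / (m1_sun + m2_sun) ** 0.2
    return (1.0 + z) * mc * M_SUN

def h0(mcz, f_gw, dl_mpc):
    """GW amplitude of the expression above."""
    dl = dl_mpc * 1e6 * parsec
    return 2.0 * (G * mcz) ** (5/3) / c**4 * (np.pi * f_gw) ** (2/3) / dl

def f_gw_t(f0, mcz, t):
    """Observed GW frequency at time t after the initial epoch."""
    k = (256.0 / 5.0) * np.pi ** (8/3) * (G * mcz / c**3) ** (5/3)
    return (f0 ** (-8/3) - k * t) ** (-3/8)

def phase_t(f0, mcz, t):
    """GW phase accumulated by time t."""
    pre = (G * mcz / c**3) ** (-5/3) / 16.0
    return pre * ((np.pi * f0) ** (-5/3)
                  - (np.pi * f_gw_t(f0, mcz, t)) ** (-5/3))

mcz = chirp_mass_z(1e9, 1e9, z=1.0)
t = np.linspace(0.0, 10 * 3.156e7, 1000)        # 10 yr of observation
strain = h0(mcz, f_gw_t(1e-8, mcz, t), 6600.0)  # D_L(z=1) ~ 6.6 Gpc (approx.)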
Finally the SNR of a GW signal can be obtained by
ϱ^2=∑_j=1^N_p∑_i=1^N[s_j(t_i)/σ_t, j]^2,
where N_p is the total number of MSPs in a given PTA experiment, N is the total number of timing data points, and σ_t,j is the rms timing precision of the j-th MSP. In this paper, we consider four different PTA configurations, including two possible CPTA and two possible SKA-PTA configurations; see Table <ref> for the detailed settings. As for the pulsar locations (α_p, δ_p) and distances D_p, we randomly select N_p pulsars with distance D_p < 5 kpc from the Australia Telescope National Facility (ATNF) Pulsar Catalogue[<https://www.atnf.csiro.au/research/pulsar/psrcat/>] and assume that they all have the same timing precision σ_t. A uniform detection cadence Δt is assumed during the observation time. Figure <ref> shows the locations of 1000 pulsars on the sky map, in which the size of the circles indicates the relative distances of these pulsars. The SNR of the GW signal from each SMBBH in our sample can thus be obtained via Equation (<ref>) for the different PTA configurations, and a source is labeled as "detectable" if its SNR exceeds a given threshold ϱ_0. In the following analysis, we use three different thresholds, ϱ_0 = 3, 5, and 10.
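Given residual time series s_j(t_i) built from the expressions above, the SNR sum is a short reduction; the array shapes below are our assumed bookkeeping.

import numpy as np

def pta_snr(residuals, sigma_t):
    """residuals: (N_p, N) array of s_j(t_i); sigma_t: (N_p,) rms values."""
    return np.sqrt(np.sum((residuals / sigma_t[:, None]) ** 2))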
§ WAVE EFFECT OF GRAVITATIONAL LENSING
§.§ Wave effect
The amplification factor F_wave(f) is a complex number and fluctuates as a function of frequency due to the wave effects of diffraction and interference <cit.>. Under the thin-lens assumption, the amplification factor in the wave optics regime is given by <cit.>
F_wave(f)=D_ sξ_0^2/D_ l D_lsf(1+z_ l)/i∫ d^2xexp[2 π i f T_ d(x, y)],
where D_s, D_l, and D_ls are the angular diameter distances to the source, to the lens, and between the lens and the source, respectively, and f is the GW frequency in the observer's frame. The normalization length ξ_0 is set to the Einstein radius of the SIS lens model, i.e., ξ_0=4 π (σ_v/c)^2 D_l D_ls / D_s, where σ_v is the velocity dispersion. The factor (1+z_l) accounts for time dilation, with z_l the redshift of the lens. The arrival time delay along the deflected propagation path, T_d(x, y), is given by
T_ d(x, y)=D_ sξ_0^2/D_ l D_ l s(1+z_ l)[1/2|x-y|^2-ψ(x)+ϕ_ m(y)],
where x is the dimensionless impact parameter in the lens plane and y is the dimensionless source location in the source plane; ψ(x) is the deflection potential and ϕ_ m is added to set the minimal value of T_ d(x, y) as 0.
Under the assumption of the SIS lens model, (x, y) reduce to (x,y) and F_wave(f) reduce to
F_wave(w)=-i w e^i w y^2 / 2∫_0^∞ d x x J_0(w x y) e^i w[1/2 x^2-x+ϕ_ m(y)]
where J_0 is the Bessel function of the zeroth order, w = 8π M_ Lz f is the dimensionless GW frequency, in which M_ Lz is the lens mass included in the Einstein radius defined by M_ Lz= 4π^2σ_ v^4 (1+z_ l) D_ l D_ ls/(G c^2 D_ s). Here we set ϕ_ m(y)=y+1/2, which makes the minimum value of the arrival time delay to be zero.
<cit.> presented explicit details of multiple methods to compute the diffraction integral above. Here we adopt the 10th-order asymptotic expansion method to calculate F_wave(f). Setting z = x^2/2, we obtain
F_wave(w) = C ·∫_0^∞ d z e^i w z g(z)
= C ·[∫_0^b d z e^i w z g(z) + e^iwb∑_n=1^10(-1)^n/(iw)^n∂^n-1 g(z)/∂ z^n-1|_z = b]
where
C = -iwe^iw(y^2/2+y+1/2)
g(z)= e^-iw√(2z) J_0(wy√(2z))
With larger b, Equation (<ref>) becomes more accurate and converges faster, but requires more computation time. According to <cit.>, b=10^4 is a suitable trade-off between accuracy and computation time, and this value is adopted in our calculation of F_wave.
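A low-order numerical sketch of this computation is given below: the truncated integral on [0, b] is evaluated with Simpson quadrature and only the n = 1 boundary term of the asymptotic tail is kept, whereas the calculation in this paper retains terms up to n = 10; the sketch therefore trades accuracy for brevity and its convergence should be verified before use.

import numpy as np
from scipy.integrate import simpson
from scipy.special import j0

def f_wave_sis(w, y, b=1e4, n_grid=1_000_001):
    """Diffractive SIS amplification factor, low-order approximation."""
    z = np.linspace(0.0, b, n_grid)
    r = np.sqrt(2.0 * z)
    g = np.exp(-1j * w * r) * j0(w * y * r)        # g(z) defined above
    integral = simpson(np.exp(1j * w * z) * g, x=z)
    tail = -np.exp(1j * w * b) * g[-1] / (1j * w)  # n = 1 term of the tail
    prefactor = -1j * w * np.exp(1j * w * (0.5 * y**2 + y + 0.5))
    return prefactor * (integral + tail)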
Figure <ref> shows the amplitude |F_wave| (middle panel) and phase θ_F (bottom panel) of the amplification factor as functions of the dimensionless frequency parameter w, as well as the distribution of w for the mock lensed cosmic SMBBHs. As seen from this figure, the majority of the lensed GW SMBBH sources detected in the PTA band have small w, e.g., ≲ 0.2, peaking around w ∼ 6 × 10^-3. The lensed GW signals of most sources are amplified only by a factor |F_wave| ∼ 1.04-1.14 (the 16%-84% range of the mock BBHs), and few of the lensed GW sources are amplified by a factor > 1.5. The bottom panel of Figure <ref> shows the phase θ_F of the amplification factor, suggesting that the lensing-induced phase shift of the waveform is typically in the range of ∼ -0.04 to -0.12 rad (the 16%-84% range of θ_F).
Both the amplitude and the phase of the amplification factor for most lensed nHz GW signals are insensitive to the source location because of the small w (≲ 0.2). This is totally different from the geometrical optics regime, in which the magnification factor depends strongly on the source location y, e.g., μ_+ = (1+y)/y and μ_- = (1-y)/y for the SIS lens model. Note that, in accordance with previous papers, we use μ to describe the amplification of lensed images in the geometrical optics regime and F_wave(f) to describe the amplification in the wave optics regime. For w ≳ 1, where F_wave(f) asymptotically converges to the geometrical optics approximation, F_wave(f) and μ are related by |F_wave(f)|^2 = |μ_+| + |μ_-| + 2|μ_+μ_-|^1/2 sin(2 π f T_d) for y < 1, while μ_- = 0 for y ≥ 1. Figure <ref> shows the magnification factors for the mock lensed sources in the geometrical optics regime (blue symbols for μ_+ and orange symbols for μ_-) and the amplification factor in the wave optics regime (|F_wave(f)|^2), assuming the SIS model for the lens. Apparently, the magnification in the wave optics regime is substantially different from that in the geometrical optics regime. In the wave optics regime, |F_wave(f)|^2 lies in the range ∼ 1-2, while in the geometrical optics regime μ_+ and μ_- lie in the ranges ∼ 2-40 and ∼ 10^-2-30, respectively. For the same lensed source, μ_+ is always larger than |F_wave(f)|^2 and can reach magnifications up to several tens in extreme cases near the caustic; μ_- is larger than 1 if y < 0.5 and smaller than 1 if 0.5 < y < 1 (the lensed image is demagnified); and |F_wave(f)|^2 is always only slightly larger than 1. These results demonstrate again the importance of taking the wave effect into account in the detection of lensed nHz GW signals from SMBBHs.
§.§ Lensed waveforms
For the GW signal from an inspiralling SMBBH on a circular orbit, the waveform in the time domain is given by Equations (<ref>) and (<ref>). To determine how the GW waveform is changed by gravitational lensing in the wave optics regime, we calculate some typical GW waveforms and the corresponding lensed waveforms. The original waveform without lensing is represented by h_+ for simplicity (the real waveform is a combination of h_+ and h_×). The lensed GW waveform depends strongly on the GW frequency and the total observation duration, as shown in Figure <ref>. First, we obtain the GW waveform in the frequency domain by Fourier transforming the original time-domain waveform. We then multiply this transformed waveform by F_wave(w) according to its frequency. Finally, we obtain the lensed waveform in the time domain by taking the inverse Fourier transform of the amplified frequency-domain waveform.
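The procedure can be sketched as follows, reusing the f_wave_sis routine above; the conversion w = 8πG M_Lz f/c^3 restores SI units in the dimensionless frequency, and the per-bin loop is adequate for the short, sparsely sampled signals considered here.

import numpy as np
from scipy.constants import G, c

def lens_waveform(h_t, dt, m_lz_kg, y):
    """Apply F_wave bin by bin in the frequency domain and transform back."""
    hf = np.fft.rfft(h_t)
    freqs = np.fft.rfftfreq(len(h_t), d=dt)
    amp = np.ones(hf.shape, dtype=complex)
    for i, f in enumerate(freqs):
        if f == 0.0:        # leave the DC bin untouched
            continue
        w = 8.0 * np.pi * G * m_lz_kg * f / c ** 3
        amp[i] = f_wave_sis(w, y)
    # Hermitian symmetry of rfft/irfft keeps the output strain real.
    return np.fft.irfft(amp * hf, n=len(h_t))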
Figure <ref> shows the lensed GW waveforms of three example SMBBHs in the time domain. For these SMBBHs, we set equal component masses m_1=m_2=10^9 M_⊙, redshift z=1, inclination angle ι = 0, GW polarization angle ψ = 0, source location fixed at y=0.1 in the lens plane, and lens redshift z_l=0.5, but different initial GW frequencies f_GW,0=10^-9 Hz (top panel), 5 × 10^-9 Hz (middle panel), and 5 × 10^-8 Hz (bottom panel). The original and lensed waveforms for these three cases are shown by black solid and blue dashed lines, respectively. The orange dashed lines show the scaled lensed waveforms, where the lensed and original waveforms are matched at the initial point. All the waveforms are nearly sinusoidal, and the scaled lensed waveforms match the original waveforms almost exactly, as the frequency evolution within a 10-yr observation is insignificant and can be neglected. This indicates that the lensing information may be degenerate with the SMBBH parameters (e.g., m_1, m_2, z, ι, ψ).
This feature of nHz GW signals is straightforward to understand. Unlike high-frequency GW signals (e.g., GWs emitted from mergers of compact binaries), which show rapid frequency evolution during the observation (≲ 100 s), the typical frequency of GW signals detected by PTAs is in the range 10^-9-10^-7 Hz and the observation lasts a decade or even several decades. The frequency evolution within the observation duration is thus tiny according to Equation (<ref>), and the GW signal is nearly monochromatic over the whole observation period. For instance, the period of a GW signal emitted by an SMBBH with initial frequency f_GW,0=10^-8 Hz is ∼ 3 yr, and the relative frequency change during a ten-year observation is only |(f_GW,0-f_GW,10)/f_GW,0| ∼ 10^-6; f_GW thus remains essentially unchanged during the entire ten-year observation (i.e., about three GW periods).
We note here that some lensed SMBBHs may be highly eccentric in reality, for which the GW signals emitted at a number of different harmonics may have comparable power and be detectable; their GW signals are thus not monochromatic, unlike the circular-orbit case. The amplification factors for GW signals from different harmonics can be substantially different. Therefore, it may be possible to identify lensed events via the GW signals from SMBBHs with large eccentricities. We defer a detailed investigation of the lensing effect on GW signals from eccentric SMBBHs to future work.
§ LENSED SMBBHS IN THE PTA BAND
Each SMBBH in the sample obtained in Section <ref> has a chance of being strongly lensed by a foreground galaxy, with probability characterized by the optical depth τ. Assuming the SIS model, the optical depth for a source at redshift z_s is given by
τ_ SIS (z_ s) = ∫_0^z_ sdV/dz_ l dz_ l∫^∞_0d N/d σ_ vA_ SIS/4π d σ_ v,
= ∫^z_ s_0 dz_ l∫^∞_0 dσ_ v P(z_ l,σ_ v|z_ s).
In the above equation, dV/dz_l is the comoving volume element per unit redshift, equal to 4π c D_c,l^2 H_0^-1 [Ω_m(1+z_l)^3+Ω_Λ]^-1/2 for the adopted ΛCDM model, with D_c,l denoting the comoving distance to the lens. The cross-section for lensing, A_SIS, is equal to π y^2_m θ_E^2 for the SIS model, with θ_E = 4π(σ_v/c)^2(D_ls/D_s) denoting the angular Einstein radius of the SIS model and y_m the boundary of the cross-section. dN/dσ_v is the velocity dispersion distribution of the SIS lenses. The probability that a GW event at z_s is lensed is p(z_s) = 1- exp(-τ_SIS(z_s)) ≃ τ_SIS(z_s), since τ_SIS ≪ 1. The term P(z_l,σ_v|z_s) (= dV/dz_l dN/dσ_v A_SIS/4π ∝ dN/dσ_v θ_E^2 D_c,l^2 [Ω_M(1+z_l)^3+Ω_Λ]^-1/2) in the second line of Equation (<ref>) denotes the probability distribution for a GW event at redshift z_s to be lensed by a galaxy with velocity dispersion in the range σ_v → σ_v+dσ_v at redshift z_l → z_l+dz_l.
We only consider systems with y≤1 as "strongly" lensed in this paper. In principle, systems with y slightly larger than 1 also experience a lensing effect in the wave optics regime, but less significantly than those with y<1. For the lensing of the SMBBH host galaxies (in the geometrical optics regime), the system is in the strong lensing regime when y<1 and in the weak lensing regime when y>1. For simplicity, we do not consider cases with y>1 in the following analysis, and the cross-section thus reduces to A_SIS = πθ_E^2.
The galaxy-scale strong lensing considered here is mainly caused by intervening early-type galaxies <cit.>.[Massive lenses, such as galaxy groups or clusters, may lead to more significant lensing effects on both the GW signals and their host galaxies due to the larger amplification they cause <cit.>. Lensing events caused by such massive lenses, with relatively higher amplification, may be easier to observe electromagnetically, but these lenses are rare compared with elliptical galaxies; this deserves further investigation. For simplicity, we do not consider them in the present work.]
The number density distribution of these galaxies dN/dσ_ v can be described by a modified Schechter function <cit.> and the redshift evolution of this number density distribution may be simply described by the power-law evolution model, i.e.,
d N/ dσ_ v=ϕ_z(σ_ v/σ_z)^αexp[-(σ_ v/σ_z)^β] β/Γ(α / β)1/σ_ v,
and
ϕ_z = ϕ_*( 1+z_ l)^κ_n; σ_z = σ_*( 1+z_ l)^κ_ v.
Here (ϕ_*, σ_*, α, β)=(8.0 × 10^-3h^3 Mpc^-3, 161 km s^-1, 2.32, 2.67) according to the fitting results given by <cit.>, and the two redshift evolution parameters κ_n=-1.18 and κ_ v=0.18 according to <cit.>.
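The ingredients of the lensing probability can be sketched numerically as follows, with distances from astropy in the adopted cosmology; the integration grids and the σ_v range are our own illustrative choices and should be refined for production-quality numbers.

import numpy as np
from scipy.integrate import trapezoid
from scipy.special import gamma as gamma_fn
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)
PHI_STAR = 8.0e-3 * 0.7 ** 3                  # Mpc^-3
SIGMA_STAR, ALPHA, BETA = 161.0, 2.32, 2.67   # km/s and slopes
KAPPA_N, KAPPA_V = -1.18, 0.18                # redshift evolution

def dn_dsigma(sigma_v, z_l):
    """Modified Schechter velocity-dispersion function, as above."""
    phi_z = PHI_STAR * (1.0 + z_l) ** KAPPA_N
    sigma_z = SIGMA_STAR * (1.0 + z_l) ** KAPPA_V
    s = sigma_v / sigma_z
    return (phi_z * s ** ALPHA * np.exp(-s ** BETA)
            * BETA / gamma_fn(ALPHA / BETA) / sigma_v)

def theta_e(sigma_v, z_l, z_s):
    """SIS Einstein radius in radians (sigma_v in km/s)."""
    d_s = cosmo.angular_diameter_distance(z_s).value
    d_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s).value
    return 4.0 * np.pi * (sigma_v / 2.998e5) ** 2 * d_ls / d_s

def tau_sis(z_s, n_z=60, n_sig=120):
    """Optical depth by direct double integration over z_l and sigma_v."""
    z_grid = np.linspace(1e-3, z_s - 1e-3, n_z)
    sig = np.linspace(30.0, 500.0, n_sig)
    dz = z_grid[1] - z_grid[0]
    tau = 0.0
    for z_l in z_grid:
        # differential_comoving_volume is per steradian: multiply by 4*pi.
        dv_dz = 4.0 * np.pi * cosmo.differential_comoving_volume(z_l).value
        cross = np.pi * theta_e(sig, z_l, z_s) ** 2   # steradians
        tau += trapezoid(dn_dsigma(sig, z_l) * dv_dz * cross, sig) \
               * dz / (4.0 * np.pi)
    return tau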
We adopt a Monte-Carlo method to generate mock lensed SMBBHs according to the sample of cosmic SMBBHs in Section <ref> and the distribution P(z_l,σ_v|z_s). We then calculate the GW signals emitted by these SMBBHs and estimate their expected SNRs for the PTA configurations listed in Table <ref>. As the lensed SMBBHs radiate continuous GW signals that can be approximated as monochromatic, their SNRs can be estimated directly as ϱ_l = |F_wave(f_GW)| ϱ, where ϱ is the SNR of the original, unlensed GW signal. We set several threshold values, ϱ_0=3/5/10, to define a source (lensed or not) as "detectable". These SNR-limited samples contain some GW sources with ϱ<ϱ_0 and ϱ_l>ϱ_0, which are detectable because of the lensing magnification bias; this bias is small for the cases considered here due to the small amplification factor |F_wave(f_GW)|. In addition, the lowest observable GW frequency is set to 10^-9 Hz, corresponding to a waveform period of up to ∼ 30 years. We therefore consider two observation times T_obs, 10 years and 30 years. For the 10-year observation, we set a lower frequency limit for detectable GW signals of 1/T_obs to ensure that at least one full waveform cycle is observed. To obtain the total number of detectable lensed SMBBHs for each PTA configuration and scenario, we generate 100 independent realizations and adopt the average number of detectable lensed sources as our result.
Table <ref> lists the expected number of detectable SMBBHs, either the lensed cases or the cases without lensing, for future PTA experiments with different configurations.
For an observation period of 10 years, optimal future PTA configurations, such as CPTA-opt and SKA-opt, may detect a large number of SMBBHs (more than a few thousand or tens of thousands for ϱ_0=10 or 3; see Table <ref>), much more than the conservative CPTA and SKA configurations (several to several dozen for CPTA and hundreds to about a thousand for SKA for ϱ_0=10 or 3). If the observation time is extended to 30 years, ∼ 10^3-10^6 GW signals can be detected by any of the future CPTA and SKA-PTA configurations. Though the expected numbers are promising and encouraging, one should be cautious that it may be extremely challenging to extract the GW signals of these individual sources from the PTA data: GW signals from many sources may overlap in each frequency bin, especially at low frequencies, and resolving them from the GW background will be an important task in future PTA data analysis. Among the detectable systems, only a fraction of ∼ 0.01% are strongly lensed by foreground galaxies. For an observation period of 10 years, the conservative configurations of CPTA and SKA can hardly detect even a single lensed nHz GW signal, while about 1.01/0.09 and 7.44/0.95 can be detected by CPTA-opt and SKA-opt assuming ϱ_0=3/10. For an observation period of 30 years, the detectable lensed numbers increase accordingly: about 1.89/0.09 and 25.3/1.76 lensed signals can be detected by the conservative configurations of CPTA and SKA, and 106/10.9 and 289/97.7 by CPTA-opt and SKA-opt, assuming the same SNR thresholds ϱ_0=3/10. Thus, future PTA observations may be able to detect up to several tens to hundreds of lensed SMBBHs as individual GW sources.
Summing over all 100 realizations, we further obtain the parameter distributions (i.e., z, M_BH, q_BH, f_GW) of the detectable SMBBHs. Figure <ref> presents these distributions for the lensed SMBBHs (blue lines) and the SMBBHs without lensing (orange lines) detected by future CPTA-opt assuming an SNR threshold of 3/10 (solid/dotted lines). The upper and lower panels show the scenarios with observation times of 10 and 30 years, respectively. Similarly, Figure <ref> shows the parameter distributions of the SMBBHs with and without lensing detected by future SKA-opt. As seen from these two figures, the majority of the detectable lensed SMBBHs are distributed at redshifts z_s ∼ 0.5-2 and peak at z_s ∼ 1, while those without lensing are distributed toward lower redshifts and peak at z_s ∼ 0.2. The difference between these redshift distributions arises simply because sources at higher redshift have significantly larger optical depths. In general, the lensed SMBBHs have slightly larger total masses than the SMBBHs without lensing, both being in the range M_BH ∼ 10^8-10^10 M_⊙. For SNR thresholds ϱ_0=3/10, the total masses of the lensed SMBBHs peak at log M_BH ∼ 8.8/9.2. Similarly, the lensed SMBBHs and the SMBBHs without lensing have mass ratios peaking at q_BH ∼ 0.1/0.3 for ϱ_0=3/10. In general, the total masses and mass ratios of detectable SMBBHs with ϱ>3 are smaller than those with ϱ>10. As for the GW frequency distribution, most SMBBHs have detectable GW signals clustered around 10^-9 Hz (≈ 1/30 yr) for the 30-year observation and 3.17 × 10^-9 Hz (≈ 1/10 yr) for the 10-year observation. In both scenarios, only a small fraction of the detectable GW signals have f_GW > 10^-8 Hz (≲ 3% for the 10-year observation and ≲ 0.2% for the 30-year observation).
Figure <ref> shows the distribution of coalescence time t_coal and GW frequency f_GW for the lensed SMBBHs detected by CPTA-opt (upper panel) and SKA-opt (lower panel) with SNR ϱ ≥ ϱ_0 = 3 for observation times of 10 years (orange circles) and 30 years (green contours). The coalescence time is estimated as t_coal = 5/256 [π (1+z) f_GW]^-8/3 (G ℳ_c / c^3)^-5/3 <cit.>. As seen from this figure, more than 80 per cent of the lensed SMBBHs detected via 10-year or 30-year observations have coalescence times in the range ∼ 10^5-10^7 yr or ∼ 10^6-10^8 yr, respectively, far longer than the observation time. We note that there are some extreme cases of SMBBH inspirals with t_coal smaller than 10^5 yr or even 10^4 yr: for T_obs=10/30 yr, ∼ 17%/∼ 0.33% of the lensed GW sources detected by CPTA and ∼ 7.4%/∼ 0.29% of those detected by SKA-PTA have t_coal < 10^5 yr, while ∼ 3.0%/∼ 0.038% (CPTA) and ∼ 0.94%/∼ 0.035% (SKA-PTA) have t_coal < 10^4 yr. These extreme sources may also have relatively high SNRs (see the orange circles in Figure <ref>) and may be the first ones detected by future PTA experiments.
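For reference, the coalescence-time estimate quoted above is a one-line function (rest-frame chirp mass in kg, observed GW frequency in Hz):

import numpy as np
from scipy.constants import G, c

def t_coal(f_gw, mc_kg, z):
    """Time to coalescence in seconds, from the formula above."""
    return (5.0 / 256.0) * (np.pi * (1.0 + z) * f_gw) ** (-8.0 / 3.0) \
        * (G * mc_kg / c ** 3) ** (-5.0 / 3.0)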
§ DISCUSSIONS ON IDENTIFICATION OF LENSED SMBBHS DETECTED BY PTA
The detection of lensed SMBBHs by future PTA experiments is promising according to the above estimates. However, it is difficult to identify a detected SMBBH as a lensed system via the GW signal alone. Unlike lensed GW signals at much higher frequencies, such as those from compact binary mergers[Lensed high-frequency GW sources have multiple lensing images, and the lensed GW signals can be directly identified by matching the sky localizations and lens-system parameters derived from the different images, such as time delays, magnification ratios, and phase shifts <cit.>.], a lensed nHz GW source does not produce well-separated multiple images of the GW signal. Its lensed waveform differs from the original one only by a small amplitude magnification and a phase shift, and can be matched well by an unlensed nHz GW signal with a slightly larger chirp mass or smaller source distance than the true system. However, one may still be able to identify it if its associated EM counterparts (e.g., host galaxy and/or host AGN) can also be observed by future sky surveys, with which one may be able to determine the orbital and other physical parameters of the central SMBBH. The lensing signatures can then be found in the EM counterparts of the systems and used to identify the corresponding lensed nHz GW signals. This approach is admittedly difficult; nevertheless, there are already many efforts to find SMBBH candidates (for a summary of the current electromagnetic observational evidence for SMBBHs, see the reviews in <cit.>). Below we discuss the detectability of the host galaxies associated with the lensed nHz GW signals, as well as possible active SMBBHs, and the possibility of distinguishing the lensed GW signals.
§.§ Lensed host galaxies
For the host galaxies in general, their morphology must be considered when investigating the gravitational lensing effect, in contrast to the cases of lensed QSOs and supernovae, which are point sources <cit.>. One therefore has to consider a more complicated criterion for the lensed hosts that combines brightness and morphology. Based on the SIS lens model and assuming y<1, the host galaxy associated with a lensed nHz GW signal will also be strongly lensed and will produce extended lensed images with distorted shapes (arcs, or rings for highly aligned lens systems). To identify a lensed galaxy, the distorted image should be detectable and resolvable, and for an extended source the distorted morphology of the lensed image should also be identifiable. We therefore adopt the following criteria for a detectable lensed galaxy <cit.>: (1) the lensed host galaxy can be observed by a given telescope with limiting magnitude m_lim for extended sources; (2) the image and counter-image must be resolvable, i.e., R_e^2+(s/2)^2 ≲ θ_E^2, where R_e is the effective radius of the galaxy and s is the angular resolution; (3) the tangential shearing of the arcs should be detectable, i.e., μ_tot R_e > s and μ_tot > 3, where μ_tot is the total magnification of the source.
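These three criteria translate directly into a boolean check, sketched below; the magnified-flux reading of criterion (1) and the SIS relation μ_tot = 2/y for y < 1 are our interpretation of the setup, with all angles in arcsec.

import numpy as np

def lensed_host_detectable(m_gal, y, r_e, theta_e_arcsec,
                           m_lim=26.8, s=0.11):
    """m_gal: unlensed AB magnitude; y < 1: source offset in Einstein units."""
    mu_tot = 2.0 / y                                   # mu_+ + |mu_-| for SIS
    bright = m_gal - 2.5 * np.log10(mu_tot) <= m_lim           # criterion (1)
    resolvable = r_e ** 2 + (s / 2.0) ** 2 <= theta_e_arcsec ** 2  # crit. (2)
    sheared = (mu_tot * r_e > s) and (mu_tot > 3.0)             # criterion (3)
    return bool(bright and resolvable and sheared)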
For future sky surveys, one of the core community surveys of the Roman Space Telescope (RST) is the High Latitude Wide Area Survey[<https://roman.gsfc.nasa.gov/high_latitude_wide_area_survey.html>], which covers four near-infrared bands (F106, F129, F158, F184). We first assume that the lensed PTA sources can be surveyed by RST without sky-coverage limitation, and discuss this limitation later. We choose the F158 filter as representative of RST and mainly discuss the detectability of host galaxies for RST. RST/F158 has a limiting magnitude for extended galaxy sources of m_lim ∼ 26.8 and an angular resolution of s ∼ 0.11 arcsec. Then, for the lensed GW signals in each realization obtained in the last section, the corresponding effective radius R_e and F158 magnitude of each SMBBH host galaxy are obtained by sampling from JAGUAR <cit.>, the extragalactic mock galaxy catalog designed for the James Webb Space Telescope (JWST), in bins of redshift and galaxy mass. The JAGUAR mock galaxy catalog covers the redshift range 0.2 to 15, sufficiently deep for our SMBBH samples (0.2<z<3). For each mock galaxy in the catalog, the fluxes in all JWST band filters and the stellar mass of the galaxy are provided. We obtain the flux in the RST/F158 filter by interpolating the fluxes in the JWST bands (i.e., F150W and F162M) and then convert it to an AB magnitude in the RST/F158 filter. The total mass of each SMBBH is correlated with the mass of its host galaxy M_gal through the MBH mass-bulge mass relation, i.e., p_BH(M_BH, q_BH| M_*, q_*, z') introduced in Section <ref>.
By applying the above criteria, we obtain, for each realization, the subsample of detectable lensed GW signals with detectable associated lensed host galaxies. As before, the expected number of detectable lensed host galaxies is obtained by averaging over all realizations. Results are presented in the eighth and last columns of Table <ref> for PTA observation periods of 10 years and 30 years, respectively. We find that ∼ 14% of the lensed nHz GW signals have detectable host galaxies passing our lensing criteria. For a PTA observation period of 10 years, it is hard to detect even one system with both a detectable GW signal and a detectable lensed host galaxy for the PTA configurations investigated above, though SKA-opt may detect ∼ 1 such case with SNR ≥ 3. For a PTA observation period of 30 years, ∼ 0.28 or 0.02 and ∼ 3.67 or 0.20 host galaxies of the lensed GW signals can be detected by RST for the conservative CPTA and SKA assuming an SNR threshold of ρ_0=3 or 10, while ∼ 14.9 or 1.65 and ∼ 37.4 or 13.5 such cases may be detected by RST for CPTA-opt and SKA-opt. Thus, for an as-yet-unrecognized lensed nHz GW signal, its lensed host galaxy may be detected and further help identify the corresponding GW signal. The probability of detecting the lensed host galaxies depends mainly on how the lensing morphology criterion is set, rather than on the brightness criterion. When the morphology criterion (3) is tightened to μ_ tot>4 or 5, i.e., requiring much sharper arcs, this probability decreases to ∼ 9% or 5%. As for brightness, nearly all the host galaxies are bright enough to be observed by RST.
Figure <ref> shows the distributions of the galaxy mass and galaxy magnitude in the F158 filter of RST for the lensed SMBBHs detected by CPTA-opt (upper panel) and SKA-opt (lower panel) with ρ≳ 3. The distributions obtained for the cases with a PTA observation period of 10 years (orange circles) and 30 years (green contour) are similar, in which galaxy masses M_ gal are in the range of ∼ 10^10 - 10^12M_⊙ and galaxy magnitudes m_ gal in the F158 filter of RST are in the range of ∼ 17 - 23, substantially brighter than the limiting magnitude of RST/F158 m_ lim∼ 26.8.
Note that the sky coverage of RST is not considered in our estimation. The High Latitude Wide Area Survey of RST will cover ∼ 2,000 deg^2; since the full sky spans ∼ 41,253 deg^2, the realistic probability of detecting a lensed host galaxy with RST may decrease by a factor of ∼ 20. However, more than one sky survey will be carried out in the future, and they may work complementarily to achieve much larger sky coverage. For instance, Euclid <cit.> has an H-band limiting magnitude of m_ lim,H=24.5, an angular resolution of s∼ 0.3 arcsec, and a sky coverage of ∼ 15,000 deg^2; the China Space Station Telescope <cit.> has a z-band limiting magnitude of m_ lim,z=24.1, an angular resolution of s∼ 0.18 arcsec, and a sky coverage of ∼ 17,500 deg^2. Furthermore, RST and other sky survey telescopes may also be used to specifically search for the lensed host of a PTA-detected SMBBH in its localized sky area. With all these telescopes in operation, lensed host galaxies across the whole sky may have the chance to be observed.
In addition, the real morphology identification of lensed extended galaxies is far more complicated than the simple criterion adopted in this paper and may require visual inspection. The detection strategy and science goals of each telescope may also affect its performance in this respect. A more precise prediction for any one of these telescopes requires more dedicated research.
To close this discussion of the uncertainties in finding the lensed hosts of PTA SMBBHs, we take the value of ∼ 14% obtained above as an upper limit on the fraction of lensed hosts that can be found by galaxy sky surveys.
§.§ Identification of lensed nHz GW signals
We note that associating a detected nHz GW signal with its lensed host galaxy is a difficult task even if both are detected. To determine whether the nHz GW signal is lensed, one may have to match not only the localization area of the GW signal with the lensed host galaxy but also the properties of the SMBBH inferred from the GW signal with those inferred from the EM observations. For non-active SMBBHs, only their lensed host galaxies, not the binaries themselves, can be detected by deep, high-resolution galaxy surveys. For active SMBBHs, in contrast, the multiple images of the active galactic nucleus (AGN) may be detected together with the lensed host galaxy. Below, we discuss possible ways to identify lensed nHz GW signals from either non-active or active SMBBHs.
§.§.§ Lensed non-active SMBBHs
The SMBBHs generated from mergers of gas-poor galaxies, which may be the dominant loud nHz GW sources, are normally inactive. The localization area of the nHz GW signal from an SMBBH may reach ∼ 1 deg^2 <cit.>, and the number of lensed galaxies in such an area can be up to several tens <cit.>. Even if the GW signal is known to be lensed, one still has to determine which lensed galaxy in the localization area is the corresponding host.
Furthermore, one cannot tell from the GW signal itself whether it is lensed. In principle, one may first find the central non-active MBHs or SMBBHs of the lensed galaxies in the localization area via kinematics and dynamics with future sufficiently high-resolution astrometric and spectroscopic observations (e.g., by the Thirty Meter Telescope, Giant Magellan Telescope, and European Extremely Large Telescope; ) and measure the properties of these MBHs or SMBBHs, though this is almost impossible with current telescopes. If such measurements can be made, one could then match the SMBBH properties with those determined from the GW signals and thereby identify the lensed SMBBH and its associated lensed host galaxy.
§.§.§ Lensed active SMBBHs
Gas-rich mergers of galaxies may lead to the formation of active SMBBHs appearing as AGNs, which may contribute a significant fraction, ∼ 25% <cit.>, of the nHz GW sources.
Without the morphology constraint, most of these lensed AGNs should in principle be detectable, as they are among the brightest sources in the universe and most of their multiple images are also resolvable[The population of active SMBBHs has a redshift distribution peaked around z∼1 <cit.>. For similar galaxy-scale lensed point sources, BNSs, which also have a redshift distribution peaked around z∼1, <cit.> predict that ≳ 90 per cent of multiple images are resolvable by RST.] by high-resolution space telescopes, such as HST and RST. We do not go into the details of the detectability of active SMBBHs and their lensed host galaxies in this paper, as our SMBBH sample is generated from gas-poor mergers. Nevertheless, we note that once active SMBBHs are found by EM observations, one may estimate their properties from their various EM signatures <cit.>. Then, by matching the positions and physical properties of the lensed active SMBBHs with the nHz GW individual sources, one may determine whether an nHz GW signal is associated with a lensed active SMBBH and thus identify the lensed nature of the GW signal.
§ CONCLUSIONS
In this paper, we investigate the diffractive lensing effect on the nHz GWs emitted from SMBBHs lensed by intervening galaxies and estimate the number of such systems detectable by future PTAs, including CPTA and SKA-PTA. We find that the diffractive lensing caused by intervening galaxies leads to a small amplification of the GW amplitude (∼ 1.04-1.14 for 16-84% of the mock sample and all ≲ 1.5) and a small phase shift. The lensed GW signals are indistinguishable from those of similar unlensed SMBBH systems with slightly larger chirp masses or smaller distances. We estimate that future PTA experiments, such as CPTA and SKA-PTA, may detect about 10^2 - 10^4 and 10^4 - 10^6 individual SMBBHs within an observation period of 10-30 years, among which ∼ 0.01% are strongly lensed by foreground galaxies, i.e., up to ∼ 106 for CPTA and up to ∼ 289 for SKA-PTA. The lensed nHz SMBBHs may be detectable by future PTAs but are difficult to identify as lensed through GW observations alone. We further estimate that the lensed host galaxies of up to ∼ 14% of the lensed nHz SMBBHs may be detected by sky survey telescopes such as RST, and argue that lensed active galactic nuclei hosting SMBBHs with nHz GW signals, which contribute a significant fraction of the nHz SMBBHs (∼ 25%; ), can be detected by electromagnetic telescopes. With the information from electromagnetic observations and measurements of the physical parameters of these lensed SMBBH systems, it may be possible to associate the nHz GW signals with a lensed non-active or active galaxy hosting an SMBBH and identify the diffractive lensing nature of the GW signals.
§ ACKNOWLEDGEMENTS
We thank Prof. Qingjuan Yu for helpful discussions and contributions to this paper.
This work is partly supported by the National Key Program for Science and Technology Research and Development (Grant nos. 2022YFC2205201, 2020YFC2201400), the National Natural Science Foundation of China (Grant nos. 12273050, 11690024, 11991052), and the Strategic Priority Program of the Chinese Academy of Sciences (Grant no. XDB 23040100).
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
|
http://arxiv.org/abs/2307.00862v1
|
20230703090312
|
UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding
|
[
"Rui Sun",
"Zhecan Wang",
"Haoxuan You",
"Noel Codella",
"Kai-Wei Chang",
"Shih-Fu Chang"
] |
cs.CV
|
[
"cs.CV",
"cs.CL"
] |
Vision-language tasks, such as VQA, SNLI-VE, and VCR are challenging because they require the model’s reasoning ability to understand the semantics of the visual world and natural language. Supervised methods for vision-language tasks have been well studied. However, solving these tasks in a zero-shot setting is less explored. Since Contrastive Language-Image Pre-training (CLIP) has shown remarkable zero-shot performance on image-text matching, previous works utilized its strong zero-shot ability by converting vision-language tasks into an image-text matching problem, mainly considering global-level matching (e.g., the whole image or sentence). However, we find that visual and textual fine-grained information, e.g., keywords in the sentence and objects in the image, can be fairly informative for semantics understanding. Inspired by this, we propose a unified framework to take advantage of the fine-grained information for zero-shot vision-language learning, covering multiple tasks such as VQA, SNLI-VE, and VCR. Our experiments show that our framework outperforms former zero-shot methods on VQA and achieves substantial improvement on SNLI-VE and VCR. Furthermore, our ablation studies confirm the effectiveness and generalizability of our proposed method. Code is available at https://github.com/ThreeSR/UniFine.
§ INTRODUCTION
VQA <cit.>, SNLI-VE <cit.>, and VCR <cit.> are vision-language tasks that use text and a corresponding image to test a system's cross-modal reasoning ability. These tasks are challenging, as they require models to obtain a joint understanding of the visual and textual modalities. They are also meaningful, since this capability plays an essential role in daily human-robot interaction, e.g., asking a robot how many people are in an image. Despite the difficulty, a line of work <cit.> has been dedicated to resolving these vision-language tasks in a supervised setting and has made impressive progress. However, these methods all share a significant drawback: they are costly, as they require expert knowledge to collect well-annotated image-text data. Zero-shot methods for vision-language tasks, in contrast, can bypass this problem of costly annotation. Unfortunately, relatively few methods and works have explored this direction.
Recently, CLIP <cit.> has been proposed to acquire visual concepts using natural language supervision. It jointly trains an image encoder and a text encoder on 400M noisy image-text pairs collected from the Internet by aligning images and texts through a contrastive loss.
Previous works <cit.> demonstrated that CLIP can achieve strong zero-shot performance on vision-language tasks by converting the original tasks into an image-text matching format. However, they mainly consider matching at an instance or global level, i.e., the whole image or sentence, ignoring the significance of fine-grained elements, e.g., keywords in the sentence and objects in the image. Meanwhile, we find that these fine-grained elements are important for specific downstream tasks, especially in zero-shot learning.
For instance, in Fig. <ref>, CLIP makes three incorrect predictions in three zero-shot vision-language tasks. For VQA, the model infers the wrong object "pancake" for the verb "eating", as it does not capture the details in the image (pizza on the table) and the captions (pizza is mentioned). We posit that if we can find a proper way to direct the model's focus to these detailed pieces of textual and visual information, the model will likely have a better chance of selecting the correct answer label. This conjecture also appears to generalize across multiple zero-shot downstream tasks, as shown by the three examples from different vision-language tasks, i.e., VCR, VQA, and SNLI-VE, in Fig. <ref>. Yet we also recognize that challenges exist, as these tasks may differ in many respects, including the distribution of image categories or scenes, their semantic focus, the format of the text premises (declarative statements vs. questions), and the task format (image-text matching vs. classification).
To overcome these challenges, we first identify two common fundamental steps required to utilize fine-grained information across different vision-language tasks: 1) extraction of the fine-grained information from context information, e.g., the extraction of the word "pizza" from the caption in VQA as in Fig. <ref>; 2) semantic matching between this extracted fine-grained information and the answer choices or hypothesis. Based on these, we propose a unified approach leveraging the two common steps so that it can help the model generalize over different vision-language tasks.
The extractor has two branches: 1) a vision branch and 2) a textual branch. In the vision branch, we employ Faster-RCNN <cit.> to extract object-level information. We select relevant object regions guided by the question in VQA and VCR or the hypothesis in SNLI-VE. After that, we concatenate the whole image and its selected image regions and input them into the image encoder of CLIP. For textual information extraction, we exploit rich information from the image caption generated by the recently developed captioning model OFA <cit.> and from the question in VQA and VCR or the hypothesis in SNLI-VE to boost zero-shot performance.
Note that although we employ the image caption and the question at the sentence level rather than the word level, we compute the cosine similarity between them and the answer texts; hence, if keywords in an answer text match the caption or the question, that answer obtains a high score in zero-shot prediction.
Therefore, it is still a process of fine-grained information extraction. By using fine-grained information, our model outperforms previous methods on zero-shot VQA and we are the first to benchmark zero-shot VCR and SNLI-VE. The experiments confirm the effectiveness of our proposed method.
Our contributions can be summarized as follows:
* To the best of our knowledge, we are the first to propose a unified approach based on fine-grained information extraction for zero-shot learning of different vision-language tasks.
* Our approach outperforms previous CLIP-based methods for zero-shot learning of VQA and we are the first to study CLIP's zero-shot ability for SNLI-VE and VCR.
* The experiments and ablation studies confirm the generalizability of our proposed method and the significance of visual and textual fine-grained information for zero-shot learning of vision-language tasks.
§ RELATED WORK
Vision-language understanding tasks. Unlike unimodal tasks, vision-language understanding tasks need a joint understanding of vision and language, which requires deeper reasoning ability from the system. In VQA <cit.>, given a question, the model needs to understand the relevant details of the corresponding image to answer correctly. The real images in VQA come from MS COCO <cit.>, and each of them is paired with a caption in COCO Captions <cit.>. VCR <cit.> has a different semantic focus from VQA, concentrating more on commonsense questions. The model first needs to answer recognitive questions (as in VQA) and is then also required to correctly answer cognitive questions, i.e., the rationales for the choice made in the first question. The images in VCR are collected from movie clips. SNLI-VE originated from the Stanford Natural Language Inference (SNLI) corpus <cit.>, a text entailment (TE) task based on the Flickr30k <cit.> image captions. It extends TE into the visual domain and has a task format different from VQA and VCR, because the question-answering format is replaced with a hypothesis. Given the image and a hypothesis, the model needs to predict whether the image semantically entails the text. The images in SNLI-VE are from Flickr30k with annotated captions.
Vision-language pre-trained models. Early vision-language pre-trained models <cit.> utilize a cross-modal transformer <cit.> pre-trained on well-annotated image-text pairs. Different from these models, contrastive learning frameworks <cit.> are trained on noisy image-text pairs crawled from the Internet through a contrastive loss, which employs the dot product between the visual and textual modalities. Owing to the large-scale training data, these models acquire rich prior knowledge and show strong zero-shot ability on vision benchmarks like ImageNet <cit.>.
Vision-language zero-shot learning. A line of work utilizes CLIP for zero-shot learning on vision-language tasks. ReCLIP <cit.> uses CLIP for zero-shot referring expression comprehension (ReC), outperforming prior zero-shot ReC approaches. CLIP-ViL <cit.> applies CLIP to zero-shot VQA by simply concatenating each question and answer pair into the prompt "question: [question text] answer: [answer text]" and feeding the text and image into the text and image encoders of CLIP, which produces near-chance-level performance. The work most relevant to ours is TAPC <cit.>, which manually designs the prompt and leverages T5 <cit.>, a large pre-trained text-to-text Transformer, to convert the question-answering problem into an image-text matching task. It then exploits CLIP's remarkable zero-shot image-text matching ability on VQA, surpassing CLIP-ViL by a large margin. However, these works handle tasks at an instance level rather than fully utilizing visual and textual fine-grained information (e.g., keywords in the sentence and objects in the image) as we do. Moreover, we can tackle a diverse set of tasks, whereas they each concentrate on one specific task.
§ METHOD
In this section, we introduce our method for visual and textual fine-grained information extraction to improve zero-shot learning of vision-language tasks including VQA, VCR, and SNLI-VE.
§.§ Baseline Method
In the baseline method shown in Fig. <ref>, we use CLIP to do zero-shot learning of vision-language tasks. CLIP consists of a visual encoder 𝕍 (e.g., ResNet <cit.> and ViT <cit.>) and a text encoder 𝕋 (e.g., transformer <cit.>), where the image and text are processed independently.
The encoders are followed by a dot product (i.e., an alignment score) between the visual and textual features, i.e., 𝕍(image) ·𝕋(text).
We input the images from VQA, VCR, and SNLI-VE into the CLIP visual encoder. Owing to the difference in task format, the answer choices from VQA and VCR and the hypothesis from SNLI-VE are input into the CLIP text encoder. After encoding, we obtain the alignment score between the image and the text. In VQA and VCR, we select the answer with the highest score. In SNLI-VE, a clustering process follows the dot product, as demonstrated in Algo. <ref>, and we select the class with the lowest distance to its centroid.
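For concreteness, a minimal sketch of the baseline scoring with the public OpenAI CLIP package follows; L2-normalizing the features before the dot product is a standard choice we assume here rather than a detail stated in the paper:

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

def alignment_scores(image_path, texts):
    """Dot product between (normalized) CLIP image and text features."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    tokens = clip.tokenize(texts).to(device)
    with torch.no_grad():
        v = model.encode_image(image)
        t = model.encode_text(tokens)
    v = v / v.norm(dim=-1, keepdim=True)
    t = t / t.norm(dim=-1, keepdim=True)
    return (t @ v.T).squeeze(-1)  # one score per candidate text

# VQA/VCR baseline: pick the answer with the highest alignment score
scores = alignment_scores("example.jpg", ["a photo of pizza",
                                          "a photo of pancake"])
best_answer = int(scores.argmax())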
§.§ Visual fine-grained information extraction
In visual fine-grained information extraction, we aim to find the image regions related to the question in VQA and VCR or the hypothesis in SNLI-VE, since these regions provide local visual clues that complement the global image. The objects and attributes are detected by Faster-RCNN <cit.>, pre-trained on Visual Genome <cit.> as provided by <cit.>. We select the top N relevant image regions (N is a hyperparameter, analyzed in Sec. <ref>) by the image region score, i.e., the cosine similarity between the textual features of the question or hypothesis and the object class&attribute (e.g., yellow flowers), encoded by RoBERTa <cit.>:
TopN_o_i ∈ O{cos(ℝ(Query), ℝ({Attr(o_i), Class(o_i)}))}
where ℝ is RoBERTa, cos(·,·) is cosine similarity, O is the set of objects detected by Faster-RCNN, Attr(·) and Class(·) are the attribute and class of an object, respectively, and Query is the question in VQA and VCR or the hypothesis in SNLI-VE. After selection, the global image and the selected image regions are fed into the CLIP visual encoder to obtain the encoded feature of each.
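A minimal sketch of this selection step is shown below, using the sentence-transformers library as a stand-in for the RoBERTa encoder; the model name and the detection record format are illustrative assumptions, not specifications from the paper:

from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-roberta-large-v1")  # assumed RoBERTa variant

def top_n_regions(query, detections, n=5):
    """Rank detected regions by the cosine similarity between the query
    (question/hypothesis) and each region's "attribute class" string.

    detections: list of dicts, e.g. {"attr": "yellow", "cls": "flowers",
                "box": (x0, y0, x1, y1)}
    """
    labels = [f'{d["attr"]} {d["cls"]}' for d in detections]
    q_emb = encoder.encode(query, convert_to_tensor=True)
    l_emb = encoder.encode(labels, convert_to_tensor=True)
    sims = util.cos_sim(q_emb, l_emb).squeeze(0)  # (num_regions,)
    order = sims.argsort(descending=True)[:n]
    return [detections[int(i)] for i in order]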
§.§ Textual fine-grained information extraction
Next, we present how textual fine-grained information is extracted and incorporated into our framework. Specifically, two types of information are studied: the image caption and the question. The question, used as a prior, can narrow down the range of answer candidates and filter out irrelevant answers.
The image caption transforms the information inside the image into text so that it can be compared with the answers in the same domain. Image captions are generated from the image, but their format is language; thus, we arguably regard image captions as textual fine-grained information. Overcoming the challenge of the differing formats of vision-language tasks, we introduce a relatively unified way to extract and utilize textual fine-grained information in the zero-shot scenario.
Visual Question Answering: VQA
Following previous work, we experiment on the validation set of VQAv2 <cit.>. Typically, VQA is regarded as a classification problem over the 3,129 most frequent answers. VQAv2 has 65 question types (e.g., the "does this" type) and 3 answer types: Yes/No, Number, and Other.
Although each image in VQA is paired with a ground truth caption from MS COCO, we still choose to use OFA, a SOTA image captioning model, to generate a caption for each image, because not every dataset is annotated with ground truth captions and we would like to keep our method generalizable.
As <cit.> shows, directly inputting the concatenation of the question and answer into CLIP leads to near-chance-level performance. In addition, there are more than 3,000 answer candidates in VQAv2, which largely slows down the inference speed of zero-shot VQA if all answers are input into CLIP. To bypass this, we utilize an answer-filtering method, inspired by <cit.>, to downsize the number of answer choices.
Following <cit.>, we first convert the question-answering format into declarative templates with the <extra_id_0> token via T5 low-shot demonstration. The templates with the <extra_id_0> token are then input into T5, and we obtain the plausibility of each answer candidate from the T5 output probability. Next, we select the top K answers. More details can be found in Sec. <ref>.
In this way, we can downsize the number of answers in VQA. There are three different answer types in VQA, which are processed differently in the answer filtering process. For Yes/No type, we treat it as a binary classification problem. For Number type, since its answers are highly related to numerical answers in the 3,129 most frequent answers set, we heuristically filter 285 numerical answers from 3,129 answers before answer filtering. As for Other type, we preserve the original answer candidates without filtering.
After obtaining the top K filtered answers, on the one hand, they are sent to the CLIP text encoder and the dot product with the image features is calculated, denoted as the CLIP alignment score S_CLIP. On the other hand, we calculate the question prior score S_Q (i.e., the cosine similarity between the RoBERTa-encoded textual features of the question and the answers) and the caption prior score S_C (i.e., the cosine similarity between the RoBERTa-encoded textual features of the OFA-generated image caption and the answers). The whole process can be summarized in the following equations:
S_CLIP = 𝕋(A) ·𝕍(I)
S_Q = cos(ℝ(Q), ℝ(A))
S_C = cos(ℝ(𝕆(I)), ℝ(A))
where 𝕍 and 𝕋 are the image and text encoders of CLIP, ℝ is RoBERTa, 𝕆 is OFA, and cos(·,·) denotes cosine similarity. I denotes the images, including one global image I_g and N selected image regions {I_l ∈ Reg}. Q and A correspond to the question and its top K filtered answers. 𝕆(I) is the image caption generated by OFA.
In the end, all scores are ensembled. We select the answer with the highest score as zero-shot prediction result:
max_i { S_CLIP(A_i, I_g) + k_1 ·max_I_l ∈ Reg{S_CLIP(A_i, I_l)} + k_2 · S_Q(Q, A_i) + k_3 · S_C(I_g, A_i)},
where k_1, k_2, and k_3 are hyperparameters.
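Written out, the ensemble is a small amount of array arithmetic; in the sketch below, the array shapes and the function name are our own, and k1-k3 must be supplied as tuned hyperparameters:

import numpy as np

def vqa_predict(s_clip_global, s_clip_regions, s_q, s_c, k1, k2, k3):
    """Ensemble the scores for the K filtered answers and pick the argmax.

    s_clip_global : (K,)   CLIP score of each answer vs. the global image
    s_clip_regions: (K, N) CLIP score of each answer vs. each of N regions
    s_q, s_c      : (K,)   question prior and caption prior scores
    """
    total = (s_clip_global
             + k1 * s_clip_regions.max(axis=1)  # best-matching region
             + k2 * s_q
             + k3 * s_c)
    return int(np.argmax(total))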
Visual Commonsense Reasoning: VCR
VCR is similar to VQA, as both are in question-answering format. However, there are only four answer choices per question, so answer filtering is unnecessary. Q2A and QA2R are the two subtasks of VCR. Q2A is similar to VQA in that there is one question per sample, so the Q2A pipeline is the same as for VQA except that answer filtering is omitted. QA2R aims to find the rationale for the correct answer chosen in the Q2A question. Since there is no question text in QA2R and the correct answer is provided, we directly use the correct answer as the question text. The other procedures in QA2R are the same as in Q2A.
Visual Entailment: SNLI-VE
The task format of SNLI-VE differs from VQA and VCR. For each sample, only an image premise I and a hypothesis H are given, without answer candidates. It is a three-way classification problem, aiming to predict the relation between the image premise and the hypothesis text as one of three classes: Entailment, Contradiction, or Neutral.
Since there are no answer candidates, we cannot directly compare the CLIP alignment scores of answers to select the best one, as in VQA and VCR. Instead, we compute the CLIP alignment score between the image and hypothesis of each sample in the whole evaluation set and cluster those scores into three clusters with three centroids. We rank the centroids from high to low and sequentially treat them as the entailment centroid C_CLIP^e, the neutral centroid C_CLIP^n, and the contradiction centroid C_CLIP^c. The details of the clustering are given in Algo. <ref>. Note that, for the cluster centroids to be meaningful, an assumption is required: the three relationships are uniformly distributed over the evaluation dataset. This assumption holds for SNLI-VE but is not guaranteed in other, less calibrated datasets. We can then measure how close the S_CLIP of each sample is to each centroid:
Dis(C_CLIP^i, S_CLIP) = | C_CLIP^i - ( S_CLIP(H, I_g) + k_1 ·max_I_l ∈ Reg{S_CLIP(H, I_l)} ) |
where the centroid C_CLIP^i ∈{C_CLIP^e, C_CLIP^n, C_CLIP^c}.
Besides the CLIP alignment score comparison, we can obtain the caption prior score S_C(I, H) using the image caption generated by OFA. As above, we use the clustering method in Algo. <ref>, only changing the CLIP score to the caption score, to get three centroids {C_C^e, C_C^n, C_C^c}, and we measure how close the S_C of each sample is to each centroid:
Dis(C_C^i, S_C) = | C_C^i - S_C(I, H) |
Note that, due to the lack of answer candidates, we cannot compute the question prior score S_Q. In the end, we ensemble the two distances and predict the relationship by picking the closest centroid:
min_i {Dis(C_CLIP^i, S_CLIP) + k_2 · Dis(C_C^i, S_C) }
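A compact sketch of this procedure follows. The paper's Algo. <ref> is not reproduced here; K-Means serves as a stand-in for the clustering step, which the appendix notes is a viable alternative when the three classes are roughly evenly distributed:

import numpy as np
from sklearn.cluster import KMeans

def snli_ve_predict(s_clip, s_cap, k2):
    """Cluster per-sample scores into three centroids, rank them high-to-low
    as (entailment, neutral, contradiction), and label each sample by the
    nearest centroid under the ensembled distance.

    s_clip, s_cap: (num_samples,) ensembled CLIP scores / caption scores
    """
    def centroids(scores):
        km = KMeans(n_clusters=3, n_init=10).fit(scores.reshape(-1, 1))
        return np.sort(km.cluster_centers_.ravel())[::-1]  # high -> low

    c_clip, c_cap = centroids(s_clip), centroids(s_cap)
    dist = (np.abs(c_clip[None, :] - s_clip[:, None])
            + k2 * np.abs(c_cap[None, :] - s_cap[:, None]))
    return dist.argmin(axis=1)  # 0=entailment, 1=neutral, 2=contradiction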
§ EXPERIMENTS
In this section, we first present the benchmark comparison to show the strong performance of our method. We then conduct extensive ablation studies to confirm the effectiveness of fine-grained information.
§.§ Experimental setup
Datasets. We analyze three vision-language tasks in our paper, using the validation sets of VQAv2 <cit.>, VCR <cit.>, and SNLI-VE <cit.>. More details about the validation sets can be found in Sec. <ref>. For VQAv2, we use the VQA score to evaluate the model; for VCR and SNLI-VE, we use validation-set accuracy.
Models. The core component of our method is CLIP [https://github.com/openai/CLIP]. There are different variants of CLIP, since different models can act as the image or text encoder. Following previous work, we use CLIP Res50x16 and CLIP ViT-B/16 on VQA for comparison. Since we are the first to evaluate CLIP's zero-shot ability on SNLI-VE and VCR, there is no prior work to compare against, so we simply use CLIP ViT-B/16 on VCR and SNLI-VE. We expect the model scale to have a large impact on the results, so we also apply CLIP ViT-L/14@336px to VQA, VCR, and SNLI-VE to see how much improvement a larger model brings. In addition to CLIP, we use T5-large [https://huggingface.co/models] for task format conversion, OFA-base [https://github.com/OFA-Sys/OFA] for image captioning, RoBERTa-large [https://github.com/UKPLab/sentence-transformers] for the cosine similarity computations, and Faster-RCNN [https://github.com/peteanderson80/bottom-up-attention] for object detection.
§.§ Benchmark comparison
VQA. Results of zero-shot VQA are reported in Tab. <ref>. For a fair comparison, we compare our method with two CLIP-based methods, choosing TAP-C <cit.> as our baseline. Since the authors did not release their code, we reimplemented TAP-C from scratch and obtained a lower score than reported; differences in the specific prompt design and the answer filtering process may account for the gap. Although our reimplemented results are lower than the reported ones, we surpass TAP-C after extracting and exploiting visual and textual fine-grained information. Compared to our reimplementation, our method improves the performance on all answer types, and a larger CLIP model achieves better performance still. Our best result surpasses the reimplemented and reported TAP-C results by 2.83% and 1.63%, respectively. Our method thus outperforms previous CLIP-based methods for zero-shot VQA.
SNLI-VE. We report the results of SNLI-VE in Tab. <ref>. The baseline method reaches an overall accuracy of 47.37%, which is 14.04% above random performance; this shows that our baseline is strong and confirms CLIP's zero-shot ability on SNLI-VE. By extracting fine-grained information and upscaling the model, we increase accuracy by up to 2.79%. Among the answer types, Neutral improves the most (+10.91%), while Entailment decreases by 3.24%. Note that the Neutral type is more complex than Entailment and Contradiction, as it is less clear-cut and demands deeper reasoning from the model; the improvement on Neutral therefore underlines the significance of fine-grained information. The decrease on Entailment is likely due to a deficiency of our clustering method, which should be improved in the future. Since there was no prior CLIP-based zero-shot method for SNLI-VE, we compare against the supervised method EVE-Image from the SNLI-VE paper <cit.>. Although the overall performance is not yet comparable to the supervised method, our result on the Contradiction type approaches EVE-Image.
VCR. The results of VCR are reported in Tab. <ref>. We carry out experiments on the two VCR subtasks, Q2A and QA2R. Compared to random performance, our baseline method improves Q2A and QA2R by 28.24% and 21.51%, respectively, confirming CLIP's strong zero-shot ability on VCR. Extracting fine-grained information and using a larger model improves the baseline by up to 5.24% and 5.37%, which demonstrates the effectiveness of our proposed method. As there is no prior CLIP-based method for zero-shot VCR, we compare against the supervised model R2C proposed in the VCR paper <cit.>. Although we cannot surpass the supervised model, our Q2A result approaches R2C and our results are competitive.
§.§ Ablation studies
In this section, we analyze each important component of our proposed method. Tab. <ref> shows that every kind of fine-grained (FG) information helps zero-shot learning, and that combining all of them brings further improvement.
Textual FG Information - Question: Adding the question prior information helps VCR the most. We attribute this to two factors: first, the questions and answers in VCR are longer and more complex than in the other two datasets, so they provide richer information for zero-shot inference; second, the correct answer is likely to have more overlap with the question. We also observe that the question does not help much on the VQA Yes/No answer type, since it is a binary classification problem and many questions are of the "Is this A or B?" kind, which provides little additional information for zero-shot prediction.
Visual FG Information - Image Region: Image regions largely improve the performance on the VQA Other answer type, because questions of this type tend to query details of the image, and image regions supply exactly such finer details to zero-shot inference. At the same time, image regions do not help SNLI-VE much; we believe SNLI-VE concentrates more on the global image, so image regions contribute little.
Textual FG Information - Image Caption: Tab. <ref> shows that the image caption better assists the Number and Other answer types in VQA. For the Number type, the image caption may contain numerical information that aids zero-shot prediction. The Other type comprises a large and diverse set of question types, some of which focus on instance-level information; since image captions normally capture instance-level information, they help this answer type. We also notice that using image captions may hurt some categories of SNLI-VE, which may be due to the quality of the generated captions.
Generation vs. Ground Truth: Since not every dataset is well annotated by humans, we employ these two settings to test the generalizability of our proposed method. In the generation setting, we generate image captions with OFA and detect objects with Faster-RCNN. In the ground truth setting, as mentioned above, images in VQA and SNLI-VE are paired with ground truth captions. For VCR, images are not paired with human-annotated captions; however, 68% of the images in the VCR validation set also appear in VisualCOMET <cit.>, which provides ground truth captions, so we directly leverage captions from VisualCOMET for VCR. Although images in VCR lack captions, they are annotated with ground truth bounding boxes, so we run a ground-truth image region experiment for VCR; VQA and SNLI-VE have no ground truth bounding boxes. As Tab. <ref> shows, our method works well in situations without rich annotations, achieving similar performance in the generation and ground truth scenarios, which confirms its generalizability.
Model Scale: We believe that the model scale will affect the final result since larger models are able to better process visual and textual information. In our experiments, we mainly focus on two variants of CLIP, namely CLIP ViT-B/16 and CLIP ViT-L/14@336px. We also carry out experiments on CLIP Res50x16 in VQA task, which can be found in Tab. <ref>. We can observe that larger models can elevate the performance and all of our best results are achieved by using CLIP ViT-L/14@336px.
Number of Image Regions: In this subsection, we examine how the number N of selected image regions affects the zero-shot performance on the different vision-language tasks. For convenience, we run experiments on the Yes/No answer type of VQA, on SNLI-VE, and on the Q2A task of VCR. Full results are reported in Tab. <ref>; for better visualization, we normalize the results.
In Fig. <ref>, we observe that as the number of image regions grows, the performance on all three tasks first increases and then decreases. Selecting 5 image regions is optimal for VQA and SNLI-VE, while 12 regions are optimal for VCR. Visual fine-grained information helps CLIP and plays an important role in zero-shot prediction, since it provides fine details of the image; beyond a certain point, however, additional image regions introduce irrelevant visual information and degrade performance. In our experiments, we select 5 regions for VQA and SNLI-VE and 12 regions for VCR.
§ CONCLUSION
In this work, we propose a unified and fine-grained approach for vision-language tasks including VQA, SNLI-VE, and VCR. We outperform previous CLIP-based methods for zero-shot VQA. Plus, we are the first to empirically study CLIP's zero-shot ability for SNLI-VE and VCR, which achieves strong zero-shot performance. In addition to the benchmark comparison, we conduct extensive ablation studies confirming the significance of visual and textual fine-grained information and the generalizability of our proposed method.
§ LIMITATIONS
Although our proposed method is effective on three vision-language tasks, it still has some limitations. First, we utilize T5 to convert the question-answering format into declarative sentences in VQA; it works well in most cases but still faces out-of-coverage problems, which affect the subsequent zero-shot prediction of CLIP. More rules need to be designed for these special cases to achieve better conversion. Second, our clustering algorithm for SNLI-VE achieves strong zero-shot performance, but the clustering centroids are close to each other and the algorithm is sensitive to them; its robustness should be improved. Moreover, we leverage Faster-RCNN for visual fine-grained information extraction, so the detectable object attributes and classes are constrained to the relatively limited object set of Faster-RCNN, which may hinder further improvement from visual fine-grained information; Faster-RCNN could be replaced with a better vision module. Finally, since we only utilize CLIP in this paper, the zero-shot ability of other contrastive pre-training models could be explored in future work.
§ ETHICS STATEMENT
Many large-scale pre-trained models are used in our paper, including OFA, T5, RoBERTa, and CLIP. Our method relies heavily on CLIP, which is pre-trained on approximately 400M image-text pairs crawled from the Internet. Since the pre-training data is noisy, CLIP is likely to carry potential racial and gender biases. Therefore, anyone who finds our work interesting and would like to use it in a specific environment should check for potential bias before application. One advantage of our work is that we only utilize existing pre-trained models and do not need to train any new ones; compared to energy-consuming model training, our method is more environmentally friendly.
§ ACKNOWLEDGEMENTS
We thank anonymous reviewers for their comments.
This work is supported by the DARPA MCS program under Cooperative Agreement N66001-19-2-4032.
§ APPENDIX
§.§ Data Statistics
Following previous work, we use the val2014 split of VQAv2. In zero-shot SNLI-VE and VCR, we use the validation set.
§.§ Answer filtering for VQA
Answer filtering. As in TAP-C <cit.>, we first manually design the demonstrations and employ T5 to convert the question-answering format of VQA into a declarative template with the <extra_id_0> token. We then feed the concatenation of the demonstrations and the converted declarative statement, containing the <extra_id_0> token, into the T5 encoder. Next, the encoded features from the T5 encoder and the answer candidates are passed to the T5 decoder, which yields the probability of each answer candidate. We select the top K answers to replace the <extra_id_0> token in the template, generating K prompts that are fed into the CLIP text encoder.
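A minimal Hugging Face Transformers sketch of this scoring step is given below; scoring each candidate by its negative span-infilling loss is one plausible implementation of the T5 probability computation, and the template string and function name are illustrative:

import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tok = T5TokenizerFast.from_pretrained("t5-large")
t5 = T5ForConditionalGeneration.from_pretrained("t5-large").eval()

def filter_answers(template, candidates, k=10):
    """Score each candidate as the <extra_id_0> span filler and keep top k.

    template: declarative statement with a blank, e.g.
              "the man is eating <extra_id_0> ." (prepended with the
              few-shot demonstrations used for the format conversion)
    """
    inputs = tok(template, return_tensors="pt")
    scores = []
    for ans in candidates:
        # T5 span-corruption target format: "<extra_id_0> {span} <extra_id_1>"
        labels = tok(f"<extra_id_0> {ans} <extra_id_1>",
                     return_tensors="pt").input_ids
        with torch.no_grad():
            loss = t5(**inputs, labels=labels).loss  # mean token NLL
        scores.append(-loss.item())
    order = sorted(range(len(candidates)), key=lambda i: -scores[i])
    return [candidates[i] for i in order[:k]]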
Setting of hyperparameter K. Since we employ answer filtering to select the top K answers, K is an important hyperparameter. In Tab. <ref>, we show how the zero-shot performance on the VQA Number and Other types varies with K, running six and seven experiments on the two types, respectively. We observe that as K increases, the performance first rises and then falls. When K is small, many correct answers are removed outright by T5, making it impossible for CLIP to choose the right answer; conversely, when K is very large, the many extra answers are likely to disturb CLIP's zero-shot prediction. In our experiments, we select the top 10 answers for the VQA Other type and the top 4 answers for the VQA Number type.
§.§ More ablation studies of VQAv2
Since previous work uses CLIP RN50x16 for zero-shot VQA, we also conduct ablation studies on it. Results can be found in Tab. <ref>.
§.§ Clustering algorithm and centroids of SNLI-VE
Algo. <ref> is used in zero-shot SNLI-VE. After running Algo. <ref>, we obtain three clustering centroids, which can in fact be cached in advance. To achieve better performance, we tune the centroids; they are reported in Tab. <ref>. The effectiveness of Algo. <ref> rests on a relatively even data distribution. K-Means [https://en.wikipedia.org/wiki/K-means_clustering] could also be used here, but it likewise requires a relatively even data distribution. The validation split of SNLI-VE has 17,858 samples, which is not divisible by 3; however, we can assume there are 5,952, 5,953, and 5,953 samples in the entailment, neutral, and contradiction categories, respectively.
§.§ How # image regions affect performance
The full results are reported in Tab. <ref>; these are the values before the normalization used in Fig. <ref>. The table and figure show how the number N of selected image regions affects the zero-shot performance.
§.§ Zero-shot learning by only using textual fine-grained information
It is interesting to investigate the zero-shot performance when only textual fine-grained information is used, i.e., when only language models are employed for zero-shot prediction on all three vision-language tasks. All results are shown in Tab. <ref>. For VQA, we use T5-large (for answer filtering) and RoBERTa-large; for SNLI-VE and VCR, we only use RoBERTa-large. Visual information is not considered, and the textual fine-grained information comprises the image caption and the question in this setting. All results show that using only textual fine-grained information achieves fair performance. (Note that on SNLI-VE, using only ground truth textual fine-grained information can surpass the baseline performance, because the relation between the ground truth caption and the hypothesis is well annotated in SNLI <cit.>.)
|
http://arxiv.org/abs/2307.00736v1
|
20230703034719
|
Ultra-high Q alumina optical microresonators in the UV and blue bands
|
[
"Chengxing He",
"Yubo Wang",
"Carlo Waldfried",
"Guangcanlan Yang",
"Jun-Fei Zheng",
"Shu Hu",
"Hong X. Tang"
] |
physics.optics
|
[
"physics.optics",
"physics.app-ph"
] |
1Department of Electrical Engineering, Yale University, New Haven, CT 06520, USA
2Entegris Inc., Billerica, MA 01821, USA
3Energy Sciences Institute, Yale University, West Haven, CT 06516, USA
*hong.tang@yale.edu
UV and visible photonics enable applications ranging from spectroscopic sensing to communication and quantum information processing. Photonic structures in these wavelength regimes, however, tend to experience higher loss than their IR counterparts. In the near-UV band in particular, on-chip optical microresonators have not yet achieved a quality factor beyond 1 million.
Here we report ultra-low-loss photonic waveguides and resonators patterned from alumina thin films prepared by a highly scalable atomic layer deposition process.
We demonstrate ultra high Q factor of 1.5×10^6 at 390 nm,
a record value at UV bands, and 1.9×10^6 at 488.5 nm.
§ INTRODUCTION
UV and visible band integrated photonics has witnessed rapid progress in recent years. Applications such as atomic clocks<cit.>, biochemical sensing<cit.>, visible light communications<cit.>, quantum sensing<cit.>, and quantum information processing based on trapped ions<cit.> and atoms<cit.> all call for UV and blue band photonic integrated circuits with high scalability and low loss. Yet low-loss photonics at short wavelengths remains difficult to achieve, as material absorption increases dramatically when the photon energy approaches the material's bandgap, and Rayleigh scattering scales as λ^-4. One approach to reduce the loss at short wavelengths is to use very thin silicon nitride (Si_3N_4) waveguides cladded with low-loss silica<cit.>, so that the propagating mode is weakly confined. In this way, absorption inherent to the Si_3N_4 waveguide core is diluted, while scattering loss induced by waveguide sidewall roughness is also reduced owing to the finite sidewall height. However, devices employing Si_3N_4 as the waveguide core so far show strong absorption in the UV and blue bands.<cit.>
Further reduction of waveguide loss at short wavelengths requires a new waveguide core material that has a large bandgap while still maintaining a higher refractive index than the low-loss cladding material, usually silica, to provide good confinement. One such candidate is AlN, which has a large bandgap of 6.2 eV. In our previous work, we employed single-crystalline AlN as the waveguide core<cit.> and demonstrated a quality factor of 210 k at 390 nm, which was a significant advance for devices operating in the near-UV bands but still a moderate value compared to optical resonators operating at near-visible and IR wavelengths.
An alternative to AlN for short-wavelength passive photonic integrated platforms is amorphous alumina. Recent progress in deposition methods has greatly improved the quality of amorphous alumina films, which now demonstrate a bandgap comparable to bulk sapphire (7.0 - 8.3 eV for ALD alumina <cit.> vs. 8.8 eV<cit.>). Many reports on amorphous alumina films, deposited via either reactive sputtering or atomic layer deposition (ALD), have also confirmed very low loss at short wavelengths (< 0.3 dB/cm at 405 nm)<cit.> and their compatibility with photonic platforms, either standalone <cit.> or combined with other materials<cit.>. Recently, by interfacing a low-loss alumina waveguide with an InGaN semiconductor amplifier, a near-UV extended cavity diode laser (ECDL) was demonstrated<cit.>. Owing to the amorphous microstructure of these films, the deposition process places no requirements on the lattice structure of the substrate on which the film is grown, relaxing the choice of substrate material. Furthermore, both ALD and reactive sputtering are CMOS-compatible, paving the way for CMOS integration of amorphous-alumina-based photonics.
In this letter, we leverage an industrial ALD process to grow alumina as the waveguide core. This highly scalable process is capable of providing uniform growth coverage to substrates over 20" in diameter and can coat hundreds of 4" wafers in a single batch run. Because the absorption of UV and blue light is already low in alumina, the propagation mode can be fully supported in the waveguide core,
minimizing the scattering loss at top and bottom surfaces of the alumina film. With new waveguide core material and corresponding design principles, our resonators demonstrated ultra high Q of 1.5×10^6 at 390 nm and 1.9×10^6 at 488.5 nm, which are the highest quality factors reported at corresponding wavelengths for resonators featuring high confinement design, including previously demonstrated alumina resonator <cit.>.
§ DESIGN AND SIMULATION
We employ a shallow-etch geometry to minimize scattering loss from the sidewalls. The resonators are air-clad to increase the refractive index contrast with the alumina core, reducing the etch depth needed for confinement.
The etch depth is optimized by simulating, in Lumerical, the radiation loss of waveguides subject to different etch depths. For ring resonators with a 400 μm radius, radiation loss can be suppressed to less than 0.06 dB/cm at 488.5 nm and less than 0.001 dB/cm at 390 nm when the etch depth exceeds 80 nm of the 400 nm thick alumina film. During fabrication, the etch depth is targeted at 100 nm, deep enough that the resonators are not radiation-loss limited.
The waveguide width is set at 4.5 μm so that the outer sidewall provides most of the confinement for the propagating mode, and the overlap between the propagating mode, particularly the TE00 mode, and the inner sidewall is minimal, further reducing scattering loss. Note that such a wide waveguide supports multiple propagating modes; however, as Fig. 2 shows, coupling to the higher-order modes can be greatly suppressed by optimizing the bus-to-resonator coupling.
The coupling between the straight bus waveguide and the ring resonator is optimized by varying the bus waveguide width and tuning the gap between the bus waveguide and the ring resonator to meet the phase-matching condition between the TE00 modes in the coupling region, thereby reducing coupling loss and suppressing coupling to higher-order modes. We use a combination of FIMMWAVE simulations and experimental data to optimize the parameters. The optimal coupling condition for 390 nm light is a 0.65 μm wide bus waveguide and a 1 μm gap between the bus waveguide and the ring resonator; for 488.5 nm light, it is a 0.75 μm wide bus waveguide and a 1.1 μm gap.
§ DEVICE FABRICATION AND RESULTS
The fabrication starts with 4 μm of wet thermal oxide grown on silicon wafers. The test wafers were then coated at Entegris with a blanket layer of atomic layer deposition (ALD) amorphous alumina. The ALD deposition of the alumina coating was performed by sequential cycling of TMA/H_2O with pulse times between 0.05 s and 0.15 s and nitrogen purge times between 18 s and 20 s at temperatures between 180 ^∘C and 250 ^∘C, using a 20" diameter crossflow thermal ALD coating system custom-built at Entegris in a class 10,000 clean room. The growth rate of this deposition recipe is approximately 1.1 Å/cycle. The thickness of the alumina coating was measured to be a nominal 420 nm by spectroscopic reflectometry, using an Angstrom Sun SR300 system. The ring resonators and associated bus waveguides are defined with a 100 kV electron-beam lithography system (Raith EBPG 5200+) with negative FOx-16 resist. To mitigate electron charging effects due to the highly insulating alumina and silica layers, 200 nm of poly(4-styrene sulfonic acid) (PSSA) is spun on top of the FOx-16 resist before 10 nm of gold is sputtered to provide grounding for stray electrons. The PSSA is water soluble, which aids the removal of the gold after e-beam lithography. After removing the gold by dipping in water, the chip is developed in 25 percent tetramethylammonium hydroxide (TMAH) developer. The pattern is then transferred to the alumina layer using an Oxford PlasmaPro 100 Cobra inductively coupled plasma reactive ion etching (ICP-RIE) system with a BCl_3-based etching recipe. Leftover FOx-16 resist is removed by dipping the chip in 10:1 buffered oxide etch for 10 seconds. To further reduce the absorptive loss in the alumina waveguide, the chip is annealed in atmosphere at 500 ^∘C (for the 390 nm chip) or 600 ^∘C (for the 488.5 nm chip) for 5 hours, which achieves the lowest loss while avoiding crystallization, reported to take place above 800 ^∘C<cit.>.
To characterize the ring resonators, we construct a sweeping blue/UV laser by frequency doubling a Ti:Sapphire laser (M2 SolsTiS, 700-1000 nm) to 390 nm and 488.5 nm. The Ti:Sapphire laser is locked to an external cavity, ensuring <50 kHz linewidth, and its wavelength is precisely determined by a 0.1 pm resolution wavemeter, corresponding to 0.05 pm resolution after frequency doubling. To create a sweeping ∼390 nm laser, the ∼780 nm pump from the Ti:Sapphire laser is coupled to a lithium triborate (LBO) doubling crystal in a resonant cavity (M2 ECD-X). To create a sweeping ∼488.5 nm laser, the ∼977 nm pump from the Ti:Sapphire laser is sent through a magnesium-doped periodically poled lithium niobate (MgO:PPLN, Covesion MSHG 976-0.5-30) crystal for frequency doubling to 488.5 nm. The MgO:PPLN crystal sits in an oven whose temperature is adjusted as the pump frequency is scanned, to maintain the phase-matching condition of the MgO:PPLN crystal for maximal frequency-doubling efficiency. Note that the output power from the Ti:Sapphire laser is wavelength dependent as the étalon spacing in the resonant cavity is continuously tuned. The extended transmittance spectrum covering two FSRs is therefore stitched from four continuous scans around 390 nm (five around 488.5 nm), with the étalon spacing retuned for maximum output power at the beginning of each piecewise scan.
Fig. 2 shows the transmittance of the alumina ring resonator at ∼390 nm and ∼488.5 nm, respectively. For the extended transmittance spectrum, the pump Ti:Sapphire laser is scanned at 500 MHz/s, corresponding to a scan speed of 1 GHz/s after frequency doubling. For the zoomed-in resonances depicted in the insets, to ensure high wavelength resolution, the pump is scanned at 200 MHz/s, corresponding to 400 MHz/s after doubling. Multiple sets of resonance peaks are observed at both 390 nm and 488.5 nm, since the 4.5 μm wide waveguide supports multiple TE modes, even though the coupling is optimized for the TE00 mode and coupling to other modes is suppressed. At 390 nm, the TE00 modes exhibit an FSR of 65.6 GHz, while the TE10 modes have an FSR of 66.3 GHz, as predicted by the 400 μm radius ring geometry; even higher-order TE modes are not prominent. One of the TE00 resonance peaks shows a loaded Q factor of 1.2 M with an extinction ratio of 2.4 dB. Using the formula Q_int=2Q_L/(1+√(10^-ER/10)) for under-coupled conditions (here Q_int is the intrinsic Q, Q_L the loaded Q, and ER the extinction ratio in dB), we obtain an intrinsic Q of 1.5 M for this resonance. For the TE modes at ∼488.5 nm, the FSR is 68.5 GHz for the TE00 mode and 68.6 GHz for the TE10 modes, with one of the TE00 resonance peaks demonstrating a high loaded Q of 1.4 M and an extinction ratio of 3.2 dB, corresponding to an intrinsic Q of 1.9 M.
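In code, the conversion reads as follows; plugging in the rounded loaded Q and extinction ratio quoted above gives values somewhat below the quoted intrinsic Qs, presumably because the quoted inputs are themselves rounded, so the sketch illustrates the formula rather than reproducing the exact tabulated numbers:

import math

def intrinsic_q(q_loaded, er_db):
    """Intrinsic Q from loaded Q and extinction ratio, under-coupled case:
    Q_int = 2 * Q_L / (1 + sqrt(10**(-ER/10)))."""
    return 2.0 * q_loaded / (1.0 + math.sqrt(10.0 ** (-er_db / 10.0)))

print(intrinsic_q(1.2e6, 2.4))  # ~390 nm resonance
print(intrinsic_q(1.4e6, 3.2))  # ~488.5 nm resonance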
The current Q of our devices is likely limited by the residual absorption of alumina and scattering from the remaining sidewall roughness, as the radiation-loss-limited Q of the resonator is calculated to be beyond 10^10 for both the TE00 and TE10 modes at wavelengths shorter than 500 nm. Since the modal absorption is the same for the TE00 and TE10 modes, the Q difference between the two sets of resonances can be attributed to coupling loss and scattering loss. The waveguide is also capable of transmitting TM modes; however, the confinement of the bus waveguide is weak for TM modes, and the radiation-loss-limited Q of the ring resonator for TM modes is calculated to be <10 M, so we did not perform further measurements of the TM-mode transmittance.
In Fig. 3, we compare the performance of our alumina ring resonator to other recent works on UV and blue band photonics. At wavelengths larger than 450 nm, low-confinement Si_3N_4 ring resonators with >1.5 mm radius still hold the record for quality factors<cit.>. At shorter wavelengths, absorption inherent to the Si_3N_4 waveguide core drastically impacts the performance of these devices. AlN has been the material of choice for nanophotonic devices operating in the UV-blue band, and progress in AlN film quality boosted the quality factor of AlN-based resonators to 2.1×10^5 at 390 nm<cit.>. Alumina film deposited by reactive sputtering and ALD boasts an even larger bandgap than AlN and raised the quality factor record to 4.7×10^5 at 405 nm<cit.>. With an ALD-deposited alumina film and optimized geometry, our alumina ring resonator raises the UV-band quality factor record once again, to 1.5×10^6 at 390 nm.
From the Q measurements, the propagation loss of the current ring resonator is derived to be 0.84 dB/cm at 390 nm and 0.51 dB/cm at 488.5 nm, based on the expression α = 4.343 × 2π n_g/(Q_int λ), where n_g is the group refractive index obtained through n_g = c/(2π R_ring × FSR). The superior performance of the ring resonator in this paper can be attributed to the implementation of low-loss amorphous alumina as the waveguide core material, as well as to the ring geometry, which utilizes shallow etching to reduce the scattering loss.
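As a sanity check of these relations, the short sketch below (ours, not the analysis code used for this work) chains the quoted formulas together, using the rounded 390 nm values reported above.

import numpy as np

C = 2.99792458e8  # speed of light (m/s)

def intrinsic_q(q_loaded, er_db):
    # Q_int = 2*Q_L / (1 + sqrt(10^(-ER/10))), valid for under-coupling.
    return 2.0 * q_loaded / (1.0 + np.sqrt(10.0 ** (-er_db / 10.0)))

def group_index(radius_m, fsr_hz):
    # n_g = c / (2*pi*R_ring*FSR)
    return C / (2.0 * np.pi * radius_m * fsr_hz)

def loss_db_per_cm(q_int, wavelength_m, n_g):
    # alpha = 4.343 * 2*pi*n_g / (Q_int * lambda), converted from dB/m to dB/cm.
    return 4.343 * 2.0 * np.pi * n_g / (q_int * wavelength_m) / 100.0

# Rounded values quoted in the text for the TE00 mode near 390 nm:
q_int = intrinsic_q(1.2e6, 2.4)
n_g = group_index(400e-6, 65.6e9)
print(q_int, n_g, loss_db_per_cm(q_int, 390e-9, n_g))
# Compare with the intrinsic Q (~1.5 M) and loss (~0.84 dB/cm) quoted above.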
§ CONCLUSION
In conclusion, we demonstrate ultra-high-Q UV and blue band ring resonators featuring low-loss ALD alumina as the waveguide core and an optimized geometry in which the propagating mode is strongly confined within the alumina core. This work pushes the intrinsic Q record of high-confinement ring resonators to 1.5 M at 390 nm and 1.9 M at 488.5 nm, corresponding to low propagation losses of only 0.84 dB/cm and 0.51 dB/cm, respectively. Our results present an important solution, in terms of material choice and waveguide design, for achieving low-loss integrated photonics in the UV and blue band.
Funding
This work is funded in part by the Office of Naval Research (ONR) grant N00014-20-1-2693. The materials used in this work were developed under the support of the Department of Energy under grant No. DE-SC0019406.
Acknowledgments
The authors thank Michael Rooks, Yong Sun, Lauren McCabe, and Kelly Woods for support in the cleanroom and assistance in device fabrication.
Disclosures
The authors declare no conflicts of interest.
Data availability: Data are available upon reasonable request.
|
http://arxiv.org/abs/2307.02132v1
|
20230705092046
|
Going Retro: Astonishingly Simple Yet Effective Rule-based Prosody Modelling for Speech Synthesis Simulating Emotion Dimensions
|
[
"Felix Burkhardt",
"Uwe Reichel",
"Florian Eyben",
"Björn Schuller"
] |
cs.SD
|
[
"cs.SD",
"eess.AS"
] |
We introduce two rule-based models to modify the prosody of speech synthesis in order to modulate the expressed emotion. The prosody modulation is based on the Speech Synthesis Markup Language (SSML) and can be used with any commercial speech synthesizer. The models as well as the optimization result are evaluated against human emotion annotations. Results indicate that, with a very simple method, both dimensions, arousal (.76 UAR) and valence (.43 UAR), can be simulated.
§ INTRODUCTION
Affect-modulated speech synthesis of a text can be achieved, amongst others, by modifying the prosody of the utterance accordingly <cit.>. In this work,
emotions will be represented in terms of the dimensional approach of Schlosberg <cit.>,
who identified the three emotion dimensions
valence, arousal, and dominance. Valence is referred to as pleasure in the following.
For this paper, we neglect the dominance dimension in order to focus on the main topic: the control of emotional expression in speech synthesis with a very limited set of prosodic rules.
§.§ Acoustic correlates of emotions
For each of these dimensions, several acoustic correlates have been
found. These findings are summarized in <cit.>,
<cit.>, and <cit.> (for further
details please see the references therein).
High as opposed to low arousal is characterized by higher speech rate,
higher intensity mean and variability, higher fundamental frequency
(F_0) mean and variability, higher spectral balance indicating
increased vocal effort, and a higher first formant due to an
increased mouth opening.
Positive as opposed to negative pleasure is amongst others
characterized by higher speech rate and by lower intensity mean and
variability. In addition, pleasure is positively correlated with the
second formant due to more lip spreading caused by smiling
<cit.>. The relation between pleasure and pitch is
more complicated as found by <cit.>: Higher F_0 characterizes both
elation joy (positive) and fear (negative), while comfort (positive)
and boredom (negative) are both reflected by lower F_0
<cit.>.
In general, many results from the literature, for example <cit.>, indicate that it is difficult to predict and simulate the valence dimension by acoustic cues alone, as opposed to linguistic ones, an assumption that is confirmed also in this investigation.
§.§ Emotions in speech synthesis
There are many articles that deal with the simulation of emotional speech and even many that review them, for example <cit.>; we refer to these for a deeper discussion.
Historically, the first algorithms to simulate emotional expression were based on prosody rules and categorical emotions. Later, in line with new statistical techniques for speech synthesis, data-based approaches were used, and emotional dimensions as well as speaking styles were targeted. Triantafyllopoulos et al. review deep-learning-based approaches in <cit.>.
Marc Schröder was the first one to target emotional dimensions with prosody rules in his dissertation <cit.>, and his work is one of the foundations of this paper.
An approach to simulate emotion dimensions with learned features was presented by Hamada <cit.> by mapping acoustic features to the valence-arousal space.
Later, Stanton et al. <cit.>
showed
how to target the latent space within a Tacotron architecture to generate expressive speaking styles.
§.§ Emotional simulation with SSML
Within the scope of the European H2020 EASIER project <cit.>, we faced the problem of enabling a commercial speech synthesizer, not yet emotional but available in many languages, to simulate emotional expression.
The obvious way to do this, in a way that is agnostic to a specific synthesizer, is to utilize the W3C's Speech Synthesis Markup Language (SSML) <cit.> which is, at least in parts, interpreted by almost all speech synthesis engines that are available.
This has also been done for categorical emotions by Shaikh et al. <cit.>.
SSML allows, amongst others, for specifying prosodic modifications of an utterance along the prosodic dimensions pitch, energy, and duration. Thus, speech synthesis can be affect-modulated by mapping emotion dimensions to prosodic parameters based on the findings above, and by passing these parameters to the text-to-speech (TTS) engine via SSML.
We tested this for the commercial Google Speech API[<https://cloud.google.com/text-to-speech>] and the open source MARY TTS engine <cit.>, where the support is only partial.
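For illustration, the snippet below assembles such a prosody markup string in Python; the <prosody> element and its pitch and rate attributes are standard SSML, while the wrapper function and the example offsets are ours and purely illustrative.

def to_ssml(text: str, pitch_pct: float, rate_pct: float) -> str:
    # Wrap `text` in an SSML <prosody> element with relative
    # pitch and rate modifications given in percent.
    return (
        '<speak>'
        f'<prosody pitch="{pitch_pct:+.0f}%" rate="{rate_pct:+.0f}%">'
        f'{text}'
        '</prosody>'
        '</speak>'
    )

# E.g., a high-arousal rendering of one of our test sentences:
print(to_ssml("In sieben Stunden wird es soweit sein.", pitch_pct=12, rate_pct=20))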
This paper is structured as follows:
In section <ref>, we
introduce two rule-based model variants in order to adapt the prosody of an
utterance accordingly. In sections <ref> and <ref>, we describe the perceptual evaluation procedure and their results, respectively.
Section <ref> concludes the paper with an outlook.
Contributions of this paper are as follows:
* We present two approaches to simulate emotional dimensions with SSML, which has to our knowledge not been done before.
* We simulate the valence dimension by a very simple pitch manipulation approach.
§ RULE-BASED AFFECT MODULATION
In our study, emotion scores are mapped to speech prosody parameters in two rule-based
algorithms based on the findings introduced in section <ref>.
§.§ Method Syntact
As a naive baseline, we implemented the prosody rules such that both emotion dimensions are positively correlated with pitch and speech rate. To distinguish between arousal and valence, we simply assigned speech rate to arousal and pitch to valence, an approach that worked surprisingly well.
Of course, we cannot be sure whether the outcomes are specific to the Google synthesizer that was used to generate the samples.
§.§ Method Schroeder
To try out a more complex approach than Syntact, we implemented a very reduced version of the approach that Marc Schröder described in his dissertation <cit.>.
To this end, we analyzed Schroeder's MARY TTS <cit.> sources [<https://github.com/marytts/marytts/blob/79e4edef3f478dcef0aad3609ba77090e91f0b6d/marytts-client/src/main/resources/marytts/tools/emospeak/emotion-to-mary.xsl>].
According to the sources, we extracted the rules displayed in Listing <ref>, originally in Java, and implemented them in the Python language.
[caption=Prosody rules according to the MARY emotion module]
pitch => 0.3 * arousal + 0.1 * valence - 0.1 * power
pitch-dynamics => -15 + 0.3 * arousal - 0.3 * power
range (in Semitones) => 4 + 0.04 * arousal
range-dynamics (min 100) => -40 + 1.2 * arousal + 0.4 * power
accent-prominence => 0.5 * arousal - 0.5 * valence
preferred-accent-shape => when valence < -20: falling,
when valence > 40: alternating, else rising
accent-slope => 1 * arousal - 0.5 * valence
rate => 0.5 * arousal + 0.2 * valence
number-of-pauses => 0.7 * arousal
pause-duration => -0.2 * arousal
vowel-duration => 0.3 * valence + 0.3 * power
nasal-duration => 0.3 * valence + 0.3 * power
liquid-duration => 0.3 * valence + 0.3 * power
plosive-duration => 0.5 * arousal - 0.3 * valence
fricative-duration => 0.5 * arousal - 0.3 * valence
volume => 50 + 0.33 * arousal
To implement this in SSML, we filtered the list for pitch and speech rate global values resulting in the two rules shown in Listing <ref>.
These rules had already been tested in the scope of a project to generate an appropriate robot voice for children with the autistic spectrum <cit.>.
The resulting values were then scaled as described in the next section. Of course, this is only a very small subset of the rules defined by Marc Schröder, which might well be the main reason that this approach did not prove to be very successful.
[caption=Reduced prosody rules according to the MARY emotion module]
pitch => 0.3 * arousal + 0.1 * valence - 0.1 * power
rate => 0.5 * arousal + 0.2 * valence
§.§ Mapping from dimensions to rules
We adapted the emotion-to-prosody mapping approach of <cit.>:
Values for the emotion dimensions arousal and pleasure are mapped to the pitch, rate, and volume attributes of the SSML element <prosody>. This mapping is carried out in the following way:
* rescale the scores of emotion dimension e ∈ {pleasure, arousal} to the range [-1, 1]
* calculate each of the prosody parameters y ∈{pitch, rate, volume} by the
following linear combination
y = ∑_{e ∈ {pleasure, arousal}} w_{e,y} · e
* rescale y to a range defined by still natural sounding
minimum and maximum values of the respective prosody dimension.
For both variants, Schroeder and Syntact, all weights w_{e,y} of emotion dimension e for the calculation of the prosodic dimension y were set manually, based on perceptual expert judgments of how well the synthesized speech prosodically matches the intended emotions. <cit.> showed that emotion recognition performance can be increased by adding emotional speech samples synthesized this way to the training data.
For reproducibility, all code is open sourced in the Syntact GitHub repository[<https://github.com/felixbur/syntAct>].
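To make the mapping concrete, the following sketch implements the three steps above with the reduced rules of Listing <ref>; the target ranges for the pitch and rate offsets are illustrative placeholders, not the values used in our experiments.

def rescale(v, lo, hi):
    # Linearly map v from [0, 1] to [lo, hi].
    return lo + (hi - lo) * v

def emotion_to_prosody(arousal, valence, power=0.0):
    # Step 1: rescale emotion scores from [0, 1] to [-1, 1].
    a = 2.0 * arousal - 1.0
    v = 2.0 * valence - 1.0
    # Step 2: linear combination according to the reduced rules.
    pitch = 0.3 * a + 0.1 * v - 0.1 * power
    rate = 0.5 * a + 0.2 * v
    # Step 3: rescale to natural-sounding percentage offsets
    # (the ranges below are placeholders).
    return {
        "pitch_pct": rescale((pitch + 1.0) / 2.0, -20.0, 20.0),
        "rate_pct": rescale((rate + 1.0) / 2.0, -30.0, 30.0),
    }

print(emotion_to_prosody(arousal=0.9, valence=0.5))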
§ PERCEPTUAL EVALUATION
We conducted a perception experiment to validate the effectiveness of the approaches.
We used the Google speech API as a speech synthesizer, with the standard male and female voices.
As text material, we used two short sentences of the Berlin Emotional Database <cit.>, which are meant to be emotionally undecided:
* ”In sieben Stunden wird es soweit sein.“ (it will happen in seven hours.)
* ”Heute Abend könnte ich es ihm sagen.“ (i could tell him tonight.)
The idea is that these sentences are neither too mundane nor already have a linguistic emotional connotation.
These four combinations (two sexes times two sentences) were synthesised with both methods and with all nine combinations of three valence and arousal levels (.1, .5, .9, with .5 being the neutral level), resulting in 72 samples (2·2·2·9).
The samples were annotated by 10 subjects employed by audEERING GmbH using the I-hear-U-play platform <cit.>.
The labelers were 6 women and 4 men of mean age 34.87 years with 13.79 years standard deviation.
After judging 10 test samples to get acquainted with the task, they answered for each sample the following two questions:
* ”Please rate the arousal level on a scale of low, mid, and high.“
* ”Please rate the valence level on a scale of negative, neutral, and positive.“
To measure the inter-rater agreement, we used Fleiss' kappa; the results are depicted in Table <ref>. While the raters could agree on the arousal annotations, for valence there is only slight agreement when it is simulated by the Syntact method and none for the Schroeder method.
§ RESULTS
The results of the perception experiment are shown in Table <ref>. The confusion matrix for the arousal dimension levels is shown in Figure <ref> and for the valence dimension in Figure <ref>.
The results were computed based on all listeners' ratings, without a unified label.
As can be seen, the simulation of arousal was successful with both approaches but to a clearly higher degree with the simpler Syntact method.
For the Schroeder method, low arousal is often confused with the neutral versions and the neutrally meant samples with high arousal.
With respect to valence, we must admit that we were only partly successful with the Syntact method.
As outlined above, this may largely be due to the fact that valence is mostly conveyed through linguistic information. This has recently been shown again for deep acoustic representations, which succeed in better automatic valence recognition from speech due to their inherent encoding of linguistics <cit.>.
Nonetheless, we think this is a valuable finding because this method basically hypothesizes that valence correlates positively with pitch which is an interesting approach based on its simplicity.
The samples generated by the Schroeder method were labeled as neutral or high valence by the majority of the listeners, an outcome that perhaps indicates that the “normal” expression of the Google voices is rather friendly.
As discussed in Section <ref>, it is quite difficult to simulate the valence dimension by acoustic cues alone and accordingly, we are satisfied to have reached even a partial success.
§ CONCLUSION AND OUTLOOK
This paper investigated two methods to simulate emotional expression in speech synthesis by controlling prosody with SSML.
The chosen method, following a (strongly) reduced version of Marc Schröder's work, did not outperform our rather naive baseline.
Of course, we cannot be sure if the outcomes are specific to the Google synthesizer that was used to generate the samples.
Hence, a more general investigation that includes several speech engines will remain future work.
Also, it is much more promising to learn emotion-to-expression rules from data than to determine them manually based on isolated trials, amongst others because emotional expression is, at least to a degree, culture-specific, and the same rules cannot be applied in all cultural and social contexts. This has, to our knowledge, not yet been done for SSML-based approaches and also remains future work.
Thirdly, we restricted the investigation to two dimensions: valence and arousal. Future studies will take at least the dominance dimension into account, which is important to distinguish, for example, anger from fear.
It further appears interesting to measure how automatic speech emotion recognisers would recognise such rule- and SSML-based samples. In addition, one could evaluate whether they could be used for model augmentation, as was first suggested in <cit.>.
On the opposing end – and likewise closing the circle between analysis and synthesis, one could implement a related rule- and SSML-based recognition of emotion from speech. Presumably, however, this would require some form of speaker normalisation grounded in neutral speech, hence, requiring an enrolment procedure.
§ ACKNOWLEDGEMENTS
This research has been partly funded by the European SHIFT (MetamorphoSis of cultural Heritage Into augmented hypermedia assets For enhanced accessibiliTy and inclusion) project (Grant Agreement number: 101060660).
essv
|
http://arxiv.org/abs/2307.02876v1
|
20230706092547
|
Hypergraphs with arbitrarily small codegree Turán density
|
[
"Simón Piga",
"Bjarne Schülke"
] |
math.CO
|
[
"math.CO",
"05C65 05D99"
] |
Let k≥ 3.
Given a k-uniform hypergraph H, the minimum codegree δ(H) is the largest d∈ N such that every (k-1)-set of V(H) is contained in at least d edges.
Given a k-uniform hypergraph F, the codegree Turán density γ(F) of F is the smallest γ∈ [0,1] such that every k-uniform hypergraph on n vertices with δ(H)≥ (γ + o(1))n contains a copy of F.
Similarly as other variants of the hypergraph Turán problem, determining the codegree Turán density of a hypergraph is in general notoriously difficult and only few results are known.
In this work, we show that for every >0, there is a k-uniform hypergraph F with 0<γ(F)<.
This is in contrast to the classical Turán density, which cannot take any value in the interval (0,k!/k^k) due to a fundamental result by Erdős.
§ INTRODUCTION
A k-uniform hypergraph (or k-graph) H consists of a vertex set V(H) together with a set of edges E(H)⊆ V(H)^(k)={S⊆ V(H):| S| =k}.
Given a k-graph F and n∈N, the Turán number of n and F, ex(n,F), is the maximum number of edges an n-vertex k-graph can have without containing a copy of F.
Since the main interest lies in the asymptotics, the Turán density π(F) of a k-graph F is defined as
π(F) = lim_{n→∞} ex(n,F)/\binom{n}{k} .
Determining the value of π(F) for k-graphs (with k≥ 3) is one of the central open problems in combinatorics.
In particular, the problem of determining the Turán density of the complete 3-graph on four vertices, i.e., π(K_4^(3)), was asked by Turán in 1941 <cit.> and Erdős <cit.> offered 1000$ for its resolution.
Despite receiving a lot of attention (see for instance the survey by Keevash <cit.>), this problem, and even the seemingly simpler problem of determining π(K_4^(3)-), where K_4^(3)- is the K_4^(3) minus one edge, remain open.
Several variations of this type of problem have been considered, see for instance BCL:21,Erdossos,Christiansurvey and the references therein.
The variant that we are concerned with here asks how large the minimum codegree of an F-free k-graph can be.
Given a k-graph H=(V,E) and S⊆ V, the degree d(S) of S (in H) is the number of edges containing S, i.e., d(S)=|{e∈ E:S⊆ e}|.
The minimum codegree of H is defined as δ(H)=min_x∈ V^(k-1)d(x).
Given a k-graph F and n∈N, Mubayi and Zhao <cit.> introduced the codegree Turán number ex_co(n,F) of n and F as the maximum d such that there is an F-free k-graph H on n vertices with δ(H)≥ d.
Moreover, they defined the codegree Turán density γ(F) of F as
γ(F) = lim_n→∞ex_co(n,F)/n
and proved that this limit always exists.
It is not hard to see that γ(F) ≤π(F) .
The codegree Turán density of a family ℱ of k-graphs is defined analogously.
Similarly as for the Turán density, determining the exact codegree Turán density of a given hypergraph can be very difficult and so it is only known for very few hypergraphs (see the table in <cit.>).
In this work, we show that there are k-graphs with arbitrarily small but strictly positive codegree Turán densities.
For every ξ>0 and k≥ 3, there is a k-graph F with 0<γ(F)<ξ .
Note that this is in stark contrast to the Turán density and the uniform Turán density, another variant of the Turán density that was introduced by Erdős and Sós <cit.>.
Regarding the former, a classical result by Erdős <cit.> states that for no k-graph the Turán density is in the interval (0,k!/k^k).
Regarding the latter Reiher, Rödl, and Schacht <cit.> proved that for no 3-graph the uniform Turán density is in (0,1/27).
Mubayi and Zhao <cit.> defined
Γ^(k) := {γ(F) : F is a k-graph} ⊆ [0,1]
and
Γ̃^(k) := {γ(ℱ) : ℱ is a family of k-graphs} ⊆ [0,1] .
We remark that Γ^(k) ⊆ Γ̃^(k) and that similar sets have been studied for the classical Turán density (see, for instance, <cit.>).
Mubayi and Zhao <cit.> showed that Γ̃^(k) is dense in [0,1] and asked if this is also true for Γ^(k).
Their proof for Γ̃^(k) is based on showing that zero is an accumulation point of Γ̃^(k).
Theorem <ref> implies the same for Γ^(k).
Zero is an accumulation point of Γ^(k).
Given a k-graph H=(V,E) and a subset of vertices A={v_1,…, v_s}⊆ V, we omit parentheses and commas and simply write A=v_1⋯ v_s.
For the proof of Theorem <ref>, we consider the following hypergraphs.
For integers ℓ≥ k≥ 2, we define the k-uniform zycle of length ℓ as the k-graph Z_ℓ^(k) given by
V(Z_ℓ^(k)) = {v_i^j : i∈[ℓ], j∈[k-1]}, and
E(Z_ℓ^(k)) = {v_i^1 v_i^2 ⋯ v_i^{k-1} v_{i+1}^j : i∈[ℓ], j∈[k-1]} ,
where the sum of indices is taken modulo ℓ.
Observe that Z_ℓ^(k) has (k-1)ℓ vertices and (k-1)ℓ edges.
Moreover, Z_ℓ^(2) = C_ℓ.
When k∈ N is clear from the context, we omit it in the notation.
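To make the definition concrete, the following short sketch (ours, purely illustrative) enumerates the vertices and edges of Z_ℓ^(k) and checks the counts observed above.

def zycle(ell, k):
    # Vertices are pairs (i, j) with i in [ell] and j in [k-1]; every edge
    # is a block {v_i^1, ..., v_i^{k-1}} extended by one vertex of block i+1.
    vertices = {(i, j) for i in range(ell) for j in range(k - 1)}
    edges = set()
    for i in range(ell):
        block = tuple((i, j) for j in range(k - 1))
        for j in range(k - 1):
            edges.add(frozenset(block + (((i + 1) % ell, j),)))
    return vertices, edges

V, E = zycle(ell=5, k=3)
assert len(V) == len(E) == (3 - 1) * 5  # (k-1)*ell vertices and edges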
The following bounds on the codegree Turán density of zycles imply Theorem <ref>.
Let k≥ 3. For every d∈ (0,1], there is an ℓ∈ N such that
1/(2(k-1)^ℓ) ≤ γ(Z_ℓ) ≤ d .
In fact we show that γ (Z_ℓ)>0 for every ℓ≥ 3 (see Lemma <ref>).
Finally, we prove that any proper subgraph of Z^(3)_ℓ has codegree Turán density zero.
Let Z^(3)-_ℓ be the 3-graph obtained from Z^(3)_ℓ by deleting one edge.
Let ℓ≥ 3. Then
γ(Z_ℓ^(3)-)=0 .
To prove Theorem <ref>, we generalise a method developed by the authors together with Sales in <cit.>.
§ PROOF OF THEOREM <REF>
Given a k-graph H=(V,E), we define the neighbourhood of x∈ V^(k-1) as
N(x)={v∈ V:x∪{v}∈ E} .
Given a (k-1)-subset of vertices e∈ V^(k-1), we define the back neighbourhood of e and the back degree of e, respectively, by
N(e) = {f∈ V^(k-1) : f∪{v}∈ E for every v∈ e} and d(e) = |N(e)| .
Moreover, given a k-graph H and two disjoint (k-1)-sets of vertices e,f∈ V(H)^(k-1), we write e ▹ f to mean e∈ N(f) .
Thus, it is easy to see that Z_ℓ can be viewed as a sequence of (k-1)-sets of vertices e_1,…, e_ℓ such that e_i▹ e_i+1 for every i∈ [ℓ] (where the sum is taken modulo ℓ).
We split the proof in the lower and upper bound.
§.§ Upper bound
Here we prove the following lemma that yields the upper bound in Theorem <ref>.
Let k≥ 3.
For every d∈ (0,1], there is a positive integer ℓ∈ N such that
γ (Z_ℓ) ≤ d .
We will make use of the following lemma due to Mubayi and Zhao <cit.>.
Fix k≥ 2.
Given ε, α>0 with α+ε<1, there exists an m_0∈N such that the following holds for every n-vertex k-graph H with δ(H) ≥ (α+ε)n.
For every integer m with m_0 ≤ m ≤ n, the number of m-sets S⊆V(H) satisfying δ(H[S]) ≥ (α+ε/2)m is at least \frac{1}{2}\binom{n}{m}.
For positive integers f, c and a k-graph F on f vertices, denote the c-blow-up of F by F(c).
This is the f-partite k-graph F(c)=(V, E) with V = V_1 ∪̇…∪̇V_f, |V_i| = c for 1≤ i ≤ f, and E = {v_i_1⋯ v_i_k : v_i_j∈ V_i_j for every j∈[k] and i_1,…, i_k ∈ E(F)}.
By cyclically going around the vertices, it is easy to check that the blow-up of a zycle of length r contains zycles whose length is a multiple of r.
For k,r≥ 3 and c∈ N, we have Z_cr⊆ Z_r(c).
The following supersaturation result follows from a standard application of Lemma <ref> combined with a classical result by Erdős <cit.>.
Let t,k,c∈ N with k≥ 2 and let ℱ={F_1, … F_t} be a finite family of k-graphs with | V(F_i)|=f_i for all i∈ [t].
For every ε>0, there exists a ζ>0 such that for sufficiently large n∈N, the following holds.
Every n-vertex k-graph H with δ(H) ≥ (γ(ℱ)+ε)n
contains ζ\binom{n}{f_i} copies of F_i for some i∈[t].
Consequently, H contains a copy of F_i(c).
Given t,k,c and ε>0, let m_0∈N be given by Lemma <ref>, and let C∈N with C^{-1} ≪ c^{-1}.
Let m∈N with m^{-1} ≪ ε, m_0^{-1}, C^{-1}, f_i^{-1}, k^{-1}, t^{-1}, and set
ζ = \frac{1}{2t\binom{m}{max_i f_i}} .
Now let n∈N be sufficiently large, i.e., n^-1≪ζ.
Let H be given as in the statement of the lemma.
Due to Lemma <ref>, at least \frac{1}{2}\binom{n}{m} induced m-vertex subhypergraphs of H have minimum codegree at least (γ(ℱ)+ε/2)m.
Since m is sufficiently large, each of those subgraphs will contain a copy of a hypergraph in ℱ.
Therefore, there exists an i∈[t] such that there are at least \frac{1}{2t}\binom{n}{m} induced m-vertex subgraphs of H containing a copy of F_i.
Set F=F_i and f=f_i, and define an auxiliary f-uniform hypergraph G_F by V(G_F)=V(H) and E(G_F)={S∈ V(H)^(f) F⊆ H[S]}.
By the counting above, we have
|E(G_F)| ≥ \frac{1}{2t}\binom{n}{m} / \binom{n-f}{m-f} = \frac{1}{2t\binom{m}{f}}\binom{n}{f} ≥ ζ\binom{n}{f} .
A result by Erdős <cit.> implies that G_F contains a copy of K_f^(f)(C).
Each edge of K_f^(f)(C) corresponds to (at least) one embedding of F into H, in one of the at most f! possible ways that F could be embedded into the f vertex classes of K_f^(f)(C) (viewed as vertex sets of H).
Thus, when colouring the edges of K_f^(f)(C) accordingly, Ramsey's theorem entails that there is a K_f^(f)(c)⊆ K_f^(f)(C) for which all embeddings of F follow the same permutation.
This yields a copy F(c) in H.
No we are ready to prove Lemma <ref>.
Given k≥ 3 and d∈ (0,1) (since for d=1 the statement is clear), take t=⌈ d^-2(k-1)⌉+1 and ℓ=(2t)!.
We first prove the following claim.
γ(Z_2, Z_4,…, Z_2t) ≤ d .
Let ≪ 1/k,1/t,1-d and pick n∈N with n^-1≪.
Let H=(V,E) be a k-graph on n vertices with δ(H)≥ (d + )n.
We shall prove that Z_2r⊆ H for some r∈{1,…, t}.
To this end, we find a sequence of (k-1)-sets of vertices e_1,…, e_2r∈ V^(k-1) with e_i▹ e_i+1 for every i∈ [2r] (where the sum is modulo 2r).
First, we show that there is a sequence of pairwise disjoint (k-1)-sets of vertices e_1, e_3, …, e_2t-1∈ V^(k-1) such that
|N(e_{2i-1})^{(k-1)} ∩ N(e_{2i+1})| > \frac{1}{t-1}\binom{n}{k-1} + t(k-1)n^{k-2} ,
for every i∈ [t-1].
Pick e_1 arbitrarily.
We choose e_3,…, e_2t-1 iteratively as follows.
Suppose that for j∈[t-1], we have already found a sequence e_1,…, e_2j-1 satisfying (<ref>) for every i≤ j.
Let U_j=⋃_{i∈[j]} e_{2i-1} and note that |U_j| ≤ (k-1)t ≤ n/2.
The following identity holds by a double counting argument, and the inequality follows from the minimum codegree condition
∑_{e∈(V∖ U)^{(k-1)}} |N(e_{2j-1})^{(k-1)} ∩ N(e)|
= ∑_{e∈ N(e_{2j-1})^{(k-1)}} \binom{|N(e)∖ U|}{k-1} ≥ \binom{(d+ε/2)n}{k-1}^2 .
Therefore, by averaging there is an e_2j+1∈ (V∖ U_j)^(k-1) such that
|N(e_{2j-1})^{(k-1)} ∩ N(e_{2j+1})|
≥ \binom{(d+ε/2)n}{k-1}^2 / \binom{n}{k-1}
≥ (d+ε/4)^{2(k-1)} \binom{n}{k-1}
≥ d^{2(k-1)} \binom{n}{k-1} + t(k-1)n^{k-2}
≥ \frac{1}{t-1} \binom{n}{k-1} + t(k-1)n^{k-2} .
Hence, after t steps we found e_1,e_3,…,e_2t-1∈ V^(k-1) satisfying (<ref>) for every i∈ [t-1].
Note that the number of (k-1)-sets containing at least one vertex in ⋃_i∈[t]e_2i-1 is at most t(k-1)n^k-2.
Thus, because of (<ref>), the pigeonhole principle implies that there are indices i,j∈[t-1] with i<j and e_2i∈⋂_s∈{i,j}(N(e_2s-1)^(k-1)∩ N(e_2s+1)) such that e_2i is disjoint from each of e_1,e_3,…,e_2t-1.
In particular, we have
e_2i▹ e_2i+1 e_2j-1▹ e_2i .
Next we choose the other (k-1)-sets with even indices in the sequence forming Z_2r.
We shall choose j-i-1 pairwise disjoint (k-1)-sets e_2i+2, …, e_2j-2∈ V^(k-1) such that e_2m∈ N(e_2m-1)∩ N(e_2m+1) for every i<m<j (note that if j=i+1, we are done).
In other words, for i<m<j, we need
e_2m-1▹ e_2m▹ e_2m+1 .
Moreover, the e_2m have to be disjoint from the already chosen sets in the sequence.
Each set e ∈ V(H)^(k-1) can intersect at most (k-1)n^k-2 other elements of V(H)^(k-1).
Thus, we can greedily pick disjoint the even sets e_2m∈ N(e_2m-1)^(k-1)∩ N(e_2m+1) one by one for each i<m<j.
Indeed, for every m≤ j-i-1, the number of (k-1)-sets in N(e_2m-1)^(k-1)∩ N(e_2m+1) which do not intersect any previously chosen (k-1)-set in the sequence is at least
|N(e_{2m-1})^{(k-1)} ∩ N(e_{2m+1})| - 2t(k-1)n^{k-2} ≥ \frac{1}{t-1}\binom{n}{k-1} - t(k-1)n^{k-2} > 0 ,
where the first inequality follows from (<ref>) and the second from (<ref>).
This means that we can always pick an e_2m∈ N(e_2m-1)^(k-1)∩ N(e_2m+1) that is disjoint from all previously chosen sets.
Putting (<ref>) and (<ref>) together yields that the (k-1)-sets e_2i,e_2i+1,…, e_2j-1, form a zycle of length 2(j-i)≤ 2t.
This concludes the proof of the claim.
Let 0 < ε ≪ 1/ℓ and m ≥ ℓ/2.
Let n∈N with n^{-1} ≪ ε and let H be a k-graph with δ(H) ≥ (d+ε)n.
We shall prove that Z_ℓ⊆ H.
Notice that Proposition <ref> and Claim <ref> imply that H contains a copy of Z_2r(m) with r∈{1,…, t}.
Applying Fact <ref> with c = ℓ/(2r) ≤ m, we obtain a copy of Z_ℓ in H as desired.
§.§ Lower bound
The following construction will provide an example of a Z_ℓ-free hypergraph with large minimum codegree.
Let n,p,k∈ N be such that p is a prime, k ≥ 2 and p| n.
We define the n-vertex k-graph F_p^(k)(n) as follows.
The vertex set consists of p disjoint sets of size n/p each, i.e., V(F_p^(k)(n)) = V_0 ∪̇ … ∪̇ V_{p-1} with |V_i| = n/p for all i∈{0,1,…,p-1}.
Given a vertex v∈ V( F_p^(k)(n)) we write 𝔣(v)=i if and only if v∈ V_i for i∈{0,1,…, p-1}.
We define the edge set of F_p^(k)(n) by
v_1⋯ v_k ∈ E(F_p^(k)(n)) ⇔ 𝔣(v_1)+…+𝔣(v_k) ≡ 0 (mod p) and 𝔣(v_i) ≠ 0 for some i∈[k], or
𝔣(v_{σ(1)}) = … = 𝔣(v_{σ(k-1)}) = 0 and 𝔣(v_{σ(k)}) = 1 for some σ∈ S_k .
When k is obvious from the context, we omit it from the notation and we always consider the indices of the clusters modulo p.
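For small parameters, the construction can be checked by brute force; the sketch below (ours, purely illustrative) builds F_p^(k)(n) with clusters V_i = {v : v ≡ i (mod p)} and confirms the minimum codegree n/p established in the proof below.

from itertools import combinations

def f_p_edges(n, p, k):
    # Edge set of F_p^(k)(n); the cluster of vertex v is f(v) = v mod p.
    f = lambda v: v % p
    edges = []
    for e in combinations(range(n), k):
        vals = sorted(f(v) for v in e)
        if sum(vals) % p == 0 and any(vals):
            edges.append(e)                          # first clause
        elif vals[:-1] == [0] * (k - 1) and vals[-1] == 1:
            edges.append(e)                          # second clause
    return edges

n, p, k = 15, 5, 3
edges = f_p_edges(n, p, k)
codeg = {s: 0 for s in combinations(range(n), k - 1)}
for e in edges:
    for s in combinations(e, k - 1):
        codeg[s] += 1
print(min(codeg.values()), n // p)  # minimum codegree equals n/p = 3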
Let k≥ 3. For every ℓ≥ 2, we have 1/(2(k-1)^ℓ) ≤ γ(Z_ℓ^(k)).
Given k≥ 3 and ℓ≥ 2, let n,p∈N be such that p|n, p is a prime larger than k, and n^{-1} ≪ p^{-1} < 1/((k-1)^ℓ+1).
Observe that by the Bertrand–Chebyshev theorem we might take p≤ 2(k-1)^ℓ.
We shall prove that
δ(F_p(n)) = n/p ≥ n/(2(k-1)^ℓ) and
Z_ℓ⊈ F_p(n) .
To check the codegree condition in (<ref>), take a (k-1)-set of vertices v_1,…,v_k-1.
If there is an i∈ [k-1] such that 𝔣 (v_i) ≠ 0, then let j be the only solution in {0,1,…, p-1} to the equation
𝔣(v_1)+…+𝔣(v_{k-1}) + x ≡ 0 (mod p) .
Then, N(v_1⋯ v_{k-1}) ⊇ V_j and therefore d(v_1⋯ v_{k-1}) ≥ n/p.
If 𝔣(v_i)=0 for all i∈[k-1], then N(v_1⋯ v_{k-1}) = V_1 and we obtain d(v_1⋯ v_{k-1}) = n/p.
To check the second part of (<ref>), assume that there are r≥ 2 and sets e_1,…, e_r ∈ V( F_p(n))^(k-1) forming a copy of Z_r, i.e., we have e_i▹ e_i+1 for all i.
Here, and for the rest of the proof, we take the sum of indices of the e_i's to be modulo r.
We shall prove that
r>ℓ .
The following claim states that there is an i_0 for which e_i_0 is completely contained in one of the clusters of F_p(n).
Moreover, that cluster is not V_0.
There is an i_0∈ [r] and a j∈[p-1] such that e_i_0⊆ V_j.
Fix any i∈ [r], let e_i=v_1⋯ v_k-1, and pick v_k ∈ e_i+1 arbitrarily.
We consider four cases.
* |e_i∩ V_0|=k-1.
By Definition <ref> and since v_1⋯ v_k∈ E( F_p(n)), we have v_k∈ V_1.
Since we picked v_k∈ e_i+1 arbitrarily, we have that e_i+1⊆ V_1 and finish the proof of this case by taking i_0=i+1.
* |e_i∩ V_0| < k-2.
Let j ≡ -(𝔣(v_1)+⋯+𝔣(v_{k-1})) (mod p).
By Definition <ref> and since v_1⋯ v_k∈ E(F_p(n)), we have
0 ≡ 𝔣(v_1)+⋯+𝔣(v_k)
≡ 𝔣(v_k) - j (mod p) .
This means that v_k∈ V_j and since we picked v_k∈ e_i+1 arbitrarily, similarly as above we get e_i+1⊆ V_j.
If j≢0, we finish by taking i_0=i+1.
If j≡0, the claim follows from Case [case:k-1](1) for e_i+1 instead of e_i.
* |e_i∩ V_0| = k-2 and |e_i∩ V_1| =0.
This case follows from similar arguments as the previous one.
* |e_i∩ V_0| = k-2 and |e_i∩ V_1| =1.
By Definition <ref>, we either have v_k∈ V_0 or v_k∈ V_p-1.
Thus, since we picked v_k∈ e_i+1 arbitrarily, we certainly have e_i+1⊆ V_0∪ V_p-1.
Hence, |e_i+1∩ V_1|=0 and so the proof follows from Cases [case:k-1](1) - [case:=k-2andno1](3) for e_i+1 instead of e_i.
We now show that for every i∈[r],
if e_i ⊆ V_j with j ≢ 0 (mod p), then e_{i+1} ⊆ V_{(1-k)j} .
Indeed, let e_i=v_1⋯ v_k-1⊆ V_j and pick v_k∈ e_i+1 arbitrarily.
Since 𝔣(v_i) ≡ j (mod p) for i∈[k-1], we have
𝔣(v_1)+…+𝔣(v_{k-1}) ≡ (k-1)j (mod p) .
Therefore, since e_i▹ e_{i+1} implies v_1⋯ v_k∈ E(F_p(n)), and because 𝔣(v_i) ≡ j ≢ 0 (mod p) for i∈[k-1], we have
0 ≡ 𝔣(v_1)+…+𝔣(v_k) ≡ (k-1)j + 𝔣(v_k) (mod p) .
Hence 𝔣(v_k) ≡ (1-k)j (mod p), meaning that v_k∈ V_{(1-k)j}.
Since we picked v_k∈ e_i+1 arbitrarily, we have e_i+1⊆ V_(1-k)j proving (<ref>).
Finally, we are ready to show (<ref>).
Let i_0 and j be given by Claim <ref>.
As p is a prime, 𝔽_p is a field.
Together with j ≢ 0, this entails that (1-k)^s j ≢ 0 (mod p) for all s∈[r].
Thus, r applications of (<ref>) imply that
e_{i_0+r} ⊆ V_m with m ≡ (1-k)^r j (mod p) .
Since e_{i_0+r} = e_{i_0} ⊆ V_j, we have (1-k)^r j ≡ j (mod p), and as j ≢ 0, we have (1-k)^r ≡ 1 (mod p).
Recalling that we chose p such that p>(k-1)^ℓ+1, (<ref>) follows.
§ PROOF OF THEOREM <REF>
§.§ Method
As mentioned in the introduction, to prove Theorem <ref> we apply the method developed by the authors together with Sales in <cit.>.
Given a k-graph H=(V,E), a picture is a tuple (v,m,ℒ,ℬ), where
* v∈ V,
* m∈N,
* ℒ is a collection of m-tuples ℒ⊆ (V∖{v})^m, and
* ℬ⊆ [m]^{(k-1)} is a fixed family of (k-1)-subsets of [m],
such that for every (x_1,…,x_m)∈ℒ and every i_1⋯ i_k-1∈ℬ, the k-sets vx_i_1⋯ x_i_k-1 are edges of H.
That is to say, x_i_1⋯ x_i_k-1 is an edge in the link of H at v.
We use pictures to find a copy of a k-graph F on H.
Roughly speaking, we say that a picture is nice if it `encodes' a set of edges that would yield a copy of F, but whose existence we cannot (yet) guarantee when considering the link of H at v.
Given k-graphs F and H=(V,E), and vertex set S⊆ V, we say that a picture (v,m,ℒ,ℬ) is S-nice for F, if for every w∈ S and every (x_1,…,x_m)∈ℒ, the hypergraph with vertex set V and edge set
E∪{wx_i_1⋯ x_i_k-1:i_1⋯ i_k-1∈ℬ}
contains a copy of F.
If F is clear from the context, we speak simply of S-nice pictures.
The following lemma describes how the existence of S-nice pictures implies that H contains a copy of F.
Let F be a k-graph.
Given ξ,ζ> 0 and c,m∈ N, let n∈N such that n^-1≪ξ, ζ, | V(F)|^-1,c^-1,m^-1, and let H be an n-vertex k-graph.
Suppose that there are m∈N and ℬ⊆[m]^(k-1) such that for every S⊆ V(H) with | S|≥ c, there is an S'-nice picture (v,m,ℒ,ℬ), with v∈ S, S'⊆ S, |S'|≥ξ |S|, and |ℒ|≥ζ n^m.
Then H contains a copy of F.
Let t=⌈ζ^-1⌉+1.
By iteratively applying the conditions of the lemma, we find a nested sequence of subsets V(H)=S_0⊇ S_1⊇…⊇ S_t such that for i∈[t], there are S_i-nice pictures (v_i,m, ℒ_i,ℬ) satisfying v_i∈ S_i-1, |S_i|≥ξ^i n>c, and |ℒ_i| ≥ζ n^m.
Since t≥ζ^-1+1, by the pigeonhole principle, there are two indices 0<i<j≤ t such that ℒ_i∩ℒ_j≠∅.
Let (x_1,…,x_m)∈ℒ_i∩ℒ_j.
Then because (v_i,m,ℒ_i,ℬ) is an S_i-nice picture and v_j∈ S_j-1⊆ S_i, Definition <ref> guarantees that
E(H)∪{v_j x_{i_1}⋯ x_{i_{k-1}} : i_1⋯ i_{k-1}∈ℬ}
contains a copy of F.
Since (v_j,m,ℒ_j,ℬ) is a picture, Definition <ref> yields v_jx_i_1⋯ x_i_k-1∈ E(H) for all i_1⋯ i_k-1∈ℬ.
Thus, we conclude that this copy of F is in fact in H.
Now we apply Lemma <ref> to prove Theorem <ref>.
§.§ Proof of Theorem <ref>
Let ℓ≥ 3 be an integer and let ε>0.
Let ξ,ζ > 0, and let n, c∈N such that n^{-1} ≪ c^{-1} ≪ ζ, ξ ≪ ε.
Let H be a 3-graph with δ(H) ≥ εn.
We aim to show that Z_ℓ^-⊆ H.
Set m=2 and ℬ={{1,2}}, then due to Lemma <ref>, we only need to prove that for every S⊆ V(H) of size at least c, there is an S'-nice picture (v,2, ℒ,{{1,2}}) with v∈ S, S'⊆ S, |S'|≥ξ |S|, and |ℒ|≥ζ n^2.
Given S⊆ V(H) with | S|≥ c, take any vertex v∈ S and let V=V(H)∖{v}.
Observe that using the minimum codegree condition and the above hierarchy, we have
∑_{bb'∈ V^{(2)}} |N_{L_v}(b) ∩ N_{L_v}(b') ∩ S| = ∑_{u∈ S∖{v}} \binom{d_{L_v}(u)}{2} ≥ \binom{εn}{2}(|S|-1) ≥ ξ\binom{n}{2}|S| ,
where L_v denotes the link of H at v.
Thus, by averaging there is a pair b_1,b_2∈ V such that | N_L_v(b_1)∩ N_L_v(b_2)∩ S|≥ξ| S|.
We pick S'⊆ N_L_v(b_1)∩ N_L_v(b_2)∩ S with | S'|=⌈ξ| S|⌉.
Since δ(H) - 2ℓ - |S'| ≥ εn/2 ≥ 2, we can greedily pick pairwise disjoint pairs of vertices e_1,…, e_{ℓ-2}∈ (V(H)∖ S')^{(2)} such that
b_1b_2=e_1▹ e_2 ▹⋯▹ e_ℓ-2 .
Now let R=⋃_i∈ [ℓ-2]e_i and take
ℒ = {(x_1,x_2) ∈ V^2 : x_1∈ N_H(e_{ℓ-2})∖ R and x_2∈ N_H(x_1v)∖ R} .
Note that |ℒ| ≥ (δ(H)/2)^2 ≥ ε^2 n^2/4 ≥ ζ n^2.
Further, since x_1x_2∈ E(L_v) for every (x_1,x_2)∈ℒ, (v,m,ℒ,ℬ) is a picture in H.
Moreover, observe that it is S'-nice.
Indeed, we only need to check that for any u∈ S' and (x_1,x_2)∈ℒ, the hypergraph with edges E(H)∪{ux_1x_2} contains a copy of Z_ℓ^-.
For this, note that in E(H)∪{ux_1x_2} we have x_1x_2▹ uv.
Further, u∈ S' and the choice of b_1 and b_2 imply uv▹ b_1b_2.
Together with (<ref>), this gives x_1x_2▹ uv▹ e_1▹⋯▹ e_ℓ-2, and using the fact that x_1∈ N(e_ℓ-2), we obtain a copy of Z_ℓ^- (where the missing edge is x_2e_ℓ-2).
§ CONCLUDING REMARKS
Following a very similar proof as that for Theorem <ref>, we can show a general upper bound for γ(Z_ℓ^(3)) for every ℓ≥ 3.
For ℓ≥ 3,
γ(Z_ℓ^(3))≤ 1/2.
Given ℓ≥ 3 and ε>0, let ξ,ζ > 0 and n, c∈N such that n^{-1} ≪ c^{-1} ≪ ζ, ξ ≪ ε.
Let H be a 3-graph with δ(H) ≥ (1/2+ε)n.
We aim to show that Z_ℓ⊆ H.
As in the proof of Theorem <ref>, we pick m=2 and ℬ={{1,2}} and due to Lemma <ref>, we only need to prove that for every S⊆ V(H) of size at least c, there is an S'-nice picture (v,2, ℒ,{{1,2}}) with v∈ S, S'⊆ S, |S'|≥ξ |S|, and |ℒ|≥ζ n^2.
For the first part of the proof, we proceed as in the proof of Theorem <ref>, where we only use δ(H) ≥ εn.
In particular, we obtain two vertices b_1,b_2∈ V(H)∖{v} =: V and a set S'⊆ N_{L_v}(b_1)∩ N_{L_v}(b_2)∩ S with |S'| = ⌈ξ|S|⌉.
Moreover, we again greedily pick pairwise disjoint pairs of vertices e_1,…, e_ℓ-2∈ (V∖ S')^(2) satisfying (<ref>).
The set ℒ is chosen differently.
Set R=⋃_i∈[ℓ-2]e_i and
ℒ = {(x_1,x_2)∈ V^2 : x_1,x_2∈ N(e_{ℓ-2})∖ R and x_1x_2∈ E(L_v)} .
Observe that, given x_1∈ N(e_{ℓ-2})∖ R, any vertex x_2∈ (N(x_1v)∩ N(e_{ℓ-2}))∖ R gives rise to (x_1,x_2)∈ℒ.
Furthermore, since δ(H) ≥ (1/2+ε)n,
|(N(x_1v)∩ N(e_{ℓ-2}))∖ R| ≥ εn - 2ℓ ≥ (ε/2)n ,
and similarly we have |N(e_{ℓ-2})∖ R| ≥ n/2.
Therefore, we obtain |ℒ| ≥ εn^2/4, and since x_1x_2∈ E(L_v) for all (x_1,x_2)∈ℒ, (v,m,ℒ,ℬ) is a picture in H.
To see that the tuple (v,m,ℒ,ℬ) is indeed an S'-nice picture, we shall prove that for every u∈ S' and (x_1,x_2)∈ℒ, the hypergraph with (vertex set V(H) and) edges E(H)∪{ux_1x_2} contains a copy of Z_ℓ.
Indeed, the definition of ℒ implies x_1x_2v∈ E(H) and therefore x_1x_2▹ uv in E(H)∪{ux_1x_2}.
Also due to the definition of ℒ, we have x_1,x_2∈ N(e_{ℓ-2}) and thus e_{ℓ-2}▹ x_1x_2.
Moreover, u∈ S' and the choice of b_1 and b_2 entails uv▹ b_1b_2=e_1.
Combining this with (<ref>), we obtain uv▹ e_1▹⋯▹ e_{ℓ-2}▹ x_1x_2▹ uv, that is, a copy of Z_ℓ in E(H)∪{ux_1x_2}.
It would be interesting to know whether Proposition <ref> is sharp for some ℓ≥ 3.
The following construction gives a lower bound of 1/3 for the codegree Turán density of any zycle of length not divisible by 3.
Let n∈N be divisible by 3 and let H=(V,E), where V = V_1 ∪̇ V_2 ∪̇ V_3 with |V_i| = n/3 and E = {uvw∈ V^{(3)} : u,v∈ V_i and w∈ V_{i+1}}, where the sum of indices is taken modulo 3.
It is not hard to check that δ(H)≥ n/3 and that Z_ℓ⊈H for every ℓ not divisible by 3.
Observe that Z_2^(3)=K_4^(3).
For this 3-graph, a well-known conjecture by Czygrinow and Nagle <cit.> states that γ(Z_2^(3)) = γ(K_4^(3)) = 1/2.
Regarding the next case, Z_3^(3), note that its codegree Turán density is not bounded by the previous construction.
The following 3-graph entails γ(Z_3^(3))≥ 1/4, and in fact it provides the same lower bound for every Z_ℓ^(3) with ℓ not divisible by 4.
Let n∈N be divisible by 4 and let H=(V,E), where V = V_1 ∪̇ V_2 ∪̇ V_3 ∪̇ V_4 with |V_i| = n/4.
Define the edges of H as
E = {xyz : x,y∈ V_i and z∈ V_{i+1}} ∪ {xyz : x∈ V_1, y∈ V_2, z∈ V_3∪ V_4} ,
where the sum of indices is taken modulo 4.
Clearly, δ(H)≥ n/4.
To see that Z_ℓ⊈H for ℓ not divisible by 4, it can be checked that all zycles are of the form e_1▹…▹ e_r such that e_i⊆ V_j_i for some j_i∈[4].
Together with Proposition <ref>, this yields
1/4≤γ(Z_3^(3))≤1/2 .
Determine the value of γ(Z_3^(3)).
On a different note, recall that Theorem <ref> states that Z_ℓ^(3) is (inclusion) minimal with respect to the property of having strictly positive codegree Turán density.
It would be interesting to know if this also holds for larger uniformities.
For k>3 and sufficiently large ℓ, what are the minimal subgraphs F⊆ Z_ℓ^(k) with γ(F)>0?
Let e_1,…,e_ℓ be pairwise disjoint (k-1)-sets of vertices.
Consider the k-graph whose edges are given by e_1▹…▹ e_ℓ plus one additional edge of the form e_ℓ∪{v} with v∈ e_1.
Following the same arguments as in the proof of Theorem <ref>, we obtain that this k-graph has codegree Turán density zero for every ℓ≥ 3.
In order to prove the lower bound of Lemma <ref> in Subsection <ref>, we introduced the k-graphs F_p^(k)(n), which have large minimum codegree and are Z_ℓ^(k)-free for small ℓ.
It would be interesting to study the codegree Turán density of F_p^(k)(n) itself.
Observe however, that for n≥ pk we have K_k+1^(k)-⊆ F_p^(k)(n), which suggests that this problem might be very difficult for general n.
It is perhaps more natural to study the codegree Turán density of the following k-graph.
For p>k, let F_p^(k) be the k-graph on p(k-1) vertices with V(F_p^(k)) = V_1 ∪̇ … ∪̇ V_p, where |V_i| = k-1 for every i∈[p], and whose edges are given by
v_1⋯ v_k ∈ E(F_p^(k))
⟺ 𝔣(v_1)+…+𝔣(v_k) ≡ 0 (mod p) ,
where the function 𝔣 V( F_p^(k))→ [p] is analogous as in Definition <ref>.
For k≥ 3, determine the codegree Turán density of F_p^(k).
Consider the indices of the clusters V_1, …, V_p of F_p^(k) to be modulo p.
Observe that for j∈[p], we have V_j∪{v}∈ E(F_p^(k)) for every v∈ V_{(1-k)j}.
It follows that
V_1 ▹ V_1-k▹…▹ V_(1-k)^p-2▹ V_(1-k)^p-1 = V_1 ,
where the last identity is given by Fermat's little theorem.
Hence, there is an ℓ≤ p-1 such that Z_ℓ⊆ F_p^(k) and therefore, Lemma <ref> yields γ(F_p^(k)) ≥ 1/(2(k-1)^p) > 0.
For k≥ 3, is it true that lim_p→∞γ( F_p^(k))=0?
|
http://arxiv.org/abs/2307.01059v1
|
20230703143711
|
Optimal form of light cones for bosonic transport in long-range systems
|
[
"Tan Van Vu",
"Tomotaka Kuwahara",
"Keiji Saito"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.quant-gas",
"cond-mat.stat-mech",
"hep-th",
"math-ph",
"math.MP"
] |
tan.vu@riken.jp
Analytical quantum complexity RIKEN Hakubi Research Team, RIKEN Center for Quantum Computing (RQC), 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
tomotaka.kuwahara@riken.jp
Analytical quantum complexity RIKEN Hakubi Research Team, RIKEN Center for Quantum Computing (RQC), 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
PRESTO, Japan Science and Technology (JST), Kawaguchi, Saitama 332-0012, Japan
keiji.saitoh@scphys.kyoto-u.ac.jp
Department of Physics, Kyoto University, Kyoto 606-8502, Japan
Understanding the ultimate rate at which information propagates is a pivotal issue in nonequilibrium physics.
Nevertheless, the task of elucidating the propagation speed inherent in quantum bosonic systems presents challenges due to the unbounded nature of their interactions.
In this Letter, we tackle the problem of particle transport in long-range bosonic systems through the lens of both quantum speed limits and the Lieb-Robinson bound.
Employing a unified approach based on optimal transport theory, we rigorously prove that the minimum time required for macroscopic particle transport is always bounded by the distance between the source and target regions, while retaining its significance even in the thermodynamic limit.
Furthermore, we derive an upper bound for the probability of observing a specific number of bosons inside the target region, thereby providing additional insights into the dynamics of particle transport.
Our results hold true for arbitrary initial states under both long-range hopping and long-range interactions, thus resolving an open problem of particle transport in generic bosonic systems.
Introduction.– The investigation of the velocity at which information propagates is a central topic in quantum mechanics.
In relativistic theory, a fundamental constraint, known as information causality, rigorously prohibits information transfer outside the light cone.
Remarkably, Lieb and Robinson <cit.> discovered similar restrictions in nonrelativistic quantum theory.
Specifically, they demonstrated the existence of an effective light cone, outside of which information propagation undergoes an exponential decay with distance.
The Lieb-Robinson bound, as it is commonly referred to, has proven to be a powerful tool for the analysis of quantum many-body systems, with diverse applications across various fields <cit.>.
For short-range interacting spin systems, the Lieb-Robinson bound is characterized by a finite velocity, implying that the information propagation is effectively limited within the linear light cone that grows with a finite speed.
In scenarios involving long-range interactions, however, the existence of linear light cones becomes significantly intricate, as such interactions have the capability to instantaneously transmit information to distant places <cit.>.
Here, long-range interaction means that the interaction strength between distinct sites obeys a power-law decay as 1/d^α with distance d.
Intuitively, the shape of the light cone is dependent on the exponent α and may no longer be linear.
Given the ubiquity of long-range systems in nature <cit.> and their fundamental interest, it is of great importance to elucidate the shape of the light cone with respect to the power decay α.
Thus far, a comprehensive characterization of the shape of the light cone has been established for long-range interacting quantum spin and fermionic systems <cit.>.
In Ref. <cit.>, the condition for the linear light cone was proven to be α>2D+1, where D denotes the spatial dimension.
Furthermore, for the entire regime of α, the optimal form of the light cone has been identified as τ≳ d^min(1,α-2D) <cit.>.
As another important class, quantum bosonic systems (e.g., the paradigmatic Bose-Hubbard model <cit.>) are of central importance in the study of cold atoms in experimental setups.
Nevertheless, their theoretical investigation has remained challenging due to the unbounded nature of interactions in such systems.
The shape of the effective light cone in the Bose-Hubbard model has been probed through both experimental observations and numerical simulations using ultracold gases confined in optical lattices <cit.>.
While the speeds of bosonic transport and information propagation have been studied for bosonic systems with short-range hoppings <cit.>, the exploration of long-range cases still remains in an early stage of development.
With regard to macroscopic bosonic transport, Faupin and coworkers <cit.> have recently demonstrated the finite velocity of macroscopic particle transport by showing the existence of the linear light cone for α≥ D+3.
However, we are still far from the complete classification of the effective light cone for particle transport.
In particular, it is a critical problem to identify an optimal form of the light cone in bosonic systems, as has been studied for quantum spin and fermion systems.
In this Letter, we resolve the above problem for macroscopic bosonic transport in generic systems with long-range hopping and long-range interactions, spanning the entire regime of α>D.
By developing a unified approach based on optimal transport theory, we demonstrate the finite speed of macroscopic bosonic transport through the lens of quantum speed limits <cit.>.
Specifically, we rigorously prove that the minimum time required for the transport of a macroscopic number of bosons is always bounded by the distance d as τ≳ d^min(1,α-D) [cf. Eq. (<ref>); see Fig. <ref> for illustration].
The exponent in this bound is optimal, as it can be achieved through a constructive state-transfer protocol.
Additionally, we derive a limit for the probability of achieving a specific number of boson transport into a target region [cf. Eq. (<ref>)].
We emphasize that all our results unconditionally hold for arbitrary initial states including pure and mixed states.
Notably, our proofs are significantly simple, necessitating only a few fundamental properties of the Wasserstein distance <cit.>.
Our findings establish the complete classification for bosonic transport with long-range interactions.
Setup.– We consider a generic model of bosons on an arbitrary D-dimensional lattice, wherein bosons can hop between arbitrary sites and interact with each other.
The system Hamiltonian is time dependent (i.e., an external control protocol can be applied to the system) and can be expressed in the following form:
H_t ≔ -∑_{i≠ j∈Λ} J_ij(t) b̂_i^†b̂_j + ∑_{Z⊆Λ} h_Z(t) .
Here, Λ denotes the set of all the sites in the lattice, b̂_i^† and b̂_i are the bosonic creation and annihilation operators for site i, respectively, and h_Z(t) is an arbitrary function of {n̂_i}_{i∈Z}, where n̂_i ≔ b̂_i^†b̂_i is the number operator.
Note that h_Z(t) does not need to be local (i.e., Z can be arbitrarily large).
In other words, both long-range hopping and long-range interactions are allowed in our setup.
The hopping terms J_ij(t) are upper bounded by a power law in the Euclidean distance, i.e., |J_ij(t)| ≤ J/‖i-j‖^α, where ‖·‖ denotes the Euclidean norm and the power decay α satisfies α>D.
Examples include the standard Bose-Hubbard model, specified by J_ij(t)=0 for any non-neighboring sites i and j and ∑_Z⊆Λh_Z(t)=(U/2)∑_in̂_i(n̂_i-1)-μ∑_in̂_i, where U and μ are real constants.
Let ϱ_t be the system's density matrix at time t.
Then, its time evolution is described by the von Neumann equation (ħ=1 for simplicity):
ϱ̇_t=-i[H_t,ϱ_t].
It can be easily shown that the total number of particles in this system is conserved, denoted hereafter as N.
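This conservation is easy to confirm numerically on a toy example; the sketch below (ours, not from the Letter) builds a two-site Bose-Hubbard Hamiltonian in a truncated Fock space and checks that it commutes with the total number operator.

import numpy as np

def boson_ops(n_max):
    # Truncated annihilation and number operators on a single site.
    b = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)
    return b, b.conj().T @ b

n_max = 4
b, num = boson_ops(n_max)
I = np.eye(n_max + 1)
I2 = np.eye((n_max + 1) ** 2)

b1, b2 = np.kron(b, I), np.kron(I, b)
n1, n2 = np.kron(num, I), np.kron(I, num)

J, U = 1.0, 0.5  # hopping and on-site interaction strengths
H = -J * (b1.conj().T @ b2 + b2.conj().T @ b1) \
    + 0.5 * U * (n1 @ (n1 - I2) + n2 @ (n2 - I2))

N_tot = n1 + n2
print(np.max(np.abs(H @ N_tot - N_tot @ H)))  # ~0: [H, N] = 0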
We introduce some notations that will be used in the Letter.
For an arbitrary set X of lattice sites, X_c ≔ Λ∖X is the set of sites that are not present in X, and |X| denotes its cardinality.
The distance between any two disjoint sets X and Y is defined as the minimum distance between sites belonging to each of them:
d_XY ≔ min_{i∈X, j∈Y} ‖i-j‖ .
The number operator for the bosons occupying region X is given by n̂_X ≔ ∑_{i∈X} n̂_i, and the corresponding concentration operator is n_X ≔ n̂_X/N.
Let Π_N⃗ ≔ |N⃗⟩⟨N⃗| be the projection onto the Mott state |N⃗⟩ (i.e., n̂_i|N⃗⟩ = n_i|N⃗⟩ for N⃗ = [n_i]_{i∈Λ}).
Then, the projection P_{n̂_X≤N_0} is defined as
P_{n̂_X≤N_0} ≔ ∑_{N⃗ : ⟨N⃗|n̂_X|N⃗⟩ ≤ N_0} Π_N⃗ .
The expectation value of an observable A with respect to the state ϱ is denoted as ⟨A⟩_ϱ ≔ tr{Aϱ}.
Main results.– Given the aforementioned setup, we are now ready to explain the results, whose proofs are postponed to the end of the Letter.
Our first main result is the following statement, which describes a fundamental limit on the operational time required for a macroscopic transport of bosons in long-range systems.
Consider a situation where a fraction μ∈(0,1] of all bosons is transported from region X to a distant region Y.
Then, the operational time τ required for this macroscopic bosonic transport is lower bounded by the distance between the two regions as
τ≥κ_1^ε d_XY^min(1,α-D-ε).
Here, 0<ε<α-D is an arbitrary number and κ_1^ε>0 is a system-independent constant.
Some remarks on Theorem <ref> are in order.
(i) First, macroscopic transport here means that an average number μN of bosons is transported from X to Y, where μ is a O(1) constant.
The result holds for arbitrary initial states, including both pure and mixed states, and for the entire range of the power decay α>D.
Notably, this type of time constraint cannot be deduced by the standard quantum speed limits <cit.>.
(ii) Second, bound (<ref>) can be physically interpreted as follows.
For the case α>D+1, the bound reads τ≥ O(d_XY), implying that bosonic transport always takes time at least proportional to the distance.
Surprisingly, this implication is the same as in the case of short-range hopping <cit.>, indicating that long-range hopping and long-range interactions do not enhance the speed of macroscopic bosonic transport <cit.>.
On the other hand, in the case D<α≤ D+1, the minimum operational time is bounded in terms of the power decay α as τ≥ O(d_XY^α-D), indicating the effect of long-range hopping on speeding up bosonic transport.
(iii) Last, we discuss the optimality of the bound (<ref>) and show that it is optimal.
The optimality here is interpreted in the sense that the power coefficient associated with the distance term in the bound is optimal.
In other words, we need only show that there exist transport protocols such that bosonic transport can be accomplished within time τ=O(d_XY^min(1,α-D)).
To this end, it is sufficient to consider the case of a single particle (i.e., N=1).
In this case, it was shown that one could construct a rapid state-transfer protocol such that the particle can be transferred between two sites of distance d within time τ=O(d) for α>D+1 and τ=O(d^α-D) for D<α≤ D+1 <cit.> (see Theorem 11 therein).
Therefore, it can be concluded that the bound (<ref>) is optimal.
Theorem <ref> concerns the minimum time required to transport an average number of bosons between two regions.
In the sequel, we focus on the probability of finding a number of bosons occupied inside the target region within a finite time.
Our second main result, which is stated below, establishes an upper bound on this probability in terms of the operational time and the distance.
Assume that the initial boson number outside of region X does not exceed N_0, i.e., ⟨P_{n̂_{X_c}≤ N_0}⟩_{ϱ_0} = 1.
Then, the probability of finding bosons inside a disjoint region Y is upper bounded by the operational time τ and the distance between the two regions as
⟨P_{n̂_Y ≥ N_0+ΔN_0}⟩_{ϱ_τ} ≤ κ_2^ε (N/ΔN_0) τ/d_XY^{min(1,α-D-ε)} .
Here, 0<ε<α-D is an arbitrary number and κ_2^ε>0 is a system-independent constant.
This result holds under a generic setting for both initial states and system dynamics, same as in Theorem <ref>.
It can be interpreted as follows.
Initially, there are at most N_0 bosons outside of X; therefore, in order to observe more than N_0 bosons inside Y, a certain number of bosons must be transferred from X to Y.
Bound (<ref>) indicates that the probability of finding at least N_0+Δ N_0 inside Y is at most linear in time τ and inversely proportional to boson number Δ N_0 and distance d_XY.
While the inequality (<ref>) is valid for any boson number ΔN_0, it is particularly relevant and meaningful to discuss macroscopic transport [i.e., ΔN_0/N ≕ μ is an O(1) constant].
In this case, we obtain
⟨P_{n_Y ≥ ξ}⟩_{ϱ_τ} ≤ κ_2^ε μ^{-1} τ d_XY^{-min(1,α-D-ε)} ,
where ξ ≔ (N_0+ΔN_0)/N > μ.
In the case α ≥ D+3, a similar probability constraint has been derived for pure states in the absence of long-range interactions <cit.> [see Eq. (7) therein], leaving the case D<α<D+3 as an open question.
Our result thus completely resolves this open problem of bosonic transport for generic long-range systems.
Proof of Theorem <ref>.– The key ingredient in our derivation is the Wasserstein distance, which was originally developed in optimal transport theory and plays a crucial role in various disciplines, such as statistics and machine learning <cit.>, computer vision <cit.>, linguistics <cit.>, molecular biology <cit.>, and stochastic thermodynamics <cit.>.
First, we introduce the discrete L^1-Wasserstein distance between two |Λ|-dimensional distributions *x and *y.
Suppose that we have a transport plan that redistributes *x to *y by sending an amount of π_ij≥ 0 from x_j to y_i with a cost of c_ij≥ 0 per unit mass for all ordered pairs i,j.
Here, the coupling π = [π_ij] ∈ R_{≥0}^{|Λ|×|Λ|} is a joint probability distribution of *x and *y such that ∑_j π_ij = y_i and ∑_j π_ji = x_i.
Each coupling π thus defines an admissible transport plan.
Evidently, there is an infinite number of such transport plans.
The Wasserstein distance is then defined as the minimum transport cost for all feasible plans, given by
W(*x,*y) ≔ min_{π∈C(*x,*y)} ∑_{i,j} c_ij π_ij .
Here, C(*x,*y) denotes the set of couplings π.
Next, the cost matrix [c_ij] needs to be specified. We consider the following cost matrix:
c_ij = ‖i-j‖^{α_ε} ,
where 0<ε<α-D is an arbitrary number and α_ε ≔ min(1, α-D-ε) is defined for convenience.
Since a^z+b^z≥ (a+b)^z for any a,b≥ 0 and 0≤ z≤ 1, we can verify that
c_ij+c_jk≥ c_ik
for any i,j,k∈Λ.
Consequently, the Wasserstein distance defined using this cost matrix satisfies the triangle inequality.
As can be observed, this distance includes spatial information of the lattice and can be of the order of the system size.
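Since W is defined by a linear program, it can also be evaluated directly for small systems; the sketch below (ours, purely illustrative) does so with scipy for a 1D chain with cost c_ij = |i-j| (i.e., α_ε = 1).

import numpy as np
from scipy.optimize import linprog

def wasserstein(x, y, cost):
    # W(x, y) = min over couplings pi of sum_ij cost[i, j] * pi[i, j],
    # with row marginals y and column marginals x.
    n = len(x)
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j pi[i, j] = y[i]
        A_eq[n + i, i::n] = 1.0            # sum_i pi[i, j] = x[j]
    b_eq = np.concatenate([y, x])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

L = 6
sites = np.arange(L)
cost = np.abs(sites[:, None] - sites[None, :]).astype(float)
x = np.zeros(L); x[0] = 1.0   # all concentration at site 0 ...
y = np.zeros(L); y[-1] = 1.0  # ... moved to site L-1
print(wasserstein(x, y, cost))  # = L - 1 = 5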
Before proceeding further, we explain some useful notations that will be used later.
For any site i, we denote by i[r] the ball of radius r≥ 0, which contains all sites j such that ‖i-j‖ < r.
Using this definition, the set Λ∖{i} can be decomposed as
Λ∖{i}=∪_ℓ=1^∞ (i[ℓ+1]∖ i[ℓ]).
In addition, we define γ as a constant such that the following inequality holds for any r≥ 0:
|i[r+1]∖ i[r]|≤γ r^D-1.
We are now ready to prove Theorem <ref>.
Consider the vector of boson concentrations at each site, x_i(t) ≔ ⟨n̂_i⟩_{ϱ_t}/N.
Evidently, ∑_i x_i(t) = 1 for any t.
Using the relation [b̂_i, n̂_i] = b̂_i, we can show that the time evolution of x_i(t) is governed by the equation
ẋ_i(t) = ∑_{j(≠i)} 2J_{ij}(t) N^{−1} Im⟨b̂_j^† b̂_i⟩_{ϱ_t} ≕ ∑_{j(≠i)} ϕ_{ij}(t).
Take the time discretization of Eq. (<ref>) with time interval δ t=τ/K.
Then, for each k∈[0,K-1] and t=kδ t, we have
x_i(t+δ t)=x_i(t)+∑_j(≠ i)ϕ_ij(t)δ t.
Equation (<ref>) indicates that we can transform x_t into x_{t+δt} by transporting an amount of |ϕ_{ij}(t)|δt between i and j, with a cost of c_{ij}|ϕ_{ij}(t)|δt.
This transport plan has the total cost
(1/2) ∑_{i≠j} c_{ij} |ϕ_{ij}(t)| δt ≕ Φ_t δt,
which must be larger than or equal to W(x_t, x_{t+δt}).
Taking the sum of Eq. (<ref>) from k = 0 to k = K−1 and applying the triangle inequality for W yields ∑_{k=0}^{K−1} Φ_t δt ≥ W(x_0, x_τ).
Taking the limit δt → 0, we immediately obtain
∫_0^τ Φ_t dt ≥ W(x_0, x_τ).
Since a fraction μ of the bosons must be transported from X to Y, we have ∑_{i∈Y} ∑_{j∈X} π_{ij} ≥ μ.
Therefore,
W(x_0, x_τ) ≥ (min_{i∈Y, j∈X} c_{ij}) ∑_{i∈Y, j∈X} π_{ij} ≥ μ d_{XY}^{α_ε}.
On the other hand, by applying the Cauchy-Schwarz inequality, ϕ_ij(t) can be upper bounded as
|ϕ_ij(t)|≤ |J_ij(t)|[x_i(t)+x_j(t)].
Consequently, the cost term Φ_t is bounded from above as follows:
Φ_t ≤ ∑_i x_i(t) ∑_{j(≠i)} |J_{ij}(t)| ‖i − j‖^{α_ε}
≤ ∑_i x_i(t) ∑_{ℓ=1}^{∞} ∑_{j∈(i[ℓ+1]∖i[ℓ])} J/‖i − j‖^{α−α_ε}
≤ Jγ ∑_i x_i(t) ∑_{ℓ=1}^{∞} 1/ℓ^{α−α_ε−D+1}
= Jγ ζ(α−α_ε−D+1).
Here, we use the facts that ‖i − j‖ ≥ ℓ for any j ∈ (i[ℓ+1]∖i[ℓ]) and |i[ℓ+1]∖i[ℓ]| ≤ γℓ^{D−1} in the third line, and ζ(·) denotes the Riemann zeta function.
Define κ_1^ε ≔ [Jγζ(α−α_ε−D+1)]^{−1}μ. Note that κ_1^ε is system-independent and finite because α−α_ε−D+1 ≥ 1+ε.
Combining Eqs. (<ref>), (<ref>), and (<ref>) yields
τ≥κ_1^ε d_XY^α_ε,
which completes the proof.
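As an illustrative sanity check of this chain of inequalities (our sketch, not the paper's): for a single particle (N = 1) hopping on a one-dimensional chain with |J_{ij}| = J/‖i − j‖^α and α = 3, one may take α_ε = 1, so the Wasserstein distance with cost c_{ij} = ‖i − j‖ reduces to the L^1 distance between cumulative distributions, and ∫_0^τ Φ_t dt ≥ W(x_0, x_τ) can be verified directly; all parameter values below are arbitrary.

```python
import numpy as np

n_sites, J, alpha, dt, steps = 40, 1.0, 3.0, 0.005, 1000
sites = np.arange(n_sites)
dist = np.abs(sites[:, None] - sites[None, :]).astype(float)
H = np.where(dist > 0, J / np.where(dist > 0, dist, 1.0) ** alpha, 0.0)

# Exact one-step propagator via eigendecomposition (H is real symmetric).
evals, evecs = np.linalg.eigh(H)
U_dt = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T

psi = np.zeros(n_sites, complex); psi[0] = 1.0   # the boson starts at site 0
x0 = np.abs(psi) ** 2
total_cost = 0.0
for _ in range(steps):
    # phi_ij(t) = 2 J_ij N^{-1} Im<b_j^dag b_i>; only |phi_ij| enters Phi_t
    phi = 2.0 * H * np.imag(np.conj(psi)[None, :] * psi[:, None])
    total_cost += 0.5 * np.sum(dist * np.abs(phi)) * dt  # Phi_t dt, c_ij = |i-j|
    psi = U_dt @ psi
x_tau = np.abs(psi) ** 2
W = np.sum(np.abs(np.cumsum(x0 - x_tau)))    # W with cost |i-j|, via CDFs
print(total_cost >= W, float(total_cost), float(W))
```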
Proof of Theorem <ref>. The proof strategy is similar to that of Theorem <ref>.
The difference is that we consider the probability distribution of boson numbers at all sites instead of the vector of average boson concentrations.
Define the probability p_{N⃗}(t) ≔ tr{Π_{N⃗} ϱ_t}.
Since the total boson number is invariant, we need only consider feasible states |N⃗⟩ that satisfy n̂_Λ |N⃗⟩ = N |N⃗⟩.
The time evolution of the probability distribution p_t = [p_{N⃗}(t)] can be derived from the von Neumann equation as
ṗ_{N⃗}(t) = −i tr{Π_{N⃗}[H_t, ϱ_t]}
= ∑_{i≠j} −i J_{ij}(t) √(n_i n_j′) (⟨N⃗|ϱ_t|N⃗′⟩ − ⟨N⃗′|ϱ_t|N⃗⟩)
≕ ∑_{i≠j} φ_{N⃗N⃗′}(t),
where N⃗′ is obtained from N⃗ by setting n_i′ = n_i − 1, n_j′ = n_j + 1, and n_k′ = n_k for all k ≠ i, j.
Note that φ_N⃗N⃗'(t)∈R.
Two states N⃗ and M⃗ are said to be neighboring if there exist i≠ j such that |n_i-m_i|=|n_j-m_j|=1 and n_k=m_k for any k≠ i,j.
For such neighboring states, the transport cost is defined as c_{N⃗M⃗} = ‖i − j‖^{α_ε}.
For arbitrary states N⃗ and M⃗, the cost is defined as the shortest-path cost over all possible paths connecting the two states:
c_M⃗N⃗=min∑_k=1^Kc_N⃗_kN⃗_k-1,
where N⃗_0=N⃗, N⃗_K=M⃗, and N⃗_k-1 and N⃗_k are neighboring states for all 1≤ k≤ K.
Obviously, c_N⃗N⃗=0.
Using the defined shortest-path costs, we consider the following Wasserstein distance:
W(p, q) ≔ min_{π ∈ C(p,q)} ∑_{M⃗,N⃗} c_{M⃗N⃗} π_{M⃗N⃗}.
Following the same strategy as in Eqs. (<ref>)–(<ref>), we can similarly prove that
(1/2) ∫_0^τ ∑_{N⃗} ∑_{i≠j} c_{N⃗N⃗′} |φ_{N⃗N⃗′}(t)| dt ≥ W(p_0, p_τ).
Applying the Cauchy-Schwarz inequality, we obtain
|φ_N⃗N⃗'(t)|≤|J_ij(t)|[n_ip_N⃗(t)+n_j'p_N⃗'(t)].
Consequently, the left-hand side of Eq. (<ref>) can be upper bounded as follows:
(1/2) ∑_{N⃗} ∑_{i≠j} c_{N⃗N⃗′} |φ_{N⃗N⃗′}(t)| ≤ ∑_{N⃗} p_{N⃗}(t) ∑_i n_i ∑_{j(≠i)} J/‖i − j‖^{α−α_ε}
≤ ∑_{N⃗} p_{N⃗}(t) N Jγ ζ(α−α_ε−D+1)
= N Jγ ζ(α−α_ε−D+1).
Let S_0 = {N⃗ : ∑_{i∈X_c} n_i ≤ N_0} and S_τ = {N⃗ : ∑_{i∈Y} n_i ≥ N_0 + Δ N_0}.
To reach a state M⃗ ∈ S_τ from a state N⃗ ∈ S_0 by hopping bosons between sites, at least Δ N_0 bosons must be transferred from sites in X to sites in Y.
Let N⃗_{i_0} → N⃗_{i_1} → … → N⃗_{i_L} be a sequence of states that transfers a boson from site i_0 ∈ X to site i_L ∈ Y by iteratively hopping the boson between sites i_{l−1} and i_l for 1 ≤ l ≤ L. Then, by applying the triangle inequality, we obtain that the cost of transferring one boson is lower bounded as
∑_{l=1}^{L} c_{N⃗_{i_l} N⃗_{i_{l−1}}} = ∑_{l=1}^{L} ‖i_l − i_{l−1}‖^{α_ε} ≥ ‖i_L − i_0‖^{α_ε} ≥ d_{XY}^{α_ε}.
Therefore, we immediately obtain c_{M⃗N⃗} ≥ Δ N_0 d_{XY}^{α_ε} for any M⃗ ∈ S_τ and N⃗ ∈ S_0.
Since P[n̂_{X_c} > N_0]_{ϱ_0} = 0, we have p_{N⃗}(0) = 0 for any N⃗ ∉ S_0.
Combining this with p_{N⃗}(0) = ∑_{M⃗} π_{M⃗N⃗} immediately yields π_{M⃗N⃗} = 0 for any N⃗ ∉ S_0 and any M⃗.
Using these facts, the Wasserstein distance can be lower bounded as follows:
W(p_0, p_τ) = min_π ∑_{M⃗,N⃗} c_{M⃗N⃗} π_{M⃗N⃗}
≥ min_π ∑_{M⃗∈S_τ, N⃗∈S_0} c_{M⃗N⃗} π_{M⃗N⃗}
≥ Δ N_0 d_{XY}^{α_ε} min_π ∑_{M⃗∈S_τ} ∑_{N⃗∈S_0} π_{M⃗N⃗}
= Δ N_0 d_{XY}^{α_ε} min_π ∑_{M⃗∈S_τ} p_{M⃗}(τ)
= Δ N_0 d_{XY}^{α_ε} P[n̂_Y ≥ N_0 + Δ N_0]_{ϱ_τ}.
Here, we use the relation ∑_{N⃗∈S_0} π_{M⃗N⃗} = ∑_{N⃗} π_{M⃗N⃗} = p_{M⃗}(τ) in the fourth line.
Combining Eqs. (<ref>), (<ref>), and (<ref>) yields
P[n̂_Y ≥ N_0 + Δ N_0]_{ϱ_τ} ≤ N Jγ ζ(α−α_ε−D+1) τ/(Δ N_0 d_{XY}^{α_ε}),
which completes the proof by setting κ_2^ε ≔ Jγ ζ(α−α_ε−D+1).
Conclusions. Using optimal transport theory, we elucidated fundamental limits on the speed of particle transport in long-range bosonic systems, from the perspectives of both quantum speed limits and the Lieb-Robinson bound.
The results hold for the entire range of power decay and arbitrary initial states under generic long-range characteristics.
Our findings completely resolve the critical open problem of the speed of macroscopic particle transport.
It is worth noting that the speed of bosonic transport may not be finite in the presence of higher-order hopping terms (i.e., terms like b̂_ib̂_jb̂_k^†b̂_l^†), such as interaction-induced tunneling terms <cit.> (see the Supplemental Material <cit.> for a discussion).
The exploration of the speed of information propagation is left as future work.
T.V.V. was supported by JSPS KAKENHI Grant Number JP23K13032.
T.K. acknowledges Hakubi projects of RIKEN and was supported by JSPS KAKENHI Grant Number 18K13475 and JST, PRESTO Grant Number JPMJPR2116.
K.S. was supported by JSPS KAKENHI Grant Numbers JP23H01099, JP19H05603, and JP19H05791.
References
[1] E. H. Lieb and D. W. Robinson, The finite group velocity of quantum spin systems, Commun. Math. Phys. 28, 251 (1972).
[2] M. B. Hastings and X.-G. Wen, Quasiadiabatic continuation of quantum states: The stability of topological ground-state degeneracy and emergent gauge invariance, Phys. Rev. B 72, 045141 (2005).
[3] S. Bravyi, M. B. Hastings, and F. Verstraete, Lieb-Robinson bounds and the generation of correlations and topological quantum order, Phys. Rev. Lett. 97, 050401 (2006).
[4] M. B. Hastings and T. Koma, Spectral gap and exponential decay of correlations, Commun. Math. Phys. 265, 781 (2006).
[5] B. Nachtergaele and R. Sims, Lieb-Robinson bounds and the exponential clustering theorem, Commun. Math. Phys. 265, 119 (2006).
[6] T. J. Osborne, Efficient approximation of the dynamics of one-dimensional quantum spin systems, Phys. Rev. Lett. 97, 157202 (2006).
[7] M. B. Hastings, An area law for one-dimensional quantum systems, J. Stat. Mech.: Theory Exp. 2007, P08024.
[8] K. Van Acoleyen, M. Mariën, and F. Verstraete, Entanglement rates and area laws, Phys. Rev. Lett. 111, 170501 (2013).
[9] D. A. Roberts and B. Swingle, Lieb-Robinson bound and the butterfly effect in quantum field theories, Phys. Rev. Lett. 117, 091602 (2016).
[10] E. Iyoda, K. Kaneko, and T. Sagawa, Fluctuation theorem for many-body pure quantum states, Phys. Rev. Lett. 119, 100601 (2017).
[11] A. Anshu, S. Arunachalam, T. Kuwahara, and M. Soleimanifar, Sample-efficient learning of interacting quantum systems, Nat. Phys. 17, 931 (2021).
[12] A. M. Alhambra and J. I. Cirac, Locally accurate tensor networks for thermal states and time evolution, PRX Quantum 2, 040331 (2021).
[13] T. Kuwahara and K. Saito, Exponential clustering of bipartite quantum entanglement at arbitrary temperatures, Phys. Rev. X 12, 021022 (2022).
[14] C.-F. Chen, A. Lucas, and C. Yin, Speed limits and locality in many-body quantum dynamics, arXiv:2303.07386 (2023).
[15] J. Eisert, M. van den Worm, S. R. Manmana, and M. Kastner, Breakdown of quasilocality in long-range quantum lattice models, Phys. Rev. Lett. 111, 260401 (2013).
[16] M. Saffman, T. G. Walker, and K. Mølmer, Quantum information with Rydberg atoms, Rev. Mod. Phys. 82, 2313 (2010).
[17] M. Foss-Feig, Z.-X. Gong, C. W. Clark, and A. V. Gorshkov, Nearly linear light cones in long-range interacting quantum systems, Phys. Rev. Lett. 114, 157201 (2015).
[18] T. Matsuta, T. Koma, and S. Nakamura, Improving the Lieb-Robinson bound for long-range interactions, Ann. Henri Poincaré 18, 519 (2016).
[19] C.-F. Chen and A. Lucas, Finite speed of quantum scrambling with long range interactions, Phys. Rev. Lett. 123, 250605 (2019).
[20] T. Kuwahara and K. Saito, Strictly linear light cones in long-range interacting systems of arbitrary dimensions, Phys. Rev. X 10, 031010 (2020).
[21] D. V. Else, F. Machado, C. Nayak, and N. Y. Yao, Improved Lieb-Robinson bound for many-body Hamiltonians with power-law interactions, Phys. Rev. A 101, 022333 (2020).
[22] M. C. Tran, A. Y. Guo, A. Deshpande, A. Lucas, and A. V. Gorshkov, Optimal state transfer and entanglement generation in power-law interacting systems, Phys. Rev. X 11, 031016 (2021).
[23] T. Kuwahara and K. Saito, Absence of fast scrambling in thermodynamically stable long-range interacting systems, Phys. Rev. Lett. 126, 030604 (2021).
[24] M. C. Tran, A. Y. Guo, C. L. Baldwin, A. Ehrenberg, A. V. Gorshkov, and A. Lucas, Lieb-Robinson light cone for power-law interactions, Phys. Rev. Lett. 127, 160401 (2021).
[25] C.-F. Chen and A. Lucas, Optimal Frobenius light cone in spin chains with power-law interactions, Phys. Rev. A 104, 062420 (2021).
[26] Z. Gong, T. Guaita, and J. I. Cirac, Long-range free fermions: Lieb-Robinson bound, clustering properties, and topological phases, Phys. Rev. Lett. 130, 070401 (2023).
[27] I. Bloch, J. Dalibard, and W. Zwerger, Many-body physics with ultracold gases, Rev. Mod. Phys. 80, 885 (2008).
[28] A. M. Läuchli and C. Kollath, Spreading of correlations and entanglement after a quench in the one-dimensional Bose-Hubbard model, J. Stat. Mech.: Theory Exp. 2008, P05018.
[29] G. Carleo, F. Becca, L. Sanchez-Palencia, S. Sorella, and M. Fabrizio, Light-cone effect and supersonic correlations in one- and two-dimensional bosonic superfluids, Phys. Rev. A 89, 031602 (2014).
[30] M. Cheneau, P. Barmettler, D. Poletti, M. Endres, P. Schauß, T. Fukuhara, C. Gross, I. Bloch, C. Kollath, and S. Kuhr, Light-cone-like spreading of correlations in a quantum many-body system, Nature 481, 484 (2012).
[31] Y. Takasu, T. Yagami, H. Asaka, Y. Fukushima, K. Nagao, S. Goto, I. Danshita, and Y. Takahashi, Energy redistribution and spatiotemporal evolution of correlations after a sudden quench of the Bose-Hubbard model, Science Advances 6 (2020).
[32] N. Schuch, S. K. Harrison, T. J. Osborne, and J. Eisert, Information propagation for interacting-particle systems, Phys. Rev. A 84, 032309 (2011).
[33] Z. Wang and K. R. Hazzard, Tightening the Lieb-Robinson bound in locally interacting systems, PRX Quantum 1, 010303 (2020).
[34] T. Kuwahara and K. Saito, Lieb-Robinson bound and almost-linear light cone in interacting boson systems, Phys. Rev. Lett. 127, 070403 (2021).
[35] C. Yin and A. Lucas, Finite speed of quantum information in models of interacting bosons at finite density, Phys. Rev. X 12, 021039 (2022).
[36] J. Faupin, M. Lemm, and I. M. Sigal, On Lieb-Robinson bounds for the Bose-Hubbard model, Commun. Math. Phys. (2022).
[37] T. Kuwahara, T. Van Vu, and K. Saito, Optimal light cone and digital quantum simulation of interacting bosons, arXiv:2206.14736 (2022).
[38] J. Faupin, M. Lemm, and I. M. Sigal, Maximal speed for macroscopic particle transport in the Bose-Hubbard model, Phys. Rev. Lett. 128, 150602 (2022).
[39] L. Mandelstam and I. Tamm, The uncertainty relation between energy and time in non-relativistic quantum mechanics, J. Phys. USSR 9, 249 (1945).
[40] A. Uhlmann, An energy dispersion estimate, Phys. Lett. A 161, 329 (1992).
[41] N. Margolus and L. B. Levitin, The maximum speed of dynamical evolution, Physica D 120, 188 (1998).
[42] A. del Campo, I. L. Egusquiza, M. B. Plenio, and S. F. Huelga, Quantum speed limits in open system dynamics, Phys. Rev. Lett. 110, 050403 (2013).
[43] S. Deffner and E. Lutz, Quantum speed limit for non-Markovian dynamics, Phys. Rev. Lett. 111, 010402 (2013).
[44] M. M. Taddei, B. M. Escher, L. Davidovich, and R. L. de Matos Filho, Quantum speed limit for physical processes, Phys. Rev. Lett. 110, 050402 (2013).
[45] D. P. Pires, M. Cianciaruso, L. C. Céleri, G. Adesso, and D. O. Soares-Pinto, Generalized geometric quantum speed limits, Phys. Rev. X 6, 021031 (2016).
[46] D. Mondal, C. Datta, and S. Sazim, Quantum coherence sets the quantum speed limit for mixed states, Phys. Lett. A 380, 689 (2016).
[47] S. Deffner, Geometric quantum speed limits: a case for Wigner phase space, New J. Phys. 19, 103018 (2017).
[48] B. Shanahan, A. Chenu, N. Margolus, and A. del Campo, Quantum speed limits across the quantum-to-classical transition, Phys. Rev. Lett. 120, 070401 (2018).
[49] F. Campaioli, F. A. Pollock, F. C. Binder, and K. Modi, Tightening quantum speed limits for almost all states, Phys. Rev. Lett. 120, 060409 (2018).
[50] K. Funo, N. Shiraishi, and K. Saito, Speed limit for open quantum systems, New J. Phys. 21, 013006 (2019).
[51] L. P. García-Pintos and A. del Campo, Quantum speed limits under continuous quantum measurements, New J. Phys. 21, 033012 (2019).
[52] S. Sun, Y. Peng, X. Hu, and Y. Zheng, Quantum speed limit quantified by the changing rate of phase, Phys. Rev. Lett. 127, 100404 (2021).
[53] T. Van Vu and Y. Hasegawa, Geometrical bounds of the irreversibility in Markovian systems, Phys. Rev. Lett. 126, 010601 (2021).
[54] N. Shiraishi and K. Saito, Speed limit for open systems coupled to general environments, Phys. Rev. Res. 3, 023074 (2021).
[55] R. Hamazaki, Speed limits for macroscopic transitions, PRX Quantum 3, 020319 (2022).
[56] T. Van Vu and K. Saito, Topological speed limit, Phys. Rev. Lett. 130, 010402 (2023).
[57] Y. Hasegawa, Unifying speed limit, thermodynamic uncertainty relation and Heisenberg principle via bulk-boundary correspondence, Nat. Commun. 14 (2023).
[58] S. Deffner and S. Campbell, Quantum speed limits: from Heisenberg's uncertainty principle to optimal quantum control, J. Phys. A 50, 453001 (2017).
[59] C. Villani, Optimal Transport: Old and New (Springer, Berlin, Heidelberg, 2008).
[60] Note that it may not be the case when considering the speed of information propagation.
[61] M. C. Tran, C.-F. Chen, A. Ehrenberg, A. Y. Guo, A. Deshpande, Y. Hong, Z.-X. Gong, A. V. Gorshkov, and A. Lucas, Hierarchy of linear light cones with long-range interactions, Phys. Rev. X 10, 031009 (2020).
[62] S. Kolouri, S. R. Park, M. Thorpe, D. Slepcev, and G. K. Rohde, Optimal mass transport: Signal processing and machine-learning applications, IEEE Signal Process. Mag. 34, 43 (2017).
[63] S. Haker, L. Zhu, A. Tannenbaum, and S. Angenent, Optimal mass transport for registration and warping, Int. J. Comput. Vision 60, 225 (2004).
[64] G. Huang, C. Guo, M. J. Kusner, Y. Sun, F. Sha, and K. Q. Weinberger, Supervised word mover's distance, in Advances in Neural Information Processing Systems, Vol. 29 (2016).
[65] G. Schiebinger, J. Shu, M. Tabaka, B. Cleary, V. Subramanian, A. Solomon, J. Gould, S. Liu, S. Lin, P. Berube, et al., Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming, Cell 176, 928 (2019).
[66] E. Aurell, C. Mejía-Monasterio, and P. Muratore-Ginanneschi, Optimal protocols and optimal transport in stochastic thermodynamics, Phys. Rev. Lett. 106, 250601 (2011).
[67] M. Nakazato and S. Ito, Geometrical aspects of entropy production in stochastic thermodynamics based on Wasserstein distance, Phys. Rev. Res. 3, 043093 (2021).
[68] A. Dechant, Minimum entropy production, detailed balance and Wasserstein distance for continuous-time Markov processes, J. Phys. A 55, 094001 (2022).
[69] T. Van Vu and K. Saito, Thermodynamic unification of optimal transport: Thermodynamic uncertainty relation, minimum dissipation, and thermodynamic speed limits, Phys. Rev. X 13, 011013 (2023).
[70] T. Sowiński, O. Dutta, P. Hauke, L. Tagliacozzo, and M. Lewenstein, Dipolar molecules in optical lattices, Phys. Rev. Lett. 108, 115301 (2012).
[71] See Supplemental Material for a discussion on the speed of bosonic transport in quantum bosonic systems with interaction-induced tunneling terms.
|
http://arxiv.org/abs/2307.02632v1
|
20230705200426
|
Stability of Q-Learning Through Design and Optimism
|
[
"Sean Meyn"
] |
cs.LG
|
[
"cs.LG",
"cs.SY",
"eess.SY",
"math.OC",
"68T05, 93E35, 62L20, 93E20"
] |
Stability of Q-Learning Through Design and Optimism
Sean Meyn
SPM is with the University of Florida, Gainesville, FL 32611.
Financial support from ARO award W911NF2010055
is gratefully acknowledged. This article was created in part to support the 2023 INFORMS Applied Probability Society lecture
— slides available at researchgate.net <cit.>
August 1, 2023
Q-learning has become an important part of the reinforcement learning toolkit since its introduction in the dissertation of
Chris Watkins in the 1980s.
The purpose of this paper is in part a tutorial on stochastic approximation and Q-learning, providing details regarding the inaugural Applied Probability Trust Plenary Lecture, presented at INFORMS APS 2023 in Nancy, France, in June 2023.
The paper also presents new approaches to ensure stability and potentially accelerated convergence for these algorithms, and stochastic approximation in other settings. Two contributions are entirely new:
1.
Stability of Q-learning with linear function approximation has been an open topic for research for over three decades. It is shown that with appropriate optimistic training in the form of a modified Gibbs policy, there exists a solution to the projected Bellman equation, and the algorithm is stable (in terms of bounded parameter estimates). Convergence remains one of many open topics for research.
2.
The new Zap Zero algorithm is designed to approximate the Newton-Raphson flow without matrix inversion. It is stable and convergent under mild assumptions on the mean flow vector field for the algorithm and compatible statistical assumptions on an underlying Markov chain. The algorithm is a general approach to stochastic approximation, which in particular applies to Q-learning with "oblivious" training, even with non-linear function approximation.
MSC 2020 Subject classifications: Primary 93E35 ; Secondary 68T05, 62L20, 93E20
§ INTRODUCTION
The article concerns Q-learning algorithms, motivated by the same objective as in the first formulation of Watkins <cit.>: the infinite-horizon optimal control problem, with state-action value function
Q^⋆(x, u) = min E[∑_{k=0}^{∞} γ^k c(X_k, U_k) | X_0 = x, U_0 = u].
The state process {X_k : k ≥ 0} evolves on a state space denoted 𝖷, and the action (or input) process {U_k : k ≥ 0} evolves on a finite set 𝖴; c : 𝖷 × 𝖴 → ℝ is the one-step cost function, and γ ∈ (0, 1) the discount factor.
The minimum in (<ref>) is over all history-dependent input sequences.
Under standard Markovian assumptions reviewed in <Ref>, an optimal input is obtained by state feedback ϕ^⋆ : 𝖷 → 𝖴, with ϕ^⋆(x) ∈ arg min_u Q^⋆(x, u) for each x <cit.>. Moreover, the Q-function Q^⋆ solves the Bellman equation,
Q^⋆(x, u) = c(x, u) + γ E[Q̲^⋆(X_{k+1}) | X_k = x, U_k = u], x ∈ 𝖷, u ∈ 𝖴, k ≥ 0,
where throughout the paper an under-bar denotes a minimum over the action: H̲(x) ≔ min_u H(x, u), x ∈ 𝖷, for any function H : 𝖷 × 𝖴 → ℝ.
The objective of Q-learning is to obtain an approximate solution to (<ref>) within a parameterized class {Q^θ : θ ∈ ℝ^d}. Typical in theoretical analysis is linear function approximation,
{Q^θ = θ^⊺ψ : θ ∈ ℝ^d}, with ψ a vector of basis functions.
Given an approximation within this class, we obtain a policy (i.e., state feedback law) ϕ^θ : 𝖷 → 𝖴:
ϕ^θ(x) ∈ arg min_u Q^θ(x, u),
with some fixed rule in place in case of ties.
Much of the present article focuses on a generalization of the original algorithm of Watkins:
for initialization θ_0 ∈ ℝ^d, define the sequence of estimates recursively:
θ_{n+1} = θ_n + α_{n+1} 𝒟_{n+1} ζ_n,
ζ_n = ∇_θ Q^θ(X_n, U_n) |_{θ = θ_n},
𝒟_{n+1} = c(X_n, U_n) + γ Q̲^{θ_n}(X_{n+1}) − Q^{θ_n}(X_n, U_n),
in which {α_n} is a non-negative step-size sequence.
See <cit.> for a range of interpretations of the algorithm. The vectors {ζ_n} are entirely analogous to the eligibility vectors used in the TD(0) algorithm <cit.>, and {𝒟_{n+1}} is known as the temporal difference sequence.
The recursion (<ref>) reduces to the original tabular Q-learning algorithm when using a tabular basis <cit.> (see <Ref>
for definitions).
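For concreteness, here is a minimal sketch of the recursion above with linear function approximation and oblivious (uniform random) training; the environment interface env_step and the basis psi are illustrative placeholders, not objects defined in this paper.

```python
import numpy as np

def q_learning(env_step, psi, n_actions, d, x0=0,
               gamma=0.8, g=1.0, rho=0.8, n_iter=100_000, seed=0):
    """Watkins' recursion with Q^theta = theta' psi;
    env_step(x, u) -> (x_next, cost) is a stand-in for the MDP."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)
    x = x0
    for n in range(1, n_iter + 1):
        u = int(rng.integers(n_actions))                 # oblivious input
        x_next, cost = env_step(x, u)
        zeta = psi(x, u)                                 # eligibility vector
        q_min = min(psi(x_next, v) @ theta for v in range(n_actions))
        td = cost + gamma * q_min - psi(x, u) @ theta    # temporal difference
        theta = theta + g * n ** (-rho) * td * zeta      # alpha_n = g n^{-rho}
        x = x_next
    return theta
```

With a tabular basis (indicator functions over state-action pairs), this reduces to the original algorithm of Watkins.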
The goal of Q-learning is to approximate the solution θ^* of the projected Bellman equation,
0 = E[{c(X_n, U_n) + γ Q̲^{θ^*}(X_{n+1}) − Q^{θ^*}(X_n, U_n)} ζ_n],
in which the expectation is in steady-state.
Soon after Q-learning was introduced, it was recognized that the algorithm can be cast within the framework of stochastic approximation (SA) <cit.>. To explain the contributions and approach to analysis in this paper it is necessary to first explain why (<ref>) can also be cast as an SA recursion, subject to mild assumptions on the input used for training.
§.§ A few warnings
For readers with background in reinforcement learning, some notation may not be familiar, and some goals may not seem standard.
1. We use π for invariant measures, following a long tradition in the theory of Markov chains <cit.>.
Apologies to those of you who prefer "pi" for "policy".
2.
Finite-n bounds (sample complexity bounds) are valuable in the theory of bandits.
There has not been comparable success in reinforcement learning, in part because present bounds are very loose. Perhaps sample complexity theory will evolve to become more practical. This paper focuses on asymptotic statistics for comparing policies, as well as heuristics based on ODE techniques to gain insight on transient behavior of algorithms.
The most valuable tool from asymptotic statistics is the Central Limit Theorem.
For the basic SA recursion (<ref>), the CLT typically holds for the scaled error z_n = θ̃_n/√(α_n), with θ̃_n ≔ θ_n − θ^*, along with convergence of the scaled mean-square error:
lim_{n→∞} (1/α_n) E[θ̃_n θ̃_n^⊺] = Σ_θ.
We typically take {α_n} "big" to reduce transients, such as α_n = n^{−ρ} with 1/2 < ρ < 1. The limit above implies slow convergence for large n, but this is ameliorated via the averaging technique of Polyak and Ruppert (defined in (<ref>) below), yielding
lim_{n→∞} n E[θ̃_n^{PR} (θ̃_n^{PR})^⊺] = Σ^{PR},
with θ̃_n^{PR} the error of the averaged estimates, in which Σ^{PR} is minimal in a matricial sense; see <Ref> for definitions.
This covariance matrix can be estimated using the batch means method, which requires performing many relatively short runs with distinct initial conditions <cit.>.
§.§ Some history
Discussion concerning the challenges surrounding Q-learning is postponed to <Ref>, but we highlight a central open issue here: it is not known if the projected Bellman equation (<ref>) has a solution outside of very special cases.
Success stories surveyed in <cit.> include the special case of binning <cit.>, which is a generalization of the tabular setting, and
the criterion in <cit.> and its improvement in <cit.>, but the assumptions are not easily verified in practice. The progress report in <cit.> states that the only known convergence result is due to Melo et al. <cit.>.
See <cit.> for further discussion, and <cit.> for recent insight.
This open problem was a topic of discussion throughout the Simons program on reinforcement learning held in 2020, especially during the bootcamp lectures <cit.>.
<Ref> resolves this open problem for Q-learning with optimistic training. Following many preliminaries, the proof of <Ref>
is similar to the proof of convergence of TD(λ) learning from the dissertation of Van Roy <cit.>, and the assumptions are related to the assumptions in this prior work, even though the setting is very different.
The recent paper <cit.> considers Q-learning with linear function approximation and oblivious training.
With sufficiently large regularization they obtain a unique equilibrium for the algorithm that approximates the solution to the projected Bellman equation.
It is likely that their results can be improved using optimistic training as in the present work.
The lack of theory motivated Baird's gradient-descent approach <cit.> and GQ-learning <cit.>, in which the root-finding problem is replaced with the minimization of a loss function; see <Ref> for details.
Zap stochastic approximation was introduced to ensure convergence, and also provide acceleration <cit.>. While originally proposed for Q-learning with linear function approximation, it was later shown to be convergent even with nonlinear function approximation <cit.>, and the general technique applies to any application in which stochastic approximation is used.
A version of the Zap-Zero algorithm was introduced in <cit.>, whose form is motivated in part by the two-time-scale SA algorithm introduced in <cit.>.
The Zap-Zero algorithm presented here in (<ref>) is entirely new, and convergent under far weaker conditions.
Much recent research has focused on linear MDPs, notably <cit.>, in which the system dynamics are partially known: for a known "feature map" ϕ : 𝖷 × 𝖴 → ℝ^d and an unknown collection of probability measures {μ_i : 1 ≤ i ≤ d} on 𝖷, a linear MDP is assumed to have a controlled transition matrix of the form P_u(x, x') = ∑_{i=1}^{d} ϕ_i(x, u) μ_i(x'). There is now a relatively complete theory for this special case, in which the algorithm is designed based on knowledge of the feature map.
The reader is encouraged to see <cit.>
for new approaches to Q-learning based on convex programming approaches to MDPs. It is hoped that the general techniques presented in this paper may be adapted to these new algorithms.
§.§ Overview
Following a summary of notation and key results from stochastic approximation theory in <Ref>, the paper sets out to survey results from the theory of Q-learning, including these highlights:
1. <Ref> reviews theory for Q-learning with linear function approximation. It is now well known that there are challenges even in the simplest tabular setting, in which convergence holds but is very slow. Methods are surveyed to accelerate convergence. The theory is restricted to oblivious training, meaning that the input during training is independent of the parameter estimates.
Consideration of optimistic policies is postponed to <Ref>, which contains entirely new theory: if a smooth approximation of the ε-greedy policy is used for training, then under mild conditions the parameter estimates are bounded, and there exists a solution to the projected Bellman equation (see <Ref>).
Unfortunately, convergence to θ^* remains one of many open problems for research.
2. <Ref> contains a survey of the author's favorite approach, known as Zap Q-learning; the theory is elegant and the approach is stable even with nonlinear function approximation. A major problem with this approach is the need for a matrix inversion in each iteration of the algorithm.
A new algorithm and theory is presented here for the first time in <Ref>: the Zap Zero algorithm is designed to avoid matrix inversion, and the complexity of matrix-vector multiplication can be tamed (see <Ref>).
The theory in <Ref> is restricted to oblivious training. The extension to more efficient training techniques, such as ε-greedy or approaches based on Thompson sampling, is another topic for future research.
§ STOCHASTIC APPROXIMATION AND REINFORCEMENT LEARNING
This section is devoted to three topics: assumptions surrounding the
Markov Decision Process (MDP) model,
a brief summary of results from the theory of stochastic approximation,
followed by
assumptions surrounding the Q-learning algorithms to be considered.
§.§ Markov Decision Process
The first set of assumptions and notation concern the control system model.
While the search for an optimal policy may be restricted to static state feedback under the assumptions imposed below, in reinforcement learning it is standard practice to introduce randomization in policies as a way of introducing exploration during training. We restrict to randomized policies of the form,
U_k = ϕ̃(X_k, θ_k, I_k), k ≥ 0,
in which I = {I_1, I_2, …} is an i.i.d. sequence. Under the assumption that 𝖷 and 𝖴 are finite, we can assume without loss of generality that I evolves on a finite set.
The input-state dynamics are assumed to be defined by a controlled Markov chain, with controlled transition matrix P. For any randomized stationary policy,
𝖯{X_{k+1} = x' | X_k = x, U_k = u} = P_u(x, x'), x, x' ∈ 𝖷, u ∈ 𝖴, k ≥ 0.
Hence the dynamic programming equation (<ref>) may be expressed
Q^⋆(x, u) = c(x, u) + γ ∑_{x' ∈ 𝖷} P_u(x, x') Q̲^⋆(x'), x ∈ 𝖷, u ∈ 𝖴.
§.§ What is stochastic approximation?
A fuller answer may be found in any of the standard monographs, such as <cit.> (see also <cit.> for a crash course).
The goal of SA is to solve the root-finding problem f̄(θ^*) = 0, where the function f̄ is defined in terms of an expectation: f̄(θ) ≔ E[f(θ, Φ)] for θ ∈ ℝ^d, with Φ a random vector.
The general SA algorithm is expressed in two forms:
θ_{n+1} = θ_n + α_{n+1} f(θ_n, Φ_{n+1})
= θ_n + α_{n+1} [f̄(θ_n) + Δ_{n+1}], n ≥ 0,
where (<ref>) introduces the notation Δ_{n+1} ≔ f(θ_n, Φ_{n+1}) − f̄(θ_n).
It is assumed that the sequence of random vectors {Φ_n } converges in distribution to Φ.
The algorithm is motivated by ordinary differential equation (ODE) theory, and this theory plays a large part in establishing convergence of (<ref>), along with convergence rates. These results are obtained by comparing solutions of (<ref>) to solutions of the mean flow,
(d/dt) ϑ_t = f̄(ϑ_t).
In particular, θ^* is a stationary point of this ODE.
Averaging
A large step-size {α_{n+1}} in (<ref>) is desirable for quick transient response, but this typically leads to high variance. There is no conflict if the "noisy" parameter estimates are averaged. The averaging technique of Polyak and Ruppert defines
θ^{PR}_n = (1/n) ∑_{k=1}^{n} θ_k, n ≥ 1.
<Ref> illustrates the value of this approach.
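A toy illustration of the SA recursion and the running Polyak-Ruppert average (our example, not from the paper): take f(θ, Φ) = −θ + Φ with Φ an AR(1) Markov chain, so that f̄(θ) = −θ and θ^* = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, theta_pr, phi = 5.0, 0.0, 0.0
g, rho = 1.0, 0.7                        # alpha_n = g n^{-rho}, 1/2 < rho < 1
for n in range(1, 100_001):
    phi = 0.9 * phi + rng.normal()       # Markovian (AR(1)) noise
    theta += g * n ** (-rho) * (-theta + phi)
    theta_pr += (theta - theta_pr) / n   # running Polyak-Ruppert average
print(theta, theta_pr)                   # theta is noisy; theta_pr is near 0
```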
Basic SA assumptions
The following are imposed in this section, and in some others that follow.
It is assumed that the step-size sequence {α_n : n ≥ 1} is deterministic, satisfies 0 < α_n ≤ 1, and
∑_{n=1}^{∞} α_n = ∞, ∑_{n=1}^{∞} α_n^2 < ∞.
Much of the theory in this paper is restricted to the special case α_n = g n^{−ρ} with 1/2 < ρ ≤ 1 and g > 0.
We sometimes require two-time-scale algorithms in which there is a second step-size sequence {β_n : n ≥ 1} that is relatively large:
lim_{n→∞} α_n/β_n = 0.
SA1 The function f̄ is globally Lipschitz continuous.
SA2 Φ = {Φ_n} is a time-homogeneous Markov chain that evolves on a finite set, with unique invariant pmf π.
SA3 The mean flow (<ref>) is globally asymptotically stable, with unique equilibrium θ^*.
The final assumption is needed to obtain useful bounds on the rate of convergence, which requires the existence of a linearization (at least in a neighborhood of θ^*). Denote
A(θ) = ∂_θ f̄(θ).
SA4 The derivative (<ref>) is a continuous function of θ, and A^* ≔ A(θ^*) is a Hurwitz matrix (its eigenvalues lie in the strict left half plane).
Assumptions (SA1)–(SA3) imply convergence of {θ_n} to θ^* almost surely from each initial condition, provided one more property is established:
The parameter sequence {θ_n : n ≥ 0} is bounded with probability one from each initial condition.
Verification of (SA3) is typically achieved through a Lyapunov function analysis.
Lyapunov techniques also provide a means of establishing (<ref>). One approach is described next.
ODE@∞
The so-called Borkar-Meyn theorem of <cit.> is one approach to establishing (<ref>).
This result concerns the time-homogeneous ODE ẋ = f̄_∞(x) (the "ODE@∞") with vector field
f̄_∞(θ) ≔ lim_{r→∞} r^{−1} f̄(rθ).
We always have f̄_∞(0) = 0, which means that the origin is an equilibrium for the ODE@∞. The vector field is also radially homogeneous, f̄_∞(rθ) = r f̄_∞(θ) for any θ ∈ ℝ^d and r > 0. Based on these properties it is known that local asymptotic stability of the origin implies global exponential asymptotic stability <cit.>.
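For intuition, the scaling limit defining the ODE@∞ is easy to evaluate numerically; the following toy vector field is our own example, not one from the paper.

```python
import numpy as np

fbar = lambda theta: -theta + np.sin(theta)   # toy mean-flow vector field
for r in (1e2, 1e4, 1e6):
    print(r, fbar(r * 1.0) / r)               # converges to fbar_inf(1.0) = -1.0
```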
The following is an alternative to the ODE@∞ criterion, which is equivalent whenever the limit (<ref>) exists for each θ:
(V4) For a globally Lipschitz continuous and C^1 function V : ℝ^d → ℝ_+, and a constant δ_v > 0,
(d/dt) V(ϑ_t) ≤ −δ_v V(ϑ_t), when ‖ϑ_t‖ ≥ δ_v^{−1}.
The use of the designation "V4" comes from an analogous bound appearing in the stability theory of Markov chains <cit.>.
It is shown in <cit.> that (<ref>) holds provided the ODE@∞ is locally asymptotically stable, and {Δ_n} appearing in (<ref>) is a martingale difference sequence.
This statistical assumption does not hold in many applications of reinforcement learning. Relaxations of the assumptions of <cit.> are given in <cit.>, but the story is far from complete.
A generalization appeared recently in <cit.> that requires minimal assumptions on the Markov chain (there is no need for a finite state space). Conclusions obtained under the assumptions imposed here are summarized in the following theorem.
Theorem. Suppose that (SA1) and (SA2) hold for the SA recursion (<ref>), and in addition that the origin is locally asymptotically stable for the ODE@∞, or that (V4) holds. Then:
(i) The bound (<ref>) holds in a strong sense: there is a fixed constant B_θ such that for each initial condition (θ_0, Φ_0),
lim sup_{n→∞} ‖θ_n‖ ≤ B_θ a.s.
(ii) If in addition (SA3) holds, then lim_{n→∞} θ_n = θ^* almost surely from each initial condition.
(iii) Suppose that (SA1)–(SA4) hold, and that α_n = g n^{−ρ}, n ≥ 1, with 1/2 < ρ < 1 and g > 0. We then have convergence in mean square, and the following limits exist and are finite:
lim_{n→∞} (1/α_n) E[θ̃_n θ̃_n^⊺] = Σ_θ,
lim_{n→∞} n E[θ̃_n^{PR} (θ̃_n^{PR})^⊺] = Σ^{PR}.
The covariance matrix Σ^{PR} is minimal in a matricial sense, made precise in <cit.>. It has the explicit form Σ^{PR} = G Σ_Δ^* G^⊺, in which G = −(A^*)^{−1} is the stochastic Newton-Raphson gain of Ruppert <cit.>, and Σ_Δ^* is the asymptotic covariance
Σ_Δ^* = ∑_{k=−∞}^{∞} E_π[Δ_k^* (Δ_k^*)^⊺],
where Δ_k^* ≔ f(θ^*, Φ_k), k ∈ ℤ, with Φ a stationary version of the Markov chain on the two-sided time interval.
A criterion for stationary points
The existence of a suitable Lyapunov function implies the existence of a stationary point.
Proposition. For an ODE (<ref>) with globally Lipschitz continuous vector field f̄, suppose there is a function V : ℝ^d → ℝ_+ with locally Lipschitz continuous gradient, satisfying for some r_0 > 0,
∇V(θ)^⊺ f̄(θ) ≤ −1, whenever ‖θ‖ ≥ r_0.
Suppose moreover that V is convex and coercive. Then there exists a solution to f̄(θ^*) = 0.
Proof. Let L_δ(θ) = θ + δ f̄(θ) for θ ∈ ℝ^d, with δ > 0 to be chosen.
For δ > 0 sufficiently small we construct a convex and compact set S_δ for which L_δ(θ) ∈ S_δ for each θ ∈ S_δ. It follows from Brouwer's fixed-point theorem that there is a solution to L_δ(θ^*) = θ^*, which is equivalent to the desired conclusion f̄(θ^*) = 0.
Denote b_δ = sup{V(L_δ(θ)) : ‖θ‖ ≤ r_0} and S_δ = {θ : V(θ) ≤ b_δ}, a convex and compact set subject to the assumptions on V.
We next show that S_δ is invariant under L_δ if δ is small. We consider two cases, based on whether or not θ lies in the set S = {θ : ‖θ‖ ≤ r_0}.
1. If θ ∈ S_δ ∩ S, then L_δ(θ) ∈ S_δ by construction of S_δ.
2. If θ ∈ S_δ ∖ S, then we apply convexity combined with the drift condition: denoting θ^+ = L_δ(θ),
V(θ) ≥ V(θ^+) + ∇V(θ^+)^⊺ (θ − θ^+)
= V(θ^+) − δ ∇V(θ^+)^⊺ f̄(θ).
Since the gradient ∇V is locally Lipschitz continuous and f̄ is globally Lipschitz continuous, there is b_v satisfying
V(θ) ≥ V(θ^+) − δ ∇V(θ)^⊺ f̄(θ) − b_v δ^2, θ ∈ S_δ ∖ S.
The value of b_v can be chosen independent of δ ∈ (0, 1].
Under the assumed drift condition this gives V(θ^+) ≤ V(θ) − δ + b_v δ^2. Choosing δ = 1/b_v gives V(θ^+) ≤ V(θ) ≤ b_δ, in which the second inequality holds because θ ∈ S_δ. Hence L_δ(θ) = θ^+ ∈ S_δ, as desired.
§.§ Compatible assumptions for Q-learning
The basic Q-learning algorithm (<ref>) is an instance of stochastic approximation, for which we can apply general theory subject to assumptions on the input used for training (recall (<ref>)).
Two settings are considered:
Oblivious training
This means that (<ref>) simplifies to
U_k = ϕ̃(X_k, I_k), k ≥ 0,
in which it is always assumed that {I_k} is i.i.d.
It follows that the pair process {(X_k, U_k) : k ≥ 0} is a time-homogeneous Markov chain. It is assumed to be uni-chain (i.e., the invariant pmf is unique).
In the expression f_{n+1}(θ_n) = f(θ_n, Φ_{n+1}) we take Φ_k = (X_k; X_{k+1}; U_k), k ≥ 0, which is also a time-homogeneous Markov chain; its invariant pmf is also unique and easily expressed in terms of the invariant pmf of (X_k, U_k) and the controlled transition matrix.
If the function class is linear, {Q^θ = θ^⊺ψ : θ ∈ ℝ^d}, then the autocorrelation matrix is assumed full rank:
R_0 = E_π[ψ(X_n, U_n) ψ(X_n, U_n)^⊺],
where the expectation is taken in steady-state.
Optimistic training
In this non-oblivious approach the input sequence depends on the parameter sequence, and is designed to approximate the Q-greedy policy (<ref>). There are only a finite number of deterministic stationary policies, so ϕ^θ is necessarily discontinuous in θ.
The region on which continuity holds is denoted
Θ ≔ {θ ∈ ℝ^d : there is δ > 0 s.t. ϕ^θ(x) = ϕ^{θ'}(x) for all x when ‖θ − θ'‖ ≤ δ}.
The approximations involve the construction of a family of randomized stationary policies {ϕ̃^θ(u | x) : θ ∈ ℝ^d}, in which ϕ̃^θ(· | x) is a pmf on 𝖴 for each x. We then take
𝖯{U_k = u | X_i, U_i : i < k; B_i : i ≤ k; X_k = x} = ϕ̃^{θ_k}(u | x).
In the following three special cases, the input may be expressed in the form
U_k = (1 − B_k) Ũ_k + B_k W_k,
in which {B_k} is an i.i.d. Bernoulli sequence with 𝖯{B_k = 1} = ε, and {W_k} is an i.i.d. sequence taking values in 𝖴 and independent of {B_k}. The 𝖴-valued random variable Ũ_k depends on the parameter θ_k, and is independent of (B_k; W_k) for each k.
1. ε-greedy.
Recalling the definition of ϕ^θ in (<ref>), the ε-greedy policy is defined by the choice Ũ_k = ϕ^{θ_k}(X_k), so that
ϕ̃^θ(u | x) = (1 − ε) 𝟙{u = ϕ^θ(x)} + ε ν(u),
with ν the common pmf for {W_k}.
With this choice, the mean flow has many attractive properties (see <Ref> in the Appendix).
However, because {ϕ^θ : θ ∈ ℝ^d} is a piecewise constant function of θ, it follows that the vector field f̄ is not continuous in θ, as required in <Ref>.
2. Gibbs approximation
Fix a large constant κ > 0 and define
ϕ̃^θ(u | x) = (1/Z^θ_κ(x)) exp(−κ Q^θ(x, u)),
in which Z^θ_κ(x) is a normalization constant.
This is indeed an approximation of (<ref>): for θ^0 ∈ Θ and θ^r ≔ r θ^0,
lim_{r→∞} (1/Z^{θ^r}_κ(x)) exp(−κ Q^{θ^r}(x, u)) = 𝟙{u = ϕ^{θ^0}(x)}.
The limit (<ref>) has two important implications. First, the vector field for the ODE@∞ is unchanged whether we consider (<ref>) or its smooth approximation (<ref>). Second, discontinuity of this limit implies that f̄ is not globally Lipschitz continuous, which violates an assumption of <Ref>.
3. Tamed Gibbs approximation
This is a modification of (<ref>) in which κ depends on θ:
ϕ̃^θ(u | x) = (1/Z^θ_{κ_θ}(x)) exp(−κ_θ Q^θ(x, u)).
For analysis the following structure is helpful: choose a large constant κ_0 > 0, and assume that
κ_θ = κ_0/‖θ‖ for ‖θ‖ ≥ 1, and κ_θ ≥ κ_0 otherwise.
This will be called the (ε, κ_0)-tamed Gibbs policy when it is necessary to make the policy parameters explicit.
The equality in (<ref>) ensures that the following identity holds for all x, u:
ϕ̃^{rθ}(u | x) = ϕ̃^θ(u | x) for all r ≥ 1 and ‖θ‖ ≥ 1.
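A compact sketch of two of these training policies (our illustrative code, not from the paper): Q is an array with Q[x, u] = Q^θ(x, u), and theta_norm stands in for ‖θ‖.

```python
import numpy as np

def eps_greedy(Q, x, eps, rng):
    """(1 - eps) mass on the Q-greedy action, eps spread uniformly."""
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))   # exploration step W_k
    return int(np.argmin(Q[x]))                # greedy choice phi^theta(x)

def tamed_gibbs(Q, x, theta_norm, kappa0, rng):
    """kappa_theta = kappa0/||theta|| for ||theta|| >= 1, so the pmf is
    invariant under theta -> r theta for r >= 1."""
    kappa = kappa0 / max(theta_norm, 1.0)
    w = np.exp(-kappa * (Q[x] - Q[x].min()))   # shift for numerical stability
    return int(rng.choice(Q.shape[1], p=w / w.sum()))
```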
The Q-learning algorithm (<ref>) can be cast as stochastic approximation when the input is defined using any of the training policies described above, in which we take Φ_{n+1} = (X_n, X_{n+1}, U_n), since these three variables appear in (<ref>).
It is assumed in <Ref> that Φ is exogenous: its transition matrix does not depend on the parameter sequence.
Fortunately, there is now well-developed theory that allows for parameter-dependent dynamics for Φ in the SA recursion (<ref>); see the recent paper <cit.> for history and recent results. In particular, the theory of convergence and asymptotic statistics is now mature.
The question is then, how can we apply SA theory to make statements about convergence and convergence rates?
§ TROUBLE WITH TABULAR
§.§ Linear function approximation
In this section we restrict to a linearly parameterized family {Q^θ ≔ θ^⊺ψ : θ ∈ ℝ^d}, where ψ : 𝖷 × 𝖴 → ℝ^d is a vector of basis functions.
To avoid long equations we often use the shorthand notation
c_n = c(X_n, U_n), ψ_n = ψ(X_n, U_n), f_{n+1}(θ_n) = f(θ_n, Φ_{n+1}).
In the recursion (<ref>) we then have ζ_n = ∇_θ Q^θ(X_n, U_n) = ψ_n.
When considering optimistic policies we encounter an additional complication in the description of the vector field for the mean flow. If the input is of the form (<ref>), then for each θ we consider the resulting transition matrix for the joint process {(X_k,U_k) : k ≥ 0} defined by
T_θ(z,z') = P_u(x,x') φ̃^θ(u' | x'), z=(x,u), z'=(x',u') ∈ 𝖷 × 𝖴,
where P is the controlled transition matrix. It is assumed that each T_θ admits a unique invariant pmf ϖ_θ.
Q-learning in the form (<ref>) is an instance of stochastic approximation, with mean flow
f̄(θ) = E_{ϖ_θ}[ψ_n 𝒟(X_n,U_n;θ)],
𝒟(x,u;θ) = c(x,u) − Q^θ(x,u) + β Σ_{x'} P_u(x,x') Q̲^θ(x'), where Q̲^θ(x) := min_u Q^θ(x,u).
An alternative formula is valuable for analysis,
f̄(θ) = A(θ)θ − b(θ)
with
A(θ) = −E_{ϖ_θ}[ψ_n{ψ_n − βψ(X_{n+1}, ϕ^θ(X_{n+1}))}^⊤], b(θ) = −E_{ϖ_θ}[ψ_n c_n]
The projected Bellman equation (<ref>) is precisely the root-finding problem f̄(θ*) = 0.
Any choice of oblivious policy fits the standard SA theory, with f̄ globally Lipschitz continuous.
The tamed Gibbs approximation is the only choice among the optimistic training rules for which f̄ satisfies the smoothness conditions required in <Ref>.
In the remainder of this section we restrict to oblivious training.
§.§ Tabular Q-learning, the good and the bad
In the tabular setting we have d = |𝖷| × |𝖴|: given an ordering of the state-action pairs {(x^i,u^i) : 1 ≤ i ≤ d} we take, for each i,
ψ_i(x,u) = 1{(x,u) = (x^i,u^i)}, x ∈ 𝖷, u ∈ 𝖴
In view of (<ref>) we find that only one entry of the parameter is updated at each iteration. It is typical to use a diagonal matrix gain,
θ_{n+1} = θ_n + α_{n+1} G_n 𝒟_{n+1} ψ_n, 𝒟_{n+1} := c(X_n,U_n) − Q^{θ_n}(X_n,U_n) + β Q̲^{θ_n}(X_{n+1}),
in which G_n^{-1}(i,i) indicates the number of times the pair (x^i,u^i) is visited up to time n (set to unity when this is zero).
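A minimal sketch of this recursion follows (our own naming, cost-minimization convention assumed):

```python
import numpy as np

def tabular_q_matrix_gain(P, c, beta, policy, n_steps, rng, x0=0):
    """Watkins' Q-learning with the diagonal matrix gain G_n:
    each entry (x, u) uses step-size 1/N_n(x, u), the visit count.

    P:      P[u] is the |X| x |X| transition matrix under action u
    c:      |X| x |U| cost table; policy(Q, x) samples the training input
    """
    n_x, n_u = c.shape
    Q = np.zeros((n_x, n_u))           # Q[x, u] = theta(i) for i = (x, u)
    counts = np.zeros((n_x, n_u))
    x = x0
    for _ in range(n_steps):
        u = policy(Q, x)
        x_next = int(rng.choice(n_x, p=P[u][x]))
        counts[x, u] += 1
        td = c[x, u] + beta * Q[x_next].min() - Q[x, u]   # D_{n+1}
        Q[x, u] += td / counts[x, u]   # alpha_{n+1} G_n reduces to 1/N(x,u)
        x = x_next
    return Q
```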
Observe that by definition Q^{θ_n}(x^i,u^i) = θ_n(i). Adopting the notation q_t instead of ϑ_t for the ODE state in the mean flow (<ref>) associated with the matrix-gain recursion (<ref>), we have
(d/dt) q_t = A(q_t) q_t − b
with b the d-dimensional vector with entries b_i = −c(x^i,u^i).
The matrix-valued function A is piecewise constant:
A(q) = −[I − βT(q)], T^{i,j}(q) = P_{u^i}(x^i, x^j) 1{u^j = ϕ^q(x^j)}
The good news:
The statistical properties of the algorithm are attractive because {Δ_n+1} appearing in (<ref>) is a martingale difference sequence in
the tabular setting.
The best news is stability: write A(q) = −[I − Z(q)] with Z(q) := βT(q).
The induced operator norm of Z(q) in ℓ_∞ is strictly less than one: max_i |Σ_j Z_{i,j}(q) v_j| ≤ β max_i |v_i| for any vector v and any q. It follows that the ℓ_∞ norm serves as a Lyapunov function:
letting V(q) = ‖q − θ*‖_∞,
(d/dt) V(q_t) ≤ −(1−β) V(q_t)
This is how convergence is established for tabular Q-learning.
The bad:
The matrix I − Z(q_t) has an eigenvalue at (1−β) for all t, which is a reason for slow convergence when the discount factor β is close to unity.
It is now known that the asymptotic covariance appearing in (<ref>) is not finite if β > 1/2 <cit.> (see also the sample complexity analysis that followed in <cit.>).
A running example in this prior work and <cit.> is the stochastic-shortest-path problem whose state transition diagram is shown on the left-hand side of <Ref>. The state space 𝖷 = {1,…,6} coincides with the six nodes of the un-directed graph, and the action space is 𝖴 = {e_{x,x'} : x, x' ∈ 𝖷}, indicating decisions on moves. Details on the description of disturbances can be found in <cit.>.
<Ref> shows results without the matrix gain. With β = 0.8 the output of the standard Q-learning algorithm is worthless after one million samples. The matrix gain does offer some benefit (see plots in the next section), but convergence remains very slow for β > 1/2 using α_n = 1/n.
ODE@∞
<Ref> may be applied to Q-learning (<ref>) in this tabular setting, and the theorem easily extends to the case of the matrix gain algorithm (<ref>).
It is clear that (SA1) holds, and (SA2) holds for oblivious training as assumed here. As already remarked, it is not difficult to establish stability of (<ref>), which gives (SA3).
The ODE@∞ associated with (<ref>) is a minor modification:
(d/dt) x_t = A(x_t) x_t
We have (d/dt)‖x_t‖_∞ ≤ −(1−β)‖x_t‖_∞, which implies that the ODE@∞ is stable as required in <Ref>.
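The contraction can be seen numerically. Below is a minimal Euler integration of the tabular mean flow on a randomly generated MDP; it is our own illustration, and all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_u, beta = 6, 3, 0.9
P = [rng.dirichlet(np.ones(n_x), size=n_x) for _ in range(n_u)]  # P[u][x, x']
c = rng.uniform(0, 1, (n_x, n_u))

def mean_flow_step(q, dt=0.01):
    # Euler step of dq/dt = -[I - beta*T(q)]q + c, written on the Q-table:
    # (dq/dt)(x,u) = c(x,u) + beta * sum_x' P_u(x,x') min_u' q(x',u') - q(x,u)
    v = q.min(axis=1)                                   # underbar-Q(x)
    Tq = np.stack([P[u] @ v for u in range(n_u)], axis=1)
    return q + dt * (c + beta * Tq - q)

q = np.zeros((n_x, n_u))
for _ in range(5000):
    q = mean_flow_step(q)
# The sup-norm distance to the fixed point contracts at rate at least (1 - beta).
```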
§.§ Change your goals
Recall that the asymptotic covariance Σ̄ defined in (<ref>) is not finite for Q-learning in the form (<ref>) with step-size α_n = g/n using g < (2(1−β))^{-1}, which explains the poor performance illustrated in <Ref>.
A reader with experience in SA would counter that this is a poor choice of step-size: use instead α_n = 1/n^ρ, with ρ ∈ (1/2, 1), and then average using (<ref>) to obtain the Polyak-Ruppert estimates. It is found that averaging fails for this example for large discount factors, even though it is known that these estimates achieve the optimal asymptotic covariance <cit.>.
The observed numerical instability is a consequence of the eigenvalue at −(1−β) for A*.
This can be moved through a change in objective. For example, construct an algorithm that estimates the relative Q-function,
H*(x,u) = Q*(x,u) − ⟨μ, Q*⟩
where μ is a fixed pmf on 𝖷 × 𝖴 and ⟨μ, Q*⟩ = Σ_{x,u} μ(x,u) Q*(x,u). Subtracting a constant doesn't change the minimizer over u, and has enormous benefits.
The function H* satisfies a DP equation similar to (<ref>), which motivates relative Q-learning. It is shown in <cit.> that the eigenvalues of A* remain bounded away from the imaginary axis uniformly for all 0 ≤ β ≤ 1, resulting in much faster convergence.
See <cit.> for generalizations.
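A tabular sketch of the idea follows. The update form shown and the parameter kappa are our assumptions for illustration only; the precise algorithm and the choice of kappa are given in the cited work.

```python
import numpy as np

def relative_q_update(H, x, u, x_next, cost, beta, kappa, mu, alpha):
    """One tabular relative Q-learning step (a sketch of the idea only).

    H:  |X| x |U| table;  mu: pmf over (x, u) pairs, shape |X| x |U|.
    The rank-one correction -kappa*<mu, H> shifts the troublesome
    eigenvalue at -(1-beta) away from the origin; the fixed point differs
    from Q* by a constant, so its greedy policy is unchanged.
    """
    avg = float((mu * H).sum())                      # <mu, H>
    td = cost + beta * (H[x_next].min() - kappa * avg) - H[x, u]
    H[x, u] += alpha * td
    return H
```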
<Ref> is adapted from <cit.> for the six-state example. The plots show the span semi-norm error ‖Q^θ − Q*‖_S for three algorithms, and two very large discount factors: β = 0.999 and β = 0.9999. The plots illustrate two important points:
1. Q-learning with a smaller gain converges quickly to Q* when measured in the span semi-norm
‖Q^θ − Q*‖_S := min_{a∈ℝ} sup_{x,u} |Q^θ(x,u) − Q*(x,u) − a|, θ ∈ ℝ^d
2. Convergence of relative Q-learning is very fast in this example, even with discount factor close to unity.
It appears that the span norm difference between estimates obtained using Q-learning and relative Q-learning is very small. This observation may be anticipated by comparing the respective mean flows <cit.>.
§.§ Beyond tabular
Baird's 1995 paper <cit.> is the first to point out potential trouble with temporal difference methods outside of ideal settings. Although the focus is TD-learning, the message applies here, since these algorithms may be interpreted as Q-learning with 𝖴 a singleton. Other examples of unstable Q-learning with linear function approximation may be found in Gordon <cit.>.
Baird's paper introduced the famous “star” counterexample in which the state space is finite, with a single absorbing state.
Exploration is achieved by restarting the initial condition randomly on reaching the absorbing state.
It is found that the TD-learning algorithm is unstable from certain initial conditions when the discount factor β is sufficiently large.
The paper also proposes residual algorithms as alternatives to the projected Bellman error as a criterion of fit. Baird's proposal is similar to minimization of the mean-square Bellman error defined by
ℒ(θ) := E[ℬ^θ(X_n,U_n)²]
with the expectation in steady state, and Bellman error ℬ^θ defined in (<ref>).
The associated stochastic gradient descent algorithm is the SA recursion
θ_{n+1} = θ_n − α_{n+1}{ℬ^θ(X_n,U_n) ∇_θℬ^θ(X_n,U_n)}|_{θ=θ_n}
See <cit.> for more recent theory for this approach.
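The following is a sketch of one such gradient step, under the assumption that the model P is available so the Bellman error and its gradient can be evaluated exactly (which side-steps the double-sampling issue of the purely sample-based version); all names are ours.

```python
import numpy as np

def msbe_sgd_step(theta, psi, P, c, beta, x, u, alpha):
    """One gradient step on L(theta) = E[B^theta(X,U)^2] (illustration only).

    psi(x, u): feature vector in R^d;  P[u]: |X| x |X|;  c: |X| x |U| costs.
    Cost-minimization convention, matching the Bellman error in the text.
    """
    n_x, n_u = c.shape
    Q = np.array([[theta @ psi(xx, uu) for uu in range(n_u)]
                  for xx in range(n_x)])
    amin = Q.argmin(axis=1)                    # greedy actions phi^theta(x')
    berr = Q[x, u] - c[x, u] - beta * (P[u][x] @ Q.min(axis=1))
    grad = psi(x, u) - beta * sum(
        P[u][x, x2] * psi(x2, int(amin[x2])) for x2 in range(n_x))
    return theta - alpha * berr * grad
```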
It is pointed out in <cit.> that this algorithm is slow to converge when β is close to unity. This is explained by an eigenvalue analysis, similar to the analysis of Watkins' algorithm.
However, the situation becomes worse: in the tabular setting, there is an eigenvalue near −(1−β)² when β ∼ 1.
The GQ learning algorithm of <cit.> follows a similar approach, designed to minimize
Γ(θ) = ½ f̄(θ)^⊤ M f̄(θ)
with M a positive definite matrix. The gradient flow is defined by
(d/dt)ϑ_t = −∇Γ(ϑ_t) = −A(ϑ_t)^⊤ M f̄(ϑ_t)
It again suffers from slow convergence because of an eigenvalue of ∇²Γ(θ*) near (1−β)² in the tabular setting.
It is likely that these numerical challenges can be resolved by adapting the techniques used to create relative Q-learning algorithms.
§ ZAP
Here the tabular setting is abandoned, and we do not even require linear function approximation.
We maintain the assumption that the input for training is oblivious.
If our goal is to ensure that f̄(ϑ_t) → 0 as t → ∞, then we should design dynamics to ensure this. One approach, the focus of Devraj's dissertation <cit.> and a focus of the monograph <cit.>, is the Newton-Raphson flow:
(d/dt) f̄(ϑ_t) = −f̄(ϑ_t) ⟹ f̄(ϑ_t) = e^{−t} f̄(ϑ_0)
From the chain rule this results in the mean flow dynamics,
(d/dt)ϑ_t = −G_t f̄(ϑ_t), G_t = [A(ϑ_t)]^{-1}
where A(θ) is defined in (<ref>).
Zap stochastic approximation. This is a two time-scale algorithm introduced in <cit.>.
For initialization θ_0 ∈ ℝ^d and Â_0 ∈ ℝ^{d×d}, obtain the sequence of estimates {θ_n : n ≥ 0} recursively:
θ_{n+1} = θ_n − α_{n+1} Â_{n+1}^{-1} f(θ_n, Φ_{n+1})
Â_{n+1} = Â_n + γ_{n+1}[A_{n+1} − Â_n], A_{n+1} := ∂_θ f_{n+1}(θ_n).
The two gain sequences {α_n} and {γ_n} satisfy (<ref>).
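A minimal sketch of the recursion is below, assuming user-supplied routines for the noisy field and its Jacobian; the least-squares solve in place of an explicit inverse is our guard for early iterations, not part of the algorithm as stated.

```python
import numpy as np

def zap_sa(f, jacobian_f, theta0, n_steps, rho=0.85):
    """Zap SA (sketch): alpha_n = 1/n, gamma_n = 1/n^rho, rho in (1/2, 1).

    f(theta, n):          returns f(theta, Phi_{n+1})
    jacobian_f(theta, n): returns A_{n+1} = d/dtheta f_{n+1}(theta)
    """
    theta = np.array(theta0, dtype=float)
    d = theta.size
    A_hat = np.eye(d)                      # A-hat_0
    for n in range(1, n_steps + 1):
        alpha, gamma = 1.0 / n, 1.0 / n**rho
        A_hat += gamma * (jacobian_f(theta, n) - A_hat)
        step, *_ = np.linalg.lstsq(A_hat, f(theta, n), rcond=None)
        theta -= alpha * step              # slow time-scale update
    return theta
```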
The original motivation was to optimize the rate of convergence, which it does under mild assumptions using α_n = 1/n:
lim_{n→∞} n E[θ̃_n θ̃_n^⊤] = Σ̄, θ̃_n := θ_n − θ*
The choice of gain g is critical with the choice α_n = g/n:
1. Σ̄ is finite when g > 1/2, but optimal only if g = 1.
2. trace(Σ̄) = ∞ if 0 < g < 1/2 and Σ_Δ^* is full rank.
It was discovered in later research that the positive results hold even for nonlinear function approximation
<cit.>.
Hence the greatest value of the matrix gain is the creation of a universally stable algorithm.
In the applications to Q-learning considered here, the accuracy of a parameter estimate θ may be measured in terms of the Bellman error and its maximum:
ℬ^θ(x,u) = Q^θ(x,u) − c(x,u) − β Σ_{x'∈𝖷} P_u(x,x') min_{u'} Q^θ(x',u')
ℬ̄^θ = max_{x,u} |ℬ^θ(x,u)|
Given that the CLT holds for √n θ̃_n, the sequence {√n ℬ̄^{θ_n}} also converges in distribution as n → ∞.
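Computing the maximum Bellman error of a Q-table is straightforward; here is a short sketch under our own naming (cost/min convention, matching the display above).

```python
import numpy as np

def max_bellman_error(Q, P, c, beta):
    """Return max_{x,u} |B^theta(x,u)| for a tabular Q (sketch)."""
    v = Q.min(axis=1)                              # min_u' Q(x', u')
    B = np.stack([Q[:, u] - c[:, u] - beta * (P[u] @ v)
                  for u in range(Q.shape[1])], axis=1)
    return float(np.abs(B).max())
```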
<Ref> is taken from <cit.>, which contains full details on the experiments.
Plots show the empirical mean and 2σ confidence intervals for β = 0.8 in row 1, and β = 0.99 in row 2. The algorithms considered in the second column are explained here:
PR: Estimates obtained from Q-learning and averaging (<ref>), with step-size α_n = 1/n^{0.85}.
SNR: The single time-scale variant of Zap Q-learning using α_n = γ_n = 1/n (for which no theory is available).
Zap: Zap Q-learning using α_n = 1/n and γ_n = 1/n^{0.85}.
<Ref> shows histograms of {ℬ̄^{θ_n^i} : 1 ≤ i ≤ N}, n = 10^6, for all six algorithms; this corresponds to the data shown in <Ref> at n = 10^6.
Once again, full details may be found in <cit.>.
§ ZAP ZERO
If we denote w_t = G_t f̄(ϑ_t) in the notation of (<ref>), then equivalently A(ϑ_t) w_t − f̄(ϑ_t) = 0 for all t.
The Zap Zero algorithm of <cit.> is designed to achieve this constraint without matrix inversion.
The ODE method for design suggests the 2d-dimensional ODE
(d/dt)ϑ_t = −w_t
(d/dt)w_t = g_t [A(ϑ_t) w_t − f̄(ϑ_t)]
in which the time-varying gain g_t is introduced in anticipation of a two time-scale SA translation.
The time-inhomogeneous ODE (<ref>) is stable provided g_t ↑ ∞ as t → ∞, and in addition (SA4) holds with A(θ) Hurwitz for each θ.
A universally stable algorithm
A third state variable is introduced for reasons to be explained when we consider the SA translation. Fix M > 0, an arbitrary positive definite matrix, and consider the ODE
(d/dt)ϑ_t = −[ϑ_t + L_t w_t]
(d/dt)w_t = −g_t [A(ϑ_t){ϑ_t + z_t} − f̄(ϑ_t)]
(d/dt)z_t = −g_t [z_t − L_t w_t], L_t = M A(ϑ_t)^⊤
Assuming once more that g_t ↑ ∞ as t → ∞, singular perturbation theory (e.g. <cit.>) provides methodology for verification of stability of (<ref>), proceeding in two steps:
1. Consider the pair of ODEs with the slow variable frozen:
(d/dt)w^θ_t = −A(θ){θ + z^θ_t} + f̄(θ)
(d/dt)z^θ_t = −z^θ_t + L(θ) w^θ_t, L(θ) = M A(θ)^⊤
The gain g_t has been removed via a time transformation. For stability analysis it is more convenient to write
(d/dt)[w^θ_t; z^θ_t] = [0, −A(θ); M A(θ)^⊤, −I][w^θ_t; z^θ_t] + [f̄(θ) − A(θ)θ; 0]
This is a linear system with constant input. It is stable because any eigenvalue λ of the state matrix solves the equation
λ² + λ + μ_+ = 0
for some eigenvalue μ_+ > 0 of the positive definite matrix A(θ) M A(θ)^⊤. Any solution to this equation lies in the strict left half plane of ℂ: the two roots have product μ_+ > 0 and sum −1, so real roots are both negative, while complex roots have real part −1/2.
The equilibrium (w^θ_∞; z^θ_∞) of (<ref>) satisfies
0 = −z^θ_∞ + L(θ) w^θ_∞, 0 = −A(θ){θ + z^θ_∞} + f̄(θ)
2. The equilibrium for (<ref>) is substituted into the dynamics for the slow variable to obtain the approximation x_t ≈ ϑ_t with
(d/dt)x_t = −[θ + L(θ) w^θ_∞]|_{θ = x_t}
The equilibrium equations (<ref>) imply that θ + L(θ) w^θ_∞ = [A(θ)]^{-1} f̄(θ) for all θ, so that we recover the Newton-Raphson flow,
(d/dt)x_t = −[A(x_t)]^{-1} f̄(x_t).
SA Translation
The 2023 version of the Zap Zero SA algorithm is defined by the 3d-dimensional recursion motivated by the ODE (<ref>).
For initialization θ_0, w_0, z_0 ∈ ℝ^d, obtain the sequence of estimates recursively:
θ_{n+1} = θ_n − α_{n+1}[θ_n + L_{n+1} w_n]
w_{n+1} = w_n − γ_{n+1}[A_{n+1}{θ_n + z_n} − f_{n+1}(θ_n)]
z_{n+1} = z_n − γ_{n+1}[z_n − L_{n+1} w_n]
L_{n+1} = M A_{n+1}^⊤
where as above A_{n+1} := ∂_θ f_{n+1}(θ_n).
The two gain sequences {α_n} and {γ_n} satisfy (<ref>).
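A direct translation of the recursion to code is short; the sketch below takes M to be the identity for simplicity, and the function names are our assumptions.

```python
import numpy as np

def zap_zero(f, jacobian_f, theta0, n_steps, rho=0.85):
    """Zap Zero (sketch): matrix-inversion-free 3d-dimensional recursion,
    with alpha_n = 1/n and gamma_n = 1/n^rho, rho in (1/2, 1); M = I."""
    theta = np.array(theta0, dtype=float)
    d = theta.size
    w = np.zeros(d)
    z = np.zeros(d)
    for n in range(1, n_steps + 1):
        alpha, gamma = 1.0 / n, 1.0 / n**rho
        A = jacobian_f(theta, n)          # A_{n+1}
        L = A.T                           # L_{n+1} = M A^T with M = I
        theta_next = theta - alpha * (theta + L @ w)
        w_next = w - gamma * (A @ (theta + z) - f(theta, n))
        z_next = z - gamma * (z - L @ w)
        theta, w, z = theta_next, w_next, z_next
    return theta
```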
Why is there a need for dimension 3d? Theory predicts that z_n ≈ M A(θ_n)^⊤ w_n for large n, which motivates elimination of {z_n} to obtain
θ_{n+1} = θ_n − α_{n+1}[θ_n + M Â_{n+1}^⊤ w_n]
w_{n+1} = w_n − γ_{n+1}[A_{n+1}{θ_n + M Â_{n+1}^⊤ w_n} − f_{n+1}(θ_n)]
A third recursion would then be required to construct the matrix sequence {Â_{n+1}}, which is why (<ref>) is a much simpler recursion.
The assumptions in the following are adapted from <cit.> in their treatment of Zap SA.
Theorem. Suppose that the following hold:
(i) Assumptions (SA1) and (SA2).
(ii) The derivative (<ref>) is a continuous function of θ, satisfying det(A(θ)) ≠ 0 for all θ.
(iii) The function ‖f̄( · )‖ is coercive: lim_{‖θ‖→∞} ‖f̄(θ)‖ = ∞.
Then, there is a unique solution to f̄(θ*) = 0, and the Zap Zero algorithm (<ref>) is convergent for each initial condition:
lim_{n→∞} θ_n = θ*, lim_{n→∞} w_n = w*_∞, lim_{n→∞} z_n = z*_∞ a.s.,
with w*_∞ = w^θ_∞ and z*_∞ = z^θ_∞ evaluated at θ = θ*.
As in the Newton-Raphson flow, the coerciveness assumption is imposed to ensure that convergence f̄(ϑ_t) → 0 as t → ∞ implies that {ϑ_t} is bounded.
We then conclude that any limit point is a root of f̄, and uniqueness of θ* quickly follows.
Convergence of (<ref>) then
follows from standard theory of two time-scale stochastic approximation <cit.>.
Many generalizations are possible. In particular, it is shown in <cit.> that invertibility of A(θ) for all θ is not required, but may be replaced with the following: it is assumed that A(θ) f̄(θ) = 0 holds only if f̄(θ) = 0. This justifies a modification of the Newton-Raphson flow in which [A(ϑ_t)]^{-1} f̄(ϑ_t) uses a pseudo-inverse when the matrix is not invertible. It is also shown in this prior work that A need not be continuous in applications to Q-learning. Extending these generalizations to Zap Zero is a topic for research, but is not expected to be a big challenge.
Unfortunately we do not yet know how to verify all of the assumptions of <Ref> for Q-learning, in which f̄ is defined in (<ref>), even in the relaxed form described in the previous paragraph.
This means the existence of a solution to the projected Bellman equation remains an open topic of research when using oblivious training.
§ STABILITY WITH OPTIMISM
The theory surveyed in the preceding sections imposed oblivious training. In the case of Watkins' Q-learning this assumption was imposed in part for historical reasons, though we will see that the analysis is somewhat more complex when we consider parameter-dependent policies. The technical challenges for Zap Q-learning are far more interesting, because the definition of the linearization A(θ) is not obvious.
§.§ Challenge with Zap
Based on theory surrounding the actor-critic method (see <cit.>), we have for a policy of the form (<ref>),
∂_θ f̄(θ) = E_{ϖ_θ}[A_n(θ)] + E_{ϖ_θ}[Υ_n(θ) Λ_n(θ)^⊤]
in which A_n(θ) = ∂_θ f_n(θ). The expectations are in steady state under ϖ_θ (recall the discussion surrounding (<ref>)).
The second expectation involves the score function associated with the randomized policy,
Λ_n(θ) = ∇_θ log φ̃^θ(u | x)|_{u=U_n, x=X_n}
The function Υ_n solves a certain Poisson equation. Alternatively, if the transition matrix T_θ is aperiodic, then for a stationary realization of {X_n, U_n : n ≥ 0} we have
E_{ϖ_θ}[Υ_n(θ) Λ_n(θ)^⊤] = Σ_{k=0}^∞ E_{ϖ_θ}[[f_{n+k}(θ) − f̄(θ)] Λ_n(θ)^⊤]
We have had great empirical success with Zap Q-learning using ε-greedy policies <cit.>.
However, the algorithm has been applied in the form (<ref>), so that the second term in (<ref>) is abandoned. A likely explanation is that for the ε-greedy policy (<ref>) the randomized policy φ̃^θ is piecewise constant on ℝ^d, so that the score function is zero for almost every θ.
The challenge is that we require estimates of ∂_θ f̄(θ_n) in any of the Zap algorithms.
This can be accomplished by adopting concepts from actor-critic algorithms, but this is beyond the scope of this paper.
§.§ Stability with linear function approximation
The main result of this section shows how exploration using a policy of the form (<ref>) encourages stability of the Q-learning algorithm (<ref>). Analysis requires the family of autocorrelation matrices
R_0^φ(θ) = E_{ϖ_θ}[ψ(X_n, ϕ^θ(X_n)) ψ(X_n, ϕ^θ(X_n))^⊤]
R_0^ν(θ) = E_{ϖ_θ}[ψ(X_n, ξ_n) ψ(X_n, ξ_n)^⊤]
R(θ) = E_{ϖ_θ}[ψ_n ψ_n^⊤] = (1−ε) R_0^φ(θ) + ε R_0^ν(θ)
The expectations are in steady state, with stationary pmf ϖ_θ induced by the randomized stationary policy with fixed parameter.
A special case is considered in the assumptions, in which we take ε = 1; the randomized policy is then denoted φ̃^ν, giving φ̃^ν(u | x) = ν(u) for all x, u. The (assumed unique) invariant pmf is denoted ϖ_ν, and the autocorrelation matrix is
R^ν = E_{ϖ_ν}[ψ(X_n, ξ_n) ψ(X_n, ξ_n)^⊤],
using U_k = ξ_k for all k.
The following assumptions are required in the main results of this section:
Assumption A1: The randomized policy φ̃^ν gives rise to a uni-chain Markov chain, with unique invariant pmf ϖ_ν, and the autocorrelation matrix R^ν defined in (<ref>) is positive definite.
Assumption A2: The inverse temperature κ_θ is twice continuously differentiable (C²) in θ, and the first and second derivatives of κ_θ are continuous and bounded.
We also require small ε > 0 in the specification of the policies. Denote
ε̄ := (1−β)² / [(1−β)² + β²]
Theorem. Consider the Q-learning algorithm (<ref>) with linear function approximation, and training policy (<ref>) defined using the tamed Gibbs policy (<ref>). Suppose moreover that (<ref>) holds.
Then, for any ε ∈ (0, ε̄) there is κ̄_ε > 0 for which the following hold using the (ε,κ_0)-tamed Gibbs policy, for any κ_0 ≥ κ̄_ε:
(i) The parameter estimates {θ_n} are bounded: there is a fixed constant B_ε such that (<ref>) holds with probability one from each initial condition.
(ii) There exists at least one solution to the projected Bellman equation (<ref>).
See <Ref> for an extension of (ii) to the ε-greedy policy.
To see why (i) is plausible, consider an algorithm approximating (<ref>), in which the minimum defining Q̲^{θ_n}(X_{n+1}) is replaced by substitution of the input used for training:
θ_{n+1} = θ_n + α_{n+1}[c(X_n,U_n) − Q^{θ_n}(X_n,U_n) + β Q^{θ_n}(X_{n+1}, U_{n+1})] ψ_n.
The use of a soft minimum instead of the hard minimum Q̲^{θ_n}(X_{n+1}) is common in the RL literature <cit.>.
Stability of the ODE@∞ for (<ref>) is relatively easy to establish, from which we obtain the following:
Proposition. Consider the Q-learning algorithm (<ref>) with linear function approximation, and training policy (<ref>) defined as the (ε,κ_0)-tamed Gibbs policy (<ref>) with ε ∈ (0,1) and κ_0 > 0. Suppose moreover that (<ref>) holds.
Then we obtain the conclusions of <Ref>:
(i) The parameter estimates {θ_n} are bounded with probability one from each initial condition.
(ii) There exists at least one solution θ* to f̄(θ*) = 0, with f̄ the mean flow for (<ref>).
We proceed with the proof of <Ref>. The proof of <Ref> is postponed to the Appendix.
The recursion (<ref>) may be expressed in a form similar to the TD(0) learning algorithm,
θ_{n+1} = θ_n + α_{n+1}[c_n ψ_n − ψ_n{ψ_n − βψ_{n+1}}^⊤θ_n]
This motivates consideration of the family of autocorrelation matrices
R_k(θ) = E_{ϖ_θ}[ψ_{n+k} ψ_n^⊤], defined for each k by stationarity,
so that R_0(θ) = R(θ) in the notation (<ref>).
The vector field for the mean flow associated with (<ref>) is Lipschitz continuous and has an attractive form.
Lemma. Under the assumptions of <Ref>:
(i) The vector field for the mean flow is f̄(θ) = A(θ)θ − b(θ), in which b(θ) = −E_{ϖ_θ}[ψ_n c_n] and A(θ) = −R_0(θ) + βR_{-1}(θ).
(ii) The limit defining f̄_∞ in (<ref>) exists and may be expressed f̄_∞(θ) = A_∞(θ)θ, where A_∞(θ) = A(θ/‖θ‖) for θ ≠ 0.
Proof: Identification of f̄ follows immediately from (<ref>).
The representation of the ODE@∞ follows from the structure of the policy highlighted in (<ref>), which implies ϖ_{rθ} = ϖ_θ, A(rθ) = A(θ), and b(rθ) = b(θ) for all r ≥ 1 when ‖θ‖ ≥ 1.
Lemma. Suppose that (<ref>) holds. Then, for the recursion (<ref>), there exists δ_ψ > 0, independent of θ, such that
R_0(θ) ≥ δ_ψ I for all θ ∈ ℝ^d
θ^⊤A(θ)θ ≤ −(1−β)δ_ψ‖θ‖² for all θ ∈ ℝ^d with ‖θ‖ ≥ 1.
Proof: The proof of the lower bound on R_0(θ) is identical to the proof of <Ref> in the Appendix.
From <Ref> (i) we have, for θ ∈ ℝ^d satisfying ‖θ‖ ≥ 1,
θ^⊤A(θ)θ = −θ^⊤R_0(θ)θ + βθ^⊤R_{-1}(θ)θ ≤ −(1−β)θ^⊤R_0(θ)θ ≤ −(1−β)δ_ψ‖θ‖²
Turning to the proof of <Ref>, let V_1(θ) = ‖θ‖² and apply <Ref> to obtain, whenever ‖ϑ_t‖ ≥ 1,
(d/dt)V_1(ϑ_t) = 2ϑ_t^⊤f̄(ϑ_t) = 2ϑ_t^⊤{A(ϑ_t)ϑ_t − b(ϑ_t)} ≤ 2[−δ_1‖ϑ_t‖² + ‖ϑ_t‖‖b(ϑ_t)‖]
with δ_1 = (1−β)δ_ψ.
This gives, with b̄ := sup_θ‖b(θ)‖ < ∞,
(d/dt)V_1(ϑ_t) ≤ −δ_1‖ϑ_t‖², ‖ϑ_t‖ ≥ max(1, 2b̄/δ_1)
We then obtain (V4) using V(θ) = √(V_1(θ)) = ‖θ‖ for ‖θ‖ ≥ max(1, 2b̄/δ_1) (modified in a neighborhood of the origin to impose the C¹ condition):
(d/dt)V(ϑ_t) ≤ −δ_v V(ϑ_t), ‖ϑ_t‖ ≥ max(1, 2b̄/δ_1), with δ_v = δ_1/4.
Part (i) then follows from <Ref> (i), and part (ii) from <Ref>.
§.§ Implications for the ε-greedy policy
A full analysis of Q-learning using the ε-greedy policy for training is beyond the scope of this paper, due to discontinuity of the vector field. We find here that <Ref> admits a partial extension.
We consider the corresponding mean flow (<ref>), and also the algorithm with matrix gain:
f̄^Z(θ) = −θ + [A(θ)]^{-1} b(θ), θ ∈ Θ_c
which is the dynamics expected when using Zap Q-learning based on (<ref>).
The set Θ_c defined in (<ref>) may be expressed as the disjoint union
Θ_c = ∪_i Θ_i
in which each Θ_i is an open convex polyhedron, with ϕ^θ = ϕ^{θ'} for all θ, θ' ∈ Θ_i.
Consequently, both A( · ) and b( · ) are constant on each set Θ_i.
For each θ ∈ ℝ^d, denote by 𝒫^θ the set of all randomized Q^θ-greedy policies: if φ̂ ∈ 𝒫^θ then
Σ_u φ̂(u | x) Q^θ(x,u) = Q̲^θ(x), x ∈ 𝖷.
If θ ∈ Θ_c then 𝒫^θ = {ϕ^θ} is a singleton.
Theorem. Suppose that (<ref>) holds. Then, the following hold for the mean flows associated with the Q-learning algorithm with ε-greedy training, provided 0 < ε < ε̄:
(i) There exists θ* ∈ ℝ^d and φ̂* ∈ 𝒫^{θ*} such that f̄(θ*) = 0, with f̄ defined in (<ref>), in which the expectation is taken in steady state using ϖ_{θ*} obtained from the randomized policy
φ̃^{θ*}(u | x) = (1−ε) φ̂*(u | x) + ε ν(u)
(ii) If θ* ∈ Θ_c then θ* is locally asymptotically stable for the mean flow with vector field f̄.
(iii) If θ* ∈ Θ_i for some i, then θ* is locally asymptotically stable for the mean flow with vector field f̄^Z, with domain of attraction including all of Θ_i.
The proof of (i) is contained in <Ref>.
If f̄(θ*) = 0 with θ* ∈ Θ_c, it then follows from the definition of the vector field that θ* = [A(θ*)]^{-1} b(θ*). Consequently, for θ in a neighborhood of θ* contained in Θ_c,
f̄(θ) = A(θ*)(θ − θ*)
See <Ref> for a proof that A(θ*) is Hurwitz, so that θ* is locally asymptotically stable as claimed in (ii).
We have under the assumptions of (iii),
f̄^Z(θ) = −θ + θ*, θ ∈ Θ_i
If ϑ_0 ∈ Θ_i, it follows that the solution to (d/dt)ϑ_t = f̄^Z(ϑ_t) is given by
ϑ_t = θ* + [ϑ_0 − θ*] e^{−t}
Convexity of Θ_i is used here to ensure that ϑ_t ∈ Θ_i for all t, which completes the proof of (iii).
§ CONCLUSIONS AND THOUGHTS FOR THE FUTURE
This article began as a companion to the INFORMS APS lecture delivered by the author in June, 2023.
The scope of the lecture and this article grew to include several significant new contributions, which invite many avenues for future research:
1. The new Zap Zero SA algorithm (<ref>) is only one possible approach to approximate the Newton-Raphson flow.
There may be approaches based on momentum—it will be worthwhile revisiting the NESA algorithm of <cit.>.
2. We now know that a solution to the projected Bellman equation exists under mild conditions, the most important of which involves the choice of policy used for training.
3. The extension to average-cost optimal control will be possible through consideration of <cit.>. And for the discounted case, better algorithms and sharper bounds might be obtained by adopting relative Q-learning algorithms <cit.>.
4.
Extensions of Zap Q-learning are unexplored for parameter-dependent training.
5.
We should consider other paradigms for algorithm design. The recent approaches <cit.> are based on the linear programming formulation of optimal control due to Manne, 1960 <cit.>.
Appendix
§ STABILITY WITH OPTIMISM
This section concerns analysis of Q-learning with optimistic training, so that the input is defined by a randomized policy φ̃^θ. When θ is frozen, so that U_k ∼ φ̃^θ( · | X_k) for each k, then the state process is a time-homogeneous Markov chain with transition matrix
P_θ(x,x') = Σ_u φ̃^θ(u | x) P_u(x,x'), x, x' ∈ 𝖷.
Recall that in this case the joint process {(X_k,U_k) : k ≥ 0} is also Markovian, with transition matrix given in (<ref>).
We maintain the notation ϖ_θ for the unique invariant pmf of T_θ.
Of course, the parameter θ is never frozen in any algorithm: the transition matrices P_θ and T_θ are introduced for analysis.
It is assumed that the function class is linear, {Q^θ = θ^⊤ψ : θ ∈ ℝ^d} with ψ : 𝖷 × 𝖴 → ℝ^d.
§.§ A truly oblivious policy
We require structure for the truly oblivious policy defined by U_k ≡ ξ_k in the definition of the state process in (<ref>).
The transition matrix for the joint process {(X_k,U_k) : k ≥ 0} can be obtained from (<ref>), and is denoted
T_ν(z,z') = P_u(x,x') ν(u'), z=(x,u), z'=(x',u') ∈ 𝖷 × 𝖴.
The invariance equation ϖ_ν(z') = Σ_z ϖ_ν(z) T_ν(z,z') implies that the invariant pmf is of product form:
ϖ_ν(z') = π_ν(x') ν(u'), z'=(x',u') ∈ 𝖷 × 𝖴,
in which π_ν(x') = Σ_u ϖ_ν(x',u) is the steady-state marginal distribution of the state under this policy.
Similar notation is adopted for each of the invariant pmfs,
π_θ(x) = Σ_u ϖ_θ(x,u), x ∈ 𝖷, θ ∈ ℝ^d.
These are the invariant pmfs for {P_θ} appearing in (<ref>).
Lemma. Suppose that the Markov chain with transition matrix T_ν is uni-chain, so that ϖ_ν is the unique invariant pmf.
Consider any one of the three choices of {φ_k} used in (<ref>) with ε < 1, and any choice of κ in the case of (<ref>) or {κ_θ} in the case of (<ref>). The following conclusions then hold:
(i) T_θ is also uni-chain, so that ϖ_θ is unique for any θ.
(ii) There is a constant δ_0 > 0 such that ϖ_θ(z) ≥ δ_0 ϖ_ν(z) for all z and θ. The constant δ_0 may depend on the policy parameters, but not on θ.
(iii) ϖ_θ(x,u) ≥ ε π_θ(x) ν(u) for all x, u, and θ.
(iv) R_0^ν(θ) ≥ δ_0 R^ν for all θ ∈ ℝ^d.
Proof: Let 𝖷_0 denote the support of π_ν and 𝖴_0 denote the support of ν.
The uni-chain assumption is equivalent to the following reachability criterion: there are N ≥ 1 and δ_N > 0 such that
Σ_{k=1}^N T^k_ν(z,z') ≥ δ_N for any z ∈ 𝖷×𝖴 and z' ∈ 𝖷_0×𝖴_0,
with T^k_ν the k-step transition matrix.
This is a version of Doeblin's minorization condition, which implies uniform ergodicity when the chain is aperiodic <cit.>.
In view of (<ref>) we have T_θ ≥ ε T_ν entrywise, so that for any θ,
Σ_{k=1}^N T^k_θ(z,z') ≥ Σ_{k=1}^N ε^k T^k_ν(z,z') ≥ ε^N δ_N for any z ∈ 𝖷×𝖴 and z' ∈ 𝖷_0×𝖴_0.
Hence the family {T_θ : θ ∈ ℝ^d} satisfies a uniform Doeblin minorization. In particular, each transition matrix is uni-chain, which establishes (i).
Part (ii) follows from the bounds above and invariance, with δ_0 = ε^N δ_N / N:
ϖ_θ(z') = Σ_z ϖ_θ(z)((1/N) Σ_{k=1}^N T^k_θ(z,z')) ≥ (1/N) ε^N δ_N, z' ∈ 𝖷_0×𝖴_0
Part (iii) also follows from invariance, in the following one-step form: we have from (<ref>), and using the bound φ̃^θ(u' | x') ≥ ε ν(u'),
ϖ_θ(z') = Σ_z ϖ_θ(z) T_θ(z,z') = Σ_{x,u} ϖ_θ(x,u) P_u(x,x') φ̃^θ(u' | x') ≥ ε Σ_{x,u} ϖ_θ(x,u) P_u(x,x') ν(u') = ε π_θ(x') ν(u')
For part (iv), consider the definition (<ref>), which gives
R_0^ν(θ) = Σ_{x,u} π_θ(x) ν(u) ψ(x,u) ψ(x,u)^⊤
Applying (ii) gives π_θ(x) ≥ δ_0 π_ν(x) for all x, and hence the desired bound:
R_0^ν(θ) ≥ δ_0 Σ_{x,u} π_ν(x) ν(u) ψ(x,u) ψ(x,u)^⊤ = δ_0 R^ν
§.§ Mean flow for the ε-greedy policy
In this subsection the input is chosen to be the ε-greedy policy (<ref>). The motivation is in part the fact that establishing stability of the ODE@∞ is far easier in this case than for the tamed Gibbs approximation.
The transition matrix (<ref>) becomes
T_θ(z,z') = P_u(x,x'){(1−ε) 1{u' = ϕ^θ(x')} + ε ν(u')}, z=(x,u), z'=(x',u') ∈ 𝖷×𝖴.
The family {T_θ : θ ∈ ℝ^d} is finite because there are only a finite number of deterministic stationary policies; it takes on a constant value on each connected component of Θ_c (recall (<ref>)).
Compact representations of f and f̄ are obtained with additional notation. For n ≥ 0 denote
ψ̃_n = ψ(X_n, ϕ^{θ_n}(X_n)), ψ^ν_n = ψ(X_n, ξ_n), c̃_n = c(X_n, ϕ^{θ_n}(X_n)), c^ν_n = c(X_n, ξ_n)
Based on this notation, we have under the ε-greedy policy (<ref>, <ref>),
f_{n+1}(θ_n) = (1 − B_n)(c̃_n + [βψ̃_{n+1} − ψ̃_n]^⊤θ_n)ψ̃_n + B_n(c^ν_n + [βψ̃_{n+1} − ψ^ν_n]^⊤θ_n)ψ^ν_n
Lemma. E_{ϖ_θ}[ψ_n{ψ̃_{n+1}}^⊤] = R_{-1}(θ) + ε D(θ), in which
D(θ) = E_{ϖ_θ}[ψ_n{ψ̃_{n+1} − ψ^ν_{n+1}}^⊤]
Proof: Starting with the definition R_{-1}(θ) = E_{ϖ_θ}[ψ_n{ψ_{n+1}}^⊤], we have under the ε-greedy policy,
R_{-1}(θ) = (1−ε) E_{ϖ_θ}[ψ_n{ψ̃_{n+1}}^⊤] + ε E_{ϖ_θ}[ψ_n{ψ^ν_{n+1}}^⊤]
= E_{ϖ_θ}[ψ_n{ψ̃_{n+1}}^⊤] − ε E_{ϖ_θ}[ψ_n{ψ̃_{n+1} − ψ^ν_{n+1}}^⊤]
and rearranging gives the claim.
Lemma. The vector fields for the mean flow and the ODE@∞ for the ε-greedy policy are
f̄(θ) = A(θ)θ − b(θ), f̄_∞(θ) = A(θ)θ
in which
A(θ) = −[R_0(θ) − βR_{-1}(θ)] + εβ D(θ)
b(θ) = (1−ε) b̃(θ) + ε b^ν(θ)
with b̃(θ) = −E_{ϖ_θ}[ψ̃_n c̃_n] and b^ν(θ) = −E_{ϖ_θ}[ψ^ν_n c(X_n, ξ_n)].
Proof: The representation (<ref>) is equivalently expressed f_{n+1}(θ_n) = A_{n+1}θ_n − b_{n+1}, in which
A_{n+1} = ψ_n[βψ̃_{n+1} − ψ_n]^⊤
b_{n+1} = −[(1 − B_n) ψ̃_n c(X_n, ϕ^{θ_n}(X_n)) + B_n ψ^ν_n c(X_n, ξ_n)]
The expression for b(θ) in f̄(θ) = E_{ϖ_θ}[f_{n+1}(θ)] = A(θ)θ − b(θ) is immediate.
We have A(θ) = −R_0(θ) + β E_{ϖ_θ}[ψ_n{ψ̃_{n+1}}^⊤], so that (<ref>) follows from <Ref>.
The expression for f̄_∞ follows from the fact that A and b are invariant under positive scaling of their arguments: A(rθ) = A(θ) and b(rθ) = b(θ) for any θ and r > 0.
The mean flow (<ref>) must be interpreted as a differential inclusion, because the vector field is not continuous.
The form of the expression for A(θ) in (<ref>) is intended to evoke the similar formula (<ref>) obtained for (<ref>).
The following conclusions are based on arguments similar to what is used to obtain stability of on-policy TD-learning <cit.>.
Recall the definition (<ref>): ε̄ = (1−β)² / [(1−β)² + β²].
Proposition. If ε < ε̄, then there is β_ε > 0 such that v^⊤A(θ)v ≤ −β_ε‖v‖² for each v, θ ∈ ℝ^d.
Proof: Applying <Ref> gives, for any v, θ,
v^⊤A(θ)v ≤ −(1−β) v^⊤R_0(θ)v + εβ v^⊤D(θ)v
The inequality follows from the bound v^⊤R_k(θ)v ≤ v^⊤R_0(θ)v, valid for any k.
We are left to bound the term involving D.
Write
v^⊤D(θ)v = E_{ϖ_θ}[(v^⊤ψ_n)(v^⊤ψ̃_{n+1})] − E_{ϖ_θ}[(v^⊤ψ_n)(v^⊤ψ^ν_{n+1})]
Using the bound xy ≤ ½[x² + y²] for x, y ∈ ℝ, we obtain for any δ_φ, δ_ν > 0,
|E_{ϖ_θ}[(v^⊤ψ_n)(v^⊤ψ̃_{n+1})]| ≤ ½δ_φ^{-1} v^⊤R_0(θ)v + ½δ_φ v^⊤R_0^φ(θ)v
|E_{ϖ_θ}[(v^⊤ψ_n)(v^⊤ψ^ν_{n+1})]| ≤ ½δ_ν^{-1} v^⊤R_0(θ)v + ½δ_ν v^⊤R_0^ν(θ)v
Recall from (<ref>) that R_0(θ) = (1−ε) R_0^φ(θ) + ε R_0^ν(θ). Set δ_φ = (1−ε)η and δ_ν = εη, with η > 0 to be chosen. Then,
v^⊤D(θ)v ≤ ½[(δ_φ^{-1} + δ_ν^{-1}) v^⊤R_0(θ)v + δ_φ v^⊤R_0^φ(θ)v + δ_ν v^⊤R_0^ν(θ)v]
= ½[(1/ε + 1/(1−ε))(1/η) + η] v^⊤R_0(θ)v
Minimizing the right-hand side over η gives η*_ε = √(ε^{-1} + (1−ε)^{-1}), and on substitution,
v^⊤D(θ)v ≤ η*_ε v^⊤R_0(θ)v
Substitution into (<ref>) gives the final bound,
v^⊤A(θ)v ≤ [−(1−β) + εβη*_ε] v^⊤R_0(θ)v
The coefficient is negative for positive ε if and only if ε < ε̄. We obtain the desired bound with
β_ε = [(1−β) − εβη*_ε] min_θ λ_min(R_0(θ))
<Ref>
implies that the minimum is strictly positive.
The extension of <Ref> to the tamed Gibbs policy requires approximations summarized in the next subsection.
§.§ Entropy and Gibbs bounds
Consider a single Gibbs (equivalently, Boltzmann) pmf on 𝖴 with energy function E : 𝖴 → ℝ and inverse temperature κ > 0:
p_κ(u) = (1/Z_κ) exp(−κ E(u)), u ∈ 𝖴.
The normalizing factor Z_κ is commonly called the partition function. The entropy of p_κ is denoted
H_κ = −Σ_u p_κ(u) log(p_κ(u)) = Σ_u p_κ(u)[κ E(u) + log(Z_κ)]
It is well known that bounds on entropy lead to bounds on the quality of the softmin approximation.
Denote E̲ := min_u E(u).
Lemma. E̲ ≤ Σ_u p_κ(u) E(u) ≤ E̲ + (1/κ) log(|𝖴|), for any κ > 0.
Proof: The uniform distribution maximizes entropy, giving
Σ_u p_κ(u)[κ E(u) + log(Z_κ)] ≤ log(|𝖴|)
The proof is completed on substituting the following bound for the log partition function:
log(Z_κ) = log Σ_u exp(−κ E(u)) ≥ −κ E̲
An implication of the lemma to the policy (<ref>): for any initial distribution for (X_0, U_0),
Q̲^θ(X_{k+1}) ≤ E[Q^θ(X_{k+1}, φ_{k+1}) | X_0^{k+1}, U_0^k] ≤ Q̲^θ(X_{k+1}) + (1/κ_θ) log(|𝖴|), k ≥ 0.
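The softmin bound is easy to verify numerically. The following check is our own illustration, not part of the source analysis.

```python
import numpy as np

# Check: Emin <= sum_u p_kappa(u) E(u) <= Emin + log|U|/kappa.
rng = np.random.default_rng(0)
E = rng.uniform(0.0, 5.0, size=8)          # energies over |U| = 8 actions
for kappa in (0.5, 2.0, 10.0):
    logits = -kappa * (E - E.min())        # shift: leaves the pmf unchanged
    p = np.exp(logits)
    p /= p.sum()
    soft = float(p @ E)
    assert E.min() <= soft <= E.min() + np.log(E.size) / kappa + 1e-12
    print(f"kappa={kappa:5.1f}  softmin={soft:.4f}  min={E.min():.4f}")
```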
§.§ Proof of <Ref>
The proof of <Ref> closely follows the proof of <Ref>. We begin with a companion to <Ref>:
Lemma. We have, for the (ε,κ_0)-tamed Gibbs policy,
E_{ϖ_θ}[ψ_n{ψ̃_{n+1}}^⊤] = R_{-1}(θ) + ε D(θ) + (1−ε) E(θ)
in which
D(θ) = E_{ϖ_θ}[ψ_n{ψ̃_{n+1} − ψ^ν_{n+1}}^⊤]
E(θ) = E_{ϖ_θ}[ψ_n{ψ̃_{n+1} − ψ^G_{n+1}}^⊤]
with ψ^G_{n+1} = ψ(X_{n+1}, φ_{n+1}) and φ_{n+1} the Gibbs-distributed input.
We have a partial extension of <Ref>:
Lemma. The following holds for the (ε,κ_0)-tamed Gibbs policy, subject to (<ref>) and ε < ε̄: there is β_ε > 0 such that
θ^⊤A(θ)θ ≤ −β_ε‖θ‖²
for all κ_0 > 0 sufficiently large, and all ‖θ‖ ≥ 1.
Proof: Applying <Ref> to (<ref>), and following the same steps as in the proof of <Ref>, we obtain
θ^⊤A(θ)θ ≤ −β^0_ε θ^⊤R_0(θ)θ + (1−ε)β θ^⊤E(θ)θ
with β^0_ε = [(1−β) − εβη*_ε] > 0 and η*_ε = √(ε^{-1} + (1−ε)^{-1}).
From the definition (<ref>) we have
θ^⊤E(θ)θ = E_{ϖ_θ}[Q^θ(X_n,U_n){Q̲^θ(X_{n+1}) − Q^θ(X_{n+1}, φ_{n+1})}]
Applying (<ref>) and the expression for κ_θ in (<ref>), we obtain for ‖θ‖ ≥ 1,
|θ^⊤E(θ)θ| ≤ (1/κ_0)‖θ‖ log(|𝖴|) E_{ϖ_θ}[|Q^θ(X_n,U_n)|] ≤ (1/κ_0)‖θ‖² log(|𝖴|)√λ̄
with λ̄ the maximum over all θ of the maximum eigenvalue of R_0(θ). Combining these bounds completes the proof.
Precisely as in the proof of <Ref>, we obtain a solution to (V4) using V(θ) = ‖θ‖ (recall (<ref>)), which implies (<ref>) exactly as in the case when Φ is exogenous.
The existence of θ* is also identical to the proof in <Ref>.
Let θ^{κ_0} denote the solution to the projected Bellman equation for the (ε,κ_0)-tamed Gibbs policy, in which ε < ε̄ is fixed.
Observe that in <Ref> we obtain a uniform bound over all large κ_0. An examination of the proof of <Ref> shows that there is a constant b_ε such that ‖θ^{κ_0}‖ ≤ b_ε for all sufficiently large κ_0.
Hence we can find a subsequence κ_0^n → ∞ as n → ∞ for which the following limits exist:
θ* = lim_{n→∞} θ^{κ_0^n}, ϖ* = lim_{n→∞} ϖ_n,
in which ϖ_n is the invariant pmf obtained from the policy using θ^{κ_0^n}.
The invariant pmfs have the form
ϖ_n(x,u) = π_n(x) φ̃^n(u | x)
with φ̃^n defined in (<ref>) using κ_0^n, and π_n the first marginal of ϖ_n.
It follows that the limiting invariant pmf has the same structure,
ϖ*(x,u) = π*(x) φ̃^{θ*}(u | x)
Since κ_0^n ↑ ∞, convergence implies that φ̃^{θ*} is of the form (<ref>) with φ̂* ∈ 𝒫^{θ*}.
Letting f̄_n denote the vector field obtained using θ^{κ_0^n}, we must have convergence for each θ:
f̄(θ) = lim_{n→∞} f̄_n(θ) = E_{ϖ*}[ψ_n 𝒟(X_n,U_n;θ)],
in which U_n is defined using the randomized ε-greedy policy φ̃^{θ*}, and 𝒟 defined in (<ref>) is a continuous function of θ. Since f̄_n(θ^{κ_0^n}) = 0 for each n, we conclude that f̄(θ*) = 0 as desired.
§ REFERENCES
aboberbor01
J. Abounadi, D. Bertsekas, and V. S. Borkar.
Learning algorithms for Markov decision processes with average
cost.
SIAM Journal on Control and Optimization, 40(3):681–698, 2001.
asmgly07
S. Asmussen and P. W. Glynn.
Stochastic Simulation: Algorithms and Analysis, volume 57 of
Stochastic Modelling and Applied Probability.
Springer-Verlag, New York, 2007.
avrbordolpat21
K. E. Avrachenkov, V. S. Borkar, H. P. Dolhare, and K. Patil.
Full gradient DQN reinforcement learning: A provably convergent
scheme.
In Modern Trends in Controlled Stochastic Processes:, pages
192–220. Springer, 2021.
bai95
L. Baird.
Residual algorithms: Reinforcement learning with function
approximation.
In A. Prieditis and S. Russell, editors, Proc. Machine
Learning, pages 30–37. Morgan Kaufmann, San Francisco (CA), 1995.
bascurkraneu21
J. Bas Serrano, S. Curi, A. Krause, and G. Neu.
Logistic Q-learning.
In A. Banerjee and K. Fukumizu, editors, Proc. of The Intl.
Conference on Artificial Intelligence and Statistics, volume 130, pages
3610–3618, 13–15 Apr 2021.
ber12a
D. P. Bertsekas.
Dynamic programming and optimal control. Vol. II.
Athena Scientific, Belmont, MA, fourth edition, 2012.
bha11
S. Bhatnagar.
The Borkar–Meyn Theorem for asynchronous stochastic
approximations.
Systems & control letters, 60(7):472–478, 2011.
chedevborkonmey21
V. Borkar, S. Chen, A. Devraj, I. Kontoyiannis, and S. Meyn.
The ODE method for asymptotic statistics in stochastic
approximation and reinforcement learning.
arXiv e-prints:2110.14427, pages 1–50, 2021.
bor20a
V. S. Borkar.
Stochastic Approximation: A Dynamical Systems Viewpoint.
Hindustan Book Agency, Delhi, India, 2nd edition, 2021.
bormey00a
V. S. Borkar and S. P. Meyn.
The ODE method for convergence of stochastic approximation and
reinforcement learning.
SIAM J. Control Optim., 38(2):447–469, 2000.
chedevbusmey20b
S. Chen, A. M. Devraj, F. Lu, A. Bušić, and S. Meyn.
Zap Q-Learning with nonlinear function approximation.
In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin,
editors, Proc. Conference on Neural Information Processing Systems
(NeurIPS), and arXiv e-prints 1910.05405, volume 33, pages 16879–16890,
2020.
dev19
A. M. Devraj.
Reinforcement Learning Design with Optimal Learning Rate.
PhD thesis, University of Florida, 2019.
devbusmey19a
A. M. Devraj, A. Bušić, and S. Meyn.
On matrix momentum stochastic approximation and applications to
Q-learning.
In Allerton Conference on Communication, Control, and
Computing, pages 749–756, Sep 2019.
devbusmey21
A. M. Devraj, A. Bušić, and S. Meyn.
Fundamental design principles for reinforcement learning algorithms.
In K. G. Vamvoudakis, Y. Wan, F. L. Lewis, and D. Cansever, editors,
Handbook on Reinforcement Learning and Control, Studies in Systems,
Decision and Control series (SSDC, volume 325). Springer, 2021.
devmey17a
A. M. Devraj and S. P. Meyn.
Fastest convergence for Q-learning.
ArXiv e-prints, July 2017.
devmey17b
A. M. Devraj and S. P. Meyn.
Zap Q-learning.
In Proc. of the Intl. Conference on Neural Information
Processing Systems, pages 2232–2241, 2017.
devmey22
A. M. Devraj and S. P. Meyn.
Q-learning with uniformly bounded variance.
IEEE Trans. on Automatic Control, 67(11):5948–5963, 2022.
goptho22
A. Gopalan and G. Thoppe.
Approximate Q-learning and SARSA(0) under the ϵ-greedy
policy: a differential inclusion analysis.
arXiv preprint arXiv:2205.13617, 2022.
gor95
G. J. Gordon.
Stable function approximation in dynamic programming.
In Proc. ICML (see also the full-length technical report,
CMU-CS-95-103), pages 261–268. Elsevier, 1995.
jaajorsin94a
T. Jaakola, M. Jordan, and S. Singh.
On the convergence of stochastic iterative dynamic programming
algorithms.
Neural Computation, 6:1185–1201, 1994.
jinyanwanjor20
C. Jin, Z. Yang, Z. Wang, and M. I. Jordan.
Provably efficient reinforcement learning with linear function
approximation.
In Conference on Learning Theory, pages 2137–2143, 2020.
kokorekha99
P. Kokotović, H. K. Khalil, and J. O'Reilly.
Singular Perturbation Methods in Control: Analysis and Design.
Society for Industrial and Applied Mathematics, 1999.
leehe19
D. Lee and N. He.
A unified switching system perspective and ODE analysis of
Q-learning algorithms.
arXiv, page arXiv:1912.02270, 2019.
limkimlee22
H.-D. Lim, D. W. Kim, and D. Lee.
Regularized Q-learning.
arXiv e-prints, pages arXiv–2202, 2022.
mehmeyneulu21
F. Lu, P. G. Mehta, S. P. Meyn, and G. Neu.
Convex Q-learning.
In American Control Conf., pages 4749–4756. IEEE, 2021.
lumehmeyneu22
F. Lu, P. G. Mehta, S. P. Meyn, and G. Neu.
Convex analytic theory for convex Q-learning.
In IEEE Conference on Decision and Control, pages 4065–4071,
Dec 2022.
maeszebhasut10
H. R. Maei, C. Szepesvári, S. Bhatnagar, and R. S. Sutton.
Toward off-policy learning control with function approximation.
In Proc. ICML, pages 719–726, USA, 2010. Omnipress.
man60a
A. S. Manne.
Linear programming and sequential decisions.
Management Sci., 6(3):259–267, 1960.
margardralyg22
A. Martinelli, M. Gargiani, M. Draskovic, and J. Lygeros.
Data-driven optimal control of affine systems: A linear programming
perspective.
IEEE Control Systems Letters, 6:3092–3097, 2022.
margarlyg22
A. Martinelli, M. Gargiani, and J. Lygeros.
Data-driven optimal control with a relaxed linear program.
Automatica, 136:110052, 2022.
mehmey09a
P. G. Mehta and S. P. Meyn.
Q-learning and Pontryagin's minimum principle.
In Proc. of the Conf. on Dec. and Control, pages 3598–3605,
Dec. 2009.
melmeyrib08
F. S. Melo, S. P. Meyn, and M. I. Ribeiro.
An analysis of reinforcement learning with function approximation.
In Proc. ICML, pages 664–671, New York, NY, 2008.
CSRL
S. Meyn.
Control Systems and Reinforcement Learning.
Cambridge University Press, Cambridge, 2022.
APS2023
S. Meyn.
Who is Q? a beginner's guide to reinforcement learning—slides for
the INFORMS APS lecture.
Online, DOI 10.13140/RG.2.2.24897.33127, July 2023.
MT
S. P. Meyn and R. L. Tweedie.
Markov chains and stochastic stability.
Cambridge University Press, Cambridge, second edition, 2009.
Published in the Cambridge Mathematical Library. 1993 edition online.
pol90
B. T. Polyak.
A new method of stochastic approximation type.
Avtomatika i telemekhanika (in Russian). translated in Automat.
Remote Control, 51 (1991), pages 98–107, 1990.
rambha17
A. Ramaswamy and S. Bhatnagar.
A generalization of the Borkar-Meyn Theorem for stochastic
recursive inclusions.
Mathematics of Operations Research, 42(3):648–661, 2017.
rup85
D. Ruppert.
A Newton-Raphson version of the multivariate Robbins-Monro
procedure.
The Annals of Statistics, 13(1):236–245, 1985.
rup88
D. Ruppert.
Efficient estimators from a slowly convergent Robbins-Monro
processes.
Technical Report Tech. Rept. No. 781, Cornell University, School of
Operations Research and Industrial Engineering, Ithaca, NY, 1988.
sutbar18
R. Sutton and A. Barto.
Reinforcement Learning: An Introduction.
MIT Press, Cambridge, MA, 2nd edition, 2018.
sut88
R. S. Sutton.
Learning to predict by the methods of temporal differences.
Mach. Learn., 3(1):9–44, 1988.
sze10
C. Szepesvári.
Algorithms for Reinforcement Learning.
Synthesis Lectures on Artificial Intelligence and Machine Learning.
Morgan & Claypool Publishers, 2010.
Simons_bootcamp2020
C. Szepesvari, E. Brunskill, S. Bubeck, A. Malek, S. Meyn, A. Tewari, and
M. Wang.
Theory of Reinforcement Learning Boot Camp. Aug 31 to Sep 4, 2020.
<https://simons.berkeley.edu/workshops/rl-2020-bc>.
tsi94a
J. Tsitsiklis.
Asynchronous stochastic approximation and Q-learning.
Machine Learning, 16:185–202, 1994.
tsivan97
J. N. Tsitsiklis and B. Van Roy.
An analysis of temporal-difference learning with function
approximation.
IEEE Trans. Automat. Control, 42(5):674–690, 1997.
royThesis98
B. Van Roy.
Learning and Value Function Approximation in Complex Decision
Processes.
PhD thesis, Massachusetts Institute of Technology, Cambridge, MA,
1998.
AAI0599623.
wai19a
M. J. Wainwright.
Stochastic approximation with cone-contractive operators: Sharp
ℓ_∞-bounds for Q-learning.
CoRR, abs/1905.06265, 2019.
wat89
C. J. C. H. Watkins.
Learning from Delayed Rewards.
PhD thesis, King's College, Cambridge, Cambridge, UK, 1989.
watday92a
C. J. C. H. Watkins and P. Dayan.
Q-learning.
Machine Learning, 8(3-4):279–292, 1992.
yanwan19
L. Yang and M. Wang.
Sample-optimal parametric Q-learning using linearly additive
features.
In International Conference on Machine Learning, pages
6995–7004, 2019.
yanwan19b
L. Yang and M. Wang.
Reinforcement learning in feature space: Matrix bandit, kernels, and
regret bound.
In International Conference on Machine Learning, pages
10746–10756, 2020.
faibor23
F. Zarin Faizal and V. Borkar.
Functional Central Limit Theorem for Two Timescale Stochastic
Approximation.
arXiv e-prints, page arXiv:2306.05723, June 2023.
Learning to Communicate using Contrastive Learning

Yat Long Lo, Biswa Sengupta, Jakob Foerster, Michael Noukhovitch
(arXiv:2307.01403)
Communication is a powerful tool for coordination in multi-agent RL. But inducing an effective, common language is a difficult challenge, particularly in the decentralized setting. In this work, we introduce an alternative perspective where communicative messages sent between agents are considered as different incomplete views of the environment state. By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning to maximize the mutual information between messages of a given trajectory. In communication-essential environments, our method outperforms previous work in both performance and learning speed. Using qualitative metrics and representation probing, we show that our method induces more symmetric communication and captures global state information from the environment. Overall, we show the power of contrastive learning and the importance of leveraging messages as encodings for effective communication.
§ INTRODUCTION
Communication is a key capability necessary for effective coordination among agents in partially observable environments. In multi-agent reinforcement learning (MARL) <cit.>, agents can use their actions to transmit information <cit.> but continuous or discrete messages on a communication channel <cit.>, i.e., linguistic communication <cit.>, are more flexible and powerful because they can convey more complex concepts. To successfully communicate, a speaker and a listener must share a common language with a shared understanding of the symbols being used <cit.>. Emergent communication or learning a common protocol <cit.>, is a thriving research direction but most works focus on simple, single-turn, sender-receiver games <cit.>. In more visually and structurally complex MARL environments <cit.>, existing approaches often rely on centralized learning mechanisms by sharing models <cit.> or gradients <cit.>.
However, a centralized controller is impractical in many real-world environments <cit.> where agents cannot easily synchronize and must act independently i.e. decentralized. Centralized training with decentralized execution (CTDE) <cit.> is a middle-ground between purely centralized and decentralized methods but may not perform better than purely decentralized training <cit.>. A centralized controller suffers from the curse of dimensionality: as the number of agents it must control increases, the amount of communication between agents to process increases exponentially <cit.>. Furthermore, the fully decentralized setting is more flexible and requires fewer assumptions about other agents, making it more realistic in many real-world scenarios <cit.>. Hence, this work explores learning to communicate to coordinate agents in the decentralized setting. In MARL, this means each agent will have its own model to decide how to act and communicate, and no agents share parameters or gradients.
Typical RL approaches to decentralized communication are known to perform poorly even in simple tasks <cit.> due to the large space of communication to explore, the high variance of RL, and a lack of common grounding on which to base communication <cit.>. Earlier work leveraged how communication influences other agents <cit.> to learn the protocol. Most recently, <cit.> proposed agents that autoencode their observations and use the encodings as communication, using the shared environment as the common grounding. We build on this work in using both the shared environment and the relationship between sent and received messages to ground a protocol. We extend the <cit.> perspective that agents' messages are encodings and propose that agents in similar states should produce similar messages. This perspective leads to a simple method based on contrastive learning to ground communication.
Inspired by the literature in representation learning that uses different “views” of a data sample <cit.>, for a given trajectory, we propose that an agent's observation is a “view” of the environment state. Thus, different agents' messages are encodings of different incomplete “views” of the same underlying state. From this perspective, messages from the same state should be more similar to each other than to those from distant states or other trajectories. We visually show our perspective in Figure <ref>. We propose Communication Alignment Contrastive Learning (CACL), which each agent uses contrastive learning between sent and received messages to learn to communicate.
We experimentally validate CACL in three communication-essential environments and show how CACL leads to improved performance and speed, outperforming state-of-the-art decentralized MARL communication algorithms. To understand CACL's success, we propose a suite of qualitative and quantitative metrics. We demonstrate that CACL leads to more symmetric communication (i.e., different agents communicate similarly when faced with the same observations), allowing agents to be more mutually intelligible. By treating our messages as representations, we show that CACL's messages capture global semantic information about the environment better than baselines. Overall, we argue that contrastive learning is a powerful direction for multi-agent communication and has fundamental benefits over previous approaches.
§ RELATED WORK
Learning to coordinate multiple RL agents is a challenging and unsolved task where naively applying single-agent RL algorithms often fails <cit.>. Recent approaches focus on neural network-based agents <cit.> with a message channel to develop a common communication protocol <cit.>. To handle issues of non-stationarity, some work focuses on centralized learning approaches that globally share models <cit.>, training procedures <cit.>, or gradients <cit.> among agents. This improves coordination and can reduce optimization issues but results are often still sub-optimal in practise <cit.> and may violate independence assumptions, effectively modelling the multi-agent scenario as a single agent <cit.>.
This work focuses on independent, decentralized agents and non-differentiable communication. In previous work, <cit.> propose a loss to influence other agents but require explicit and complex models of other agents and their experiments focus on mixed cooperative-competitive scenarios. <cit.> add biases to each agent's loss function that separately encourage positive listening (i.e., the listener to act differently for different messages) and positive signaling (i.e., the speaker to produce diverse messages in different situations). Their method is simpler but requires task-specific hyperparameter tuning to achieve reasonable performance and underperforms in sensory-rich environments <cit.>. Our work is closest to <cit.>, who leverage autoencoding as their method to learn a message protocol in cooperative 2D MARL games. Agents learn to reconstruct their observations and communicate their autoencoding. It outperforms previous works while being algorithmically and conceptually simpler. Our method builds on this encoding perspective by considering other agents' messages to ground communication. Whereas agents in <cit.> can only learn to encode the observation, our approach leverages the relationship between different agents' messages to encode global state information. Empirically, our method is also more efficient as it requires no extra learning parameters whereas <cit.> learn and discard their decoder network.
Note that our setup uses continuous messages instead of discrete <cit.>, a standard choice in contrastive learning <cit.> and embodied multi-agent communication <cit.>.
Autoencoding is a form of generative self-supervised learning (SSL) <cit.>. We propose to use another form of SSL, contrastive learning <cit.>, as the basis for learning communication. We are motivated by recent work that achieves state-of-the-art representation learning on images using contrastive learning methods <cit.> and leverages multiple "views" of the data. Whereas negative samples are simply different images, positive samples are image data augmentations or “views” of the original image <cit.>. We treat agents' messages of the same state in a trajectory as positives of each other, so we base our method on SupCon (Supervised Contrastive Learning) <cit.> which modifies the classic contrastive objective to account for multiple positive samples. Relatedly, <cit.> use a two-agent discrete communication setup to do contrastive learning on images, we do the opposite and leverage contrastive learning to learn multi-agent communication in an RL environment.
§ PRELIMINARIES
We base our investigations on decentralized partially observable Markov decision processes (Dec-POMDPs) with N agents to describe a fully cooperative multi-agent task <cit.>. A Dec-POMDP consists of a tuple G = ⟨S, A, P, R, Z, Ω, N, γ⟩. s ∈ S is the true state of the environment. At each time step, each agent i ∈ {1,…,N} chooses an action a^i ∈ A^i to form a joint action a ∈ A ≡ A^1 × A^2 × ⋯ × A^N. This leads to an environment transition according to the transition function P(s' | s, a^1,…,a^N) : S × A × S → [0, 1]. All agents share the same reward function R(s, a) : S × A → ℝ. γ ∈ [0, 1) is a discount factor. As the environment is partially observable, each agent i receives individual observations z ∈ Z based on the observation function Ω^i(s) : S → Z.
We denote the environment trajectory and the action-observation history (AOH) of an agent i as τ_t = s_0, a_0, .... s_t, a_t and τ^i_t = Ω^i(s_0), a^i_0, .... Ω^i(s_t), a^i_t∈ T ≡ (Z × A)^* respectively. A stochastic policy π(a^i | τ^i): T × A → [0, 1] conditions on AOH. The joint policy π has a corresponding action-value function Q^π(s_t, a_t) = 𝔼_s_t+1: ∞, a_t+1:∞[R_t | s_t, a_t], where R_t = ∑_i=0^∞γ^i r_t + i is the discounted return. r_t + i is the reward obtained at time t + i from the reward function R.
To account for communication, similar to <cit.>, at each time step t, an agent i takes an action a_t^i and produces a message m_t^i = Ψ^i(Ω^i(s_t)) after receiving its observation Ω^i(s_t) and the messages sent at the previous time step, m_{t−1}^{−i}, where Ψ^i is agent i's function to produce a message given its observation and m_{t−1}^{−i} refers to messages sent by agents other than agent i. The messages are continuous vectors of dimensionality D.
§ METHODOLOGY
We propose a different perspective on the message space used for communication. At each time step t for a given trajectory τ, a message m_t^i of an agent i can be viewed as an incomplete view of the environment state s_t because m_t^i is a function of s_t as formulated in section <ref>. Naturally, messages of all the N agents are different incomplete perspectives of s_t. To ground decentralized communication, we hypothesize that we could leverage this relationship between messages from similar states to encourage consistency and similarity of the messages space across agents.
Specifically, we propose maximizing the mutual information using contrastive learning which aligns the message space by pushing messages from similar states closer together and messages of different states further apart.
Note that agents see a partial view of the state from their observation, so they will inherently communicate different messages to reflect their partial knowledge. However, aligning their message space enables them to communicate the specific parts of the state they observe in a more mutually-intelligible way.
As a heuristic for state similarity, we consider a window of timesteps within a trajectory to be all similar states i.e. positive samples of each other. To guarantee dissimilar negative samples <cit.>, we use states from other trajectories as negatives. Since each underlying state has multiple positive views (w steps, N agent messages each), we leverage the recent contrastive learning method SupCon <cit.>. We refer to the contrastive SupCon objective across multiple MARL trajectories as Communication Alignment Contrastive Learning (CACL).
Let M be all the messages in a batch of trajectories and M_τ be the messages in trajectory τ. Let m^i_t ∈ M_τ be the message of agent i at time t.
Thus, the positives P for a message m_t^i given a timestep window w are all other messages from the same trajectory τ sent within that window, P(m_t^i) ≡ {m_{t'}^j ∈ M_τ ∖ {m_t^i} : t' ∈ [t − w, t + w]}. Let all other messages from all trajectories in the batch be A(m_t^i) ≡ M ∖ {m_t^i}. Formally, the contrastive loss L_CACL is:
L_CACL = Σ_{m_t^i ∈ M} (−1/|P(m_t^i)|) Σ_{m_p ∈ P(m_t^i)} log [exp(m_t^i · m_p / η) / Σ_{m_a ∈ A(m_t^i)} exp(m_t^i · m_a / η)]
where η ∈ ℝ^+ is a scalar temperature and |P(m_t^i)| is the cardinality of the positive set.
Practically, each agent has a replay buffer that maintains a batch of trajectory data collected from multiple environment instances. It contains messages received during training to compute the CACL loss. We use a timestep window of size 5 for all the environments based on hyperparameter tuning of different window sizes. Following <cit.>, messages are normalized before the loss computation and a low temperature (i.e. η = 0.1) is used as it empirically benefits performance and training stability. The total loss for each agent is a reinforcement learning loss L_RL using the reward to learn a policy (but not message head) and a separate contrastive loss L_CACL to learn just the message head, formulated as follows:
L = L_RL + κ L_CACL
where κ∈ℝ^+ is a hyperparameter to scale the CACL loss.
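For concreteness, the following is a minimal PyTorch sketch of L_CACL under the setup above. It assumes messages are flattened into one batch with trajectory and timestep indices; the tensor layout, function name, and defaults are our own illustrative choices (the defaults mirror the stated hyperparameters: window w = 5, η = 0.1, normalized messages), not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cacl_loss(messages, traj_ids, timesteps, window=5, eta=0.1):
    """L_CACL over a batch. messages: (M, D); traj_ids, timesteps: (M,)."""
    z = F.normalize(messages, dim=1)            # messages are normalized first
    logits = (z @ z.t()) / eta                  # pairwise similarities / temperature
    m = z.size(0)
    self_mask = torch.eye(m, dtype=torch.bool, device=z.device)

    # Positives: other messages from the same trajectory within +/- window steps.
    same_traj = traj_ids.unsqueeze(0) == traj_ids.unsqueeze(1)
    close = (timesteps.unsqueeze(0) - timesteps.unsqueeze(1)).abs() <= window
    pos_mask = same_traj & close & ~self_mask

    # Denominator A(m) runs over all other messages in the batch.
    logits = logits.masked_fill(self_mask, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / n_pos
    return per_anchor[pos_mask.any(dim=1)].mean()

# Total loss per agent, with kappa scaling the CACL term as in Eq. 2:
# loss = rl_loss + kappa * cacl_loss(msgs, traj_ids, timesteps)
```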
§ EXPERIMENTS AND RESULTS
§.§ Experimental Setup
We evaluate our method on three multi-agent environments with communication channels. Given the limited information each agent observes on its own, agents must communicate meaningfully in order to improve task performance.
Traffic-Junction: Proposed by <cit.>, it consists of a 4-way traffic junction with cars entering and leaving the grid. The goal is to avoid collisions when crossing the junction. We use 5 agents with a vision of 1. Although communication is not strictly necessary, it could help solve the task given the agents' limited vision. We evaluate each algorithm with the success rate during evaluation episodes.
All results are averaged over 12 evaluation episodes and over 6 random seeds. More details of the environments and parameters can be found in appendix <ref>.
Predator-Prey: A variant of the classic game <cit.> based on <cit.> where 4 agents (i.e. predators) have the cooperative goal to capture 2 randomly-moving prey by surrounding each prey with more than one predator. We devise a more difficult variation where agents have to entirely surround a prey on all 4 sides to successfully capture it and they cannot see each other in their observations. Thus, agents must communicate their positions and actions in order to coordinate their attacks. We evaluate each algorithm with episodic rewards in evaluation episodes.
Find-Goal: Proposed by <cit.>, agents' goal is to reach the green goal location as fast as possible in a grid environment with obstacles. We use 3 agents, each observes a partial view of the environment centered at its current position. Unlike in <cit.>, we use a field of view of 3 × 3 instead of 5 × 5 to make the problem harder. Each agent receives an individual reward of 1 for reaching the goal and an additional reward of 5 when all of them reach the goal. Hence, it is beneficial for an agent to communicate the goal location once it observes the goal. As in <cit.>, we measure performance using episode length. An episode ends quicker if agents can communicate goal locations to each other more efficiently. Hence, a method performs better if it has shorter episode lengths.
§.§ Training Details
We compare CACL to the state-of-the-art independent, decentralized method, autoencoded communication (AEComm) <cit.>, which grounds communication by reconstructing encoded observations. We also compare to baselines from previous work: independent actor-critic without communication (IAC) and the positive listening loss (PL) <cit.> (see Appendix <ref>). We exclude the positive signalling loss <cit.>, as extending it to continuous messages is non-trivial, but note that AEComm outperforms it in the discrete case <cit.>. We also include DIAL <cit.>, which learns to communicate through differentiable messages that share gradients, and so is decentralized but not independent.
All methods use the same architecture based on the IAC algorithm with n-step returns and asynchronous environments <cit.>. Each agent has an encoder for observations and received messages. For methods with communication, each agent has a communication head that produces messages based on encoded observations. For policy learning, a GRU <cit.> generates a hidden representation from the history of observations and messages. Agents use the hidden state for their policy and value heads, which are 3-layer fully-connected neural networks. We apply spectral normalization <cit.> in the penultimate layer of each head to improve training stability. The architecture is shown in Figure <ref> and the hyperparameters are further described, both in Appendix <ref>.
§.§ Task Performance
We run all methods on the three selected environments and plot results in Figure <ref>.
Our proposed method CACL outperforms all baseline methods in both final performance and learning speed and, consistent with previous results <cit.>, AEComm is the strongest baseline. The largest performance increase from CACL is in Find-Goal, where partial observability is most prominent due to the agents' small field of view, which makes communication more necessary (hence why IAC performs worst). These results show the effectiveness of self-supervised methods for learning communication in the fully-decentralized setting, as both outperform DIAL which, notably, backpropagates gradients through other agents. Furthermore, they demonstrate CACL's contrastive learning as a more powerful alternative to AEComm's autoencoding for coordinating agents with communication.
Improvement on Traffic-Junction is not as significant as on the other environments because communication is less essential for task completion, as shown by the strong performance of IAC. For Predator-Prey, results are clearly better than the baselines but have high variance due to the difficulty of the task. The goal of Predator-Prey is to capture two moving prey, which requires coordinating precisely to surround and attack a prey at the same time; any slight miscoordination leads to a sharp drop in rewards. As another metric of success, we compute the percentage of evaluation episodes that capture no, one, or two prey. Averaging over 6 random seeds, we show results in Figure <ref>. CACL does significantly better on the task, outperforming all baselines and solving the complete task more robustly while failing less frequently. Find-Goal requires the most communication among the environments because its gridworld is the largest and agents must clearly communicate the location of the goal. Here, CACL significantly outperforms the baselines, demonstrating that its advantage grows as the communicative demands of the task increase.
We confirm the effectiveness of CACL with an ablation study of the key design decisions: the sliding window and SupCon. CACL leverages the temporal nature of RL to treat a sliding window of timesteps as positive views of each other. We plot results for a range of window sizes run on Predator-Prey in Figure <ref>. No sliding window (size 1) performs poorly, demonstrating its necessity and showing that the choice of sliding window size is an important hyperparameter. Through the use of SupCon <cit.>, we treat all sent and received messages in the sliding window as positive views of each other, yielding many positives per batch. Creating a batch with just one positive view per message corresponds to SimCLR <cit.> and results in much worse performance (1.36 ± 9.46). We also run Predator-Prey while searching across values of the CACL loss coefficient κ in Figure <ref>. We used the best values (5-step window, κ=0.5) across all environments, demonstrating that the choice of CACL hyperparameters is robust. Overall, we highlight the pitfalls of naively implementing contrastive learning for communication and the clear, important design decisions behind CACL.
§.§ Augmenting CACL with RL
The contrastive loss in the communication head of CACL is very performant without optimizing for reward, so a natural question is whether we can achieve even better results if we also learn the message using reward. To answer this, we add DIAL to both CACL and the next best method, AEComm, and evaluate in the three environments. This is equivalent to backpropagating L_RL from Equation 2 through agents to learn the message head. In this way, both RL and SSL (contrastive or autoencoding) signals are used to learn the message head.
Figure <ref> compares the performance of CACL and AEComm with their DIAL-augmented variants. Our findings are consistent with <cit.>, who find that mixing AEComm and RL objectives is detrimental to performance. We observe that augmenting either AEComm or CACL with DIAL generally degrades performance, except in Find-Goal, where performance is similar but not better. We hypothesize that decentralized DIAL is a complex, high-variance optimization that is difficult to stabilize. DIAL's gradient updates may clash with CACL's and result in neither a useful contrastive representation nor a strong reward-oriented one. It is also possible that CACL's messages would not be improved with reward-oriented gradients. As we show in Section <ref>, CACL already captures useful semantic information that other agents can effectively extract.
§.§ Protocol Symmetry
We hypothesize that CACL's improved performance over the baselines is because it induces a more consistent, mutually-intelligible language that is shared among agents. More specifically, we consider a language's consistency to be how similarly agents communicate (i.e., sending similar messages) when faced with the same observations. A consistent protocol can reduce the optimization complexity since agents only need to learn one protocol for the whole group and it also makes agents more mutually intelligible.
To evaluate consistency, we measure protocol symmetry <cit.>: if an agent swaps observations and trajectory with another agent, it should produce a message similar to the one the other agent produced. We extend this metric from previous work to the continuous, embodied case and measure the pairwise cosine similarities of messages sent by different agents for the same observation. Let 𝒫 denote the set of all unordered pairs {j, k} of the N agents. Given a trajectory τ and a set of time steps {t ∈ T} of τ, protocol symmetry (protocol_sym) is written as:
protocol_sym = (1/|T|) ∑_{t ∈ T} (1/|𝒫|) ∑_{{j,k} ∈ 𝒫} (m_t^j · m_t^k) / (‖m_t^j‖ ‖m_t^k‖)
Therefore, a more consistent protocol has higher symmetry. We swap agents' trajectories and observations and compute this metric over 10 evaluation episodes for 6 random seeds, and show the results in Table <ref>.
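As a concrete reference, the metric can be computed as in the following sketch; the (T, N, D) tensor layout and the function name are our own assumptions.

```python
import itertools
import torch
import torch.nn.functional as F

def protocol_symmetry(messages):
    """messages: (T, N, D); messages[t, i] is agent i's message at step t
    after the observation/trajectory swap (tensor layout is our assumption)."""
    _, n_agents, _ = messages.shape
    # Average cosine similarity over all unordered agent pairs and time steps.
    sims = [F.cosine_similarity(messages[:, j], messages[:, k], dim=-1)
            for j, k in itertools.combinations(range(n_agents), 2)]
    return torch.stack(sims).mean().item()
```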
The self-supervised methods (CACL and AEComm) clearly outperform the others (DIAL and PL), implying that SSL is better for learning consistent representations in decentralized MARL. Furthermore, CACL's protocol is very highly symmetric, clearly outperforming all others. Each AEComm agent autoencodes its own observation without considering the other agents' messages, leading to the formation of multiple protocols between agents. In contrast, CACL induces a common protocol by casting the problem in the multi-view perspective and implicitly aligning agents' messages. The possible correlation between protocol symmetry and overall performance and learning speed further indicates the benefits of learning a common language in the decentralized setting.
§.§ Protocol Representation Probing
To further investigate how informative our protocols are, we propose a suite of qualitative and quantitative representation probing tests based on message clustering and classification, respectively. We perform these tests on the protocols learned in the Find-Goal environment.
Similar to <cit.>, we cluster messages generated from 10 evaluation episodes to qualitatively assess how informative CACL's protocol is. The messages are first compressed to a dimension of 2 using t-SNE <cit.> and then clustered using DBSCAN <cit.>. We look at each cluster's messages and their corresponding observations to extract any patterns and semantics captured. As shown in Figure <ref>, we observe a cluster of messages for observations when the goal is visible and another one when another agent is visible. Two clusters correspond to agents seeing neither the goal nor another agent. Notably, the messages in these clusters can come from different agents in different episodes, demonstrating that agents can indeed communicate symmetrically. The clusters indicate that CACL learns to compress meaningful, global state information in messages, allowing other agents to reasonably learn this semantic information.
To quantitatively evaluate the informativeness of learned protocols, we propose to treat messages as representations and learn a classifier on top of the messages, following work in RL representation learning <cit.>. Since FindGoal is focused on reaching a goal, intuitively, agents should communicate whether they have found the goal and, if so, where other agents should go to reach the goal. Thus, we propose to probe the goal visibility and goal location. The former uses the messages to classify whether the goal is visible in observations or not (i.e. a binary classification). The latter uses messages where the goal is visible in the observations to classify the general location of the goal (i.e., a 5-class classification: Top-Left, Top-Right, Bottom-Left, Bottom-Right and Middle).
Whereas goal visibility is easy for egocentric communication, goal location requires detailed spatial information and communicating the absolute location from their relative position. This tests whether the communication protocol can consider other agents' perspectives and give global information from an egocentric observation. We use 30 evaluation episodes per method to generate messages for our experiments but different methods may have different numbers of acceptable messages for our probing task (e.g. a limited number of messages where the goal is visible for predicting goal location). To ensure fair comparison, we choose an equal number of samples per class (i.e., positive/negative, 5-class location) for all methods and use a 70%/30% random split for training and testing. We use a 2-layer fully-connected neural network to test each method, as this corresponds to the same network that agents use to encode each others' messages as part of their observations.
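To make the probing setup concrete, a sketch of such a 2-layer probe is given below; the message dimension D = 4 follows the paper's setup, while treating the hidden width of 32 used elsewhere in the architecture as the probe width is our assumption.

```python
import torch.nn as nn

def make_probe(msg_dim=4, hidden=32, n_classes=5):
    """2-layer probe trained on frozen messages; n_classes is 2 for the
    goal-visibility test and 5 for the goal-location test."""
    return nn.Sequential(
        nn.Linear(msg_dim, hidden),
        nn.ReLU(),
        nn.Linear(hidden, n_classes),
    )
```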
Table <ref> shows the classification results for the two probing tests. Goal visibility is an easier task, and all methods' messages can be effectively used to determine it. In the more difficult goal location task, all methods perform above chance (20%) but CACL's protocol significantly outperforms the baselines. Contrastive learning across different agents' messages can enable CACL to learn a more global understanding of location from an egocentric viewpoint. By encoding the goal's spatial information, CACL agents are more likely able to move directly towards it, reducing episode length. If other methods simply communicate that a goal is found, agents know to alter their search but are not as precise in direction. This explains why AEComm, PL, and DIAL perform better than IAC but worse than CACL, which also learns much quicker, as shown in Figure <ref>. For completeness, we also provide similar classification results with a one-layer (linear) probe in Appendix <ref>.
§ LIMITATIONS
Our work investigates fully-cooperative environments, but learning to communicate in less cooperative settings, such as those with adversaries <cit.>, is a harder optimization problem. CACL would likely need stronger regularization to be effective. Furthermore, our empirical testing has revealed that SSL objectives are ineffective when combined with reward-oriented gradients, as demonstrated in Section <ref>. Although this phenomenon is well known <cit.>, it is still not fully understood, and future work should aim to combine the two objectives. Finally, this work evaluates agents that were trained together. A more challenging frontier is zero-shot communication, an extension of zero-shot cooperation <cit.>, in which agents must communicate effectively with novel partners unseen during training. In Appendix <ref>, we show how existing methods perform poorly in this setting and leave this challenging setup to future work.
§ CONCLUSION AND FUTURE WORK
This work introduces an alternative perspective for learning to communicate in decentralized MARL based on the relationship between sent and received messages within a trajectory. Drawing inspiration from multi-view learning, we ground communication using contrastive learning by considering agents' messages to be encoded views of the same state. We empirically show that our method leads to better performance through a more consistent, common language and learns to communicate more global state information. We believe this work solidifies contrastive learning as an effective perspective for learning to communicate and hope it invigorates research into contrastive methods for communication with a focus on consistency. Furthermore, by establishing the connection between multi-view SSL, which has traditionally focused on images, and communication in MARL, we hope to encourage more cross-domain research. Finally, we see contrastive learning as a potential method for simulating human language evolution, and hope to inspire research in this direction.
§ APPENDIX
§.§ Environment Details
Figure <ref> provides a visual illustration of the environments used.
§.§.§ Predator-Prey
We modify the Predator-Prey implementation of <cit.>. Our Predator-Prey has a higher communication and coordination requirement than the original Predator-Prey environment. Specifically, for a prey to be captured, it has to be entirely surrounded (i.e., the prey cannot move to another grid position with any action).
Here, we use a 7x7 gridworld. In its observation, each agent can only see a prey if it is within the field of view (3x3) and cannot see where other agents are. A shared reward of 10 is given for a successful capture and a penalty of -0.5 is given for a failed attempt. A -0.01 step penalty is also applied per step. Each agent has the actions LEFT, RIGHT, UP, DOWN, and NO-OP. The prey has a movement probability vector of [0.175, 0.175, 0.175, 0.175, 0.3], with each value corresponding to the probability of the respective action being taken.
All algorithms are trained for 30 million environment steps with a maximum of 200 steps per episode.
§.§.§ Find-Goal
We use the Find-Goal environment implementation provided by <cit.>. The agents have the goal to find where the goal is in a 15x15 grid world with obstacles.
Unlike in <cit.>, each agent has a 3x3 field of view (instead of 7x7) to make the task more difficult. Each agent receives a reward of 1 for reaching a goal and an additional reward of 5 if all agents reach the goal. We use a step penalty of -0.01 and an obstacle density of 0.15.
All algorithms are trained for 40 million environment steps with a maximum of 512 steps per episode.
§.§.§ Traffic-Junction
We use the Traffic-Junction environment implementation provided by <cit.>. The gridworld is 8x8 with 1 traffic junction. The rate of cars being added has a minimum and maximum of 0.1 and 0.3. We use the easy version with two arrival points and 5 agents. Agents are heavily penalized if a collision happens and have only two actions, namely gas and brake.
All algorithms are trained for 20 million environment steps with a maximum of 20 steps per episode.
§.§ Architecture and Hyperparameters
Figure <ref> illustrates the components of the architecture used in this work, similar to <cit.>. A message head is only used for algorithms with communication, namely CACL, AEComm, PL and DIAL. The Grounding Module refers to mechanisms to ground the messages produced by the message head, used in CACL and AEComm. Unless specified otherwise, we fix all hidden layers to be a size of 32.
We experimented with using the output of the GRU, or hidden state, to condition the message head. Empirically, we found that directly conditioning on the observation encoding, as in <cit.>, led to more stable learning dynamics.
The observation encoder outputs values of size 32. For Predator-Prey and Traffic-Junction, a one-layer fully-connected neural network is used as the observation encoder. For Find-Goal, as in <cit.>, we use a two-layer convolutional neural network followed by a 3-layer fully-connected neural network.
The message encoder outputs values of size 8 in Predator-Prey and Find-Goal, with one hidden layer, and values of size 16 in Traffic-Junction, with two hidden layers. These configurations were selected based on the best performance of the baseline communication learning algorithm, DIAL. Received messages are concatenated before being passed to the message encoder. All methods with communication produce messages of length 4 (D = 4) with a sigmoid activation. All models are trained with the Adam optimizer <cit.>.
Table <ref> lists out the hyperparameters used for all the methods.
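A compact sketch of this per-agent architecture is given below (the fully-connected observation-encoder variant used for Predator-Prey and Traffic-Junction); the module sizes follow the description above, while the exact layer placements and names are our own assumptions.

```python
import torch
import torch.nn as nn

def head(hidden, out_dim):
    # 3-layer head; spectral normalization on the penultimate layer.
    return nn.Sequential(
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.utils.spectral_norm(nn.Linear(hidden, hidden)), nn.ReLU(),
        nn.Linear(hidden, out_dim))

class CommAgent(nn.Module):
    """One agent: observation/message encoders -> GRU -> policy and value
    heads, with the message head conditioned on the observation encoding
    rather than the GRU state."""
    def __init__(self, obs_dim, n_actions, n_agents,
                 msg_dim=4, hidden=32, msg_enc_dim=8):
        super().__init__()
        self.obs_enc = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.msg_enc = nn.Sequential(
            nn.Linear(msg_dim * (n_agents - 1), msg_enc_dim), nn.ReLU())
        self.gru = nn.GRUCell(hidden + msg_enc_dim, hidden)
        self.msg_head = nn.Sequential(nn.Linear(hidden, msg_dim), nn.Sigmoid())
        self.policy_head = head(hidden, n_actions)
        self.value_head = head(hidden, 1)

    def forward(self, obs, inbound_msgs, h):
        e = self.obs_enc(obs)
        h = self.gru(torch.cat([e, self.msg_enc(inbound_msgs)], dim=-1), h)
        return self.policy_head(h), self.value_head(h), self.msg_head(e), h
```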
§.§ Predator-Prey Capture Rate
Table <ref> shows each method's success rate in capturing prey in Predator-Prey. CACL outperforms the baselines by capturing the most prey across the evaluation episodes.
§.§ Positive Listening
This section describes the loss function we implemented for positive listening, based on <cit.>. Given two policies π^i and π̄^i of agent i, where π̄^i is the policy evaluated with messages zeroed out in the observations, and a trajectory τ of length T, the positive listening loss is written as:
L_PL = -(1/|T|) ∑_{j=1}^{|T|} ∑_{a ∈ A^i} [ |π^i(a | τ_j^i) - π̄^i(a | τ_j^i)| + π^i(a | τ_j^i) log π̄^i(a | τ_j^i) ]
where, in the inner summation, the first term is the L1 norm and the second term is the cross-entropy loss.
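A direct transcription of this loss is sketched below; which of the two policies carries the bar in the cross-entropy term is our disambiguation of the original notation.

```python
import torch

def positive_listening_loss(pi, pi_masked, eps=1e-8):
    """pi, pi_masked: (T, |A|) action distributions with and without
    (zeroed-out) messages; mirrors the equation above term by term."""
    l1 = (pi - pi_masked).abs().sum(dim=1)
    ce_term = (pi * torch.log(pi_masked + eps)).sum(dim=1)
    return -(l1 + ce_term).mean()
```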
§.§ Protocol Representation Probing: 1-Layer
Table <ref> shows the same results for the two probing tests in Section <ref>, except that here we use a 1-layer neural network instead of 2 layers. We observe significant dips in performance across all methods. In particular, CACL becomes worse than the baselines on the easier Goal Visibility test. However, CACL remains superior on the more difficult Goal Location test by an even bigger margin than in the results of Table <ref>.
§.§ Zero-shot Cross-play
An advanced form of coordination is working with partners you have not seen during training <cit.>. Previous work has focused on coordination through actions <cit.> or pre-test grounding with a common dataset <cit.> but to our knowledge, no previous work has succeeded in learning a linguistic communication protocol that is robust to zero-shot partners.
To assess this advanced robustness, we take trained agents from different methods and random seeds and evaluate them with each other (i.e., zero-shot cross-play) in Predator-Prey and Find-Goal. Given two communication learning methods, m_1 and m_2, we sample two agents from each method for Predator-Prey; for Find-Goal, we average over sampling two agents from one method and one agent from the other, and vice versa. For intra-method cross-play, m_1 = m_2, we evaluate agents that were trained with the same method but from different random seeds, so they have not been trained with each other. For inter-method cross-play, m_1 ≠ m_2, we sample agents from two different methods and pair them with each other. Each pairing is evaluated for 10 random seeds, each with 10 evaluation episodes. Given that agents are trained in self-play <cit.> without regard for cross-play, we expect severe performance dips.
We show mean and standard deviation across random seeds for Predator-Prey and Find-Goal in Tables <ref> and <ref>, respectively. As expected, all pairings take a significant dip in performance when compared with the main results. Inter-method cross-play performance is particularly bad across all algorithms. However, notably, CACL outperforms other methods in intra-method cross-play, indicating that the protocols learned by CACL are generally more robust even across random seeds. In general, zero-shot linguistic communication is incredibly difficult and our results are far from optimal. Still, CACL shows promise and demonstrates that contrastive SSL methods can lead to better zero-shot communication and coordination.
§.§ Broader Impact
More multi-agent learning systems will be deployed in the real world as further progress is made in fields of multi-agent learning like MARL. We expect communication to play an essential role in these systems, given that real-world problems are inherently complex and, in most cases, partially observable. Our focus on the decentralized communication setting contributes to the capability of learning more effective and consistent communication protocols. Having more consistent protocols improves mutual intelligibility and paves the way to multi-agent systems in which agents can communicate with unseen agents or even humans.
On the other hand, the increasing adoption of such multi-agent learning systems could exacerbate certain risks. For instance, it could increase unemployment on a significant scale if systems operated by many humans, such as warehouses, are replaced with multi-robot learning systems. It could also contribute to more advanced automated weaponry. In particular, given that our method explicitly considers messages sent by other agents in its protocol learning algorithm, it could invite adversarial attacks that lead to harmful behaviors and miscommunication, especially in mission-critical systems.
|
http://arxiv.org/abs/2307.00923v1
|
20230703104931
|
Achieving Stable Training of Reinforcement Learning Agents in Bimodal Environments through Batch Learning
|
[
"E. Hurwitz",
"N. Peace",
"G. Cevora"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
Bimodal, stochastic environments present a challenge to typical Reinforcement Learning problems. This problem is surprisingly common in real-world applications, being particularly applicable to pricing problems. In this paper we present a novel learning approach to the tabular Q-learning algorithm, tailored to tackling these specific challenges by using batch updates. A simulated pricing problem is used as a testbed to compare a typically updated agent with a batch learning agent. The batch learning agents are shown to be both more effective than the typically-trained agents and more resilient to the fluctuations of a large stochastic environment. This work has significant potential to enable practical, industrial deployment of Reinforcement Learning in the context of pricing and beyond.
Much of the work and literature within the field of Reinforcement Learning [RL] focuses on problem domains with well-regulated rewards <cit.> <cit.>. These rewards fall into expected ranges with similar orders of magnitude. This ignores the common problem of failure states, in which a very different reward, typically zero, is received that is very dissimilar to the reward received in an episode resulting in a success state <cit.>. This results in a bimodal distribution of rewards, as shown in Figure <ref>. While typical methods of applying RL will have some degree of success, the bimodal nature of the reward distribution disrupts the learning capability of the agent considerably. This paper explores a novel batch-learning approach to the problem in order to circumvent this disrupted learning pattern, smoothing out the learning process and thus facilitating better performance.
A commercial pricing problem is an apt example to support our discussion: decision makers commonly face difficult decisions about what discount, if any, to apply to their products to generate optimal revenue <cit.>. The bimodal distribution of rewards emerges as a result of binary consumer behaviour regardless of the discount applied: consumers either purchase or do not. The resulting reward distribution therefore has two modes: 1) the purchases at varying discount levels, and 2) zero, when the customer does not purchase. Using conventional RL techniques, this results in an unstable and unpredictable training process.
Our investigation was performed in an experimental manner: we first created a parameterised model environment that produces the problematic behaviour and then created agents that use RL to solve the environment across its parameterisations, both with and without the proposed batch learning innovation. These results are then contrasted in order to arrive at robust conclusions as to the effectiveness of the batch learning innovation and the conditions under which it is most impactful.
§ METHODS
In line with the commercial pricing example, we construct a toy problem to demonstrate the efficiency of batch learning. The experiments allow an agent, offering discounts and learning in an RL fashion, to interact with a simulated environment. A number of variants of the environment and agent are explored.
§.§ Environment
Pricing problems are a simple yet relevant context in which to investigate bimodal rewards, by virtue of only one decision stage occurring in the environment: a purchase or not. The environment is set up such that it contains a customer who has a base probability of purchasing a given product, and the customer's probability of purchase increases or decreases in a nonlinear manner depending on the discount offered.
The bimodal distribution captures the probability of no purchase (failure), corresponding to a return of $0, while also providing approximately normally distributed values representing the returns in the instance of a purchase (success), as shown in Figure <ref>. Two distinct problems are captured in this broader RL problem domain. The first is to determine whether the result will be a success or a failure. The second is to map the degree of success if success occurs.
These two problems can result in significant challenges for a conventional RL solution, as the failure modes skew the q-values recorded in the q-table, particularly in instances where early training results in an outlier volume of sequential failures.
The agent interacts with the environment by offering discounts d from the base price π to individual customers, thus providing reward R = π(1-d) to the agent upon a purchase.
Individual customers c vary in their base probability of considering a purchase at all, P(β_c), but if they do consider a purchase, the probability of purchase γ increases as a function of the discount d according to P(γ | β_c) = 1 - e^{-ζ d}, where ζ > 0 is a constant.
This results in an overall probability of purchase τ for a given customer, P(τ_c) = P(β_c) P(γ | β_c, d), and an expected reward from offering a discount to a customer of 𝔼(R_c) = P(τ_c) R(π, d).
The essential quantities are summarised in figure <ref> for the sake of clarity.
While P(γ) increases with the discount, the revenue resulting from a purchase decreases as the discount increases. This results in a non-monotonic relationship between discount and expected revenue 𝔼(R_c|d), shown in Figure <ref>. The optimum discount is therefore the value that maximises 𝔼(R_c). This value can be found analytically for the environment defined here, but is unknown in the real world. The task of the RL agents tested in this environment is to maximise this function without knowledge of the environment.
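For illustration, the following sketch evaluates this expected-revenue curve and locates its optimum on a grid; the base price and ζ are unspecified in the text, so the values below are our own illustrative choices (ζ chosen so that the optimum lands near d ≈ 0.1, matching the simplified experiment described later).

```python
import numpy as np

BASE_PRICE = 100.0  # pi; illustrative, not stated in the text
ZETA = 35.0         # illustrative; places the optimum discount near 0.1

def expected_revenue(d, p_beta):
    """E(R_c | d) = P(beta_c) * (1 - exp(-zeta * d)) * pi * (1 - d)."""
    return p_beta * (1.0 - np.exp(-ZETA * d)) * BASE_PRICE * (1.0 - d)

# Grid search as a stand-in for the analytical optimum:
grid = np.linspace(0.0, 0.9, 91)
d_opt = grid[np.argmax(expected_revenue(grid, p_beta=0.6))]  # ~0.1
```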
While the 𝔼(R_c|d,β) function shown in Figure <ref> is smooth, and with no local minima, the optimisation task is not trivial due to the bimodal nature of the rewards. To explore the characteristics of the task we set up two different versions of the environment:
* The initial consideration of purchase β varying in probability between 0.2 and 0.8 in 0.1 steps, called the sparse state-space. Resulting in P(β)∈[0.2; 0.3; 0.4; 0.5; 0.6; 0.7; 0.8].
* The initial consideration of purchase β varying in probability between 0.2 and 0.785 in 0.015 steps, called the granular state-space. Resulting in 40 different values of P(β).
§.§ Agent
The agent is required to make a single decision, namely what discount to offer the customer, at a single time point. To accommodate a standard tabular Q-learning approach, this action-space is discretised. Two versions of the action-space discretisation are explored, each representing a different level of granularity:
* The agent may choose from 10 discount levels, the sparse action-space: d∈[0; 0.1; 0.2; 0.3; 0.4; 0.5; 0.6; 0.7; 0.8; 0.9].
* The agent may choose from 81 discount levels in 0.01 intervals between 0 and 0.79, the granular action-space.
The agent learns using standard tabular Q-learning with the Bellman equation <cit.>, making decisions according to an ϵ-greedy policy <cit.>. The learning rate used is 0.1, and ϵ = 0.9.
§.§ Experimental setup
To demonstrate the benefit of batch learning for the task defined here, the agent either updates the Q-table after each observation, as is common in online RL, or alternatively stores a user-defined number of observations (1000 in the experiments presented here) in memory before updating the Q-table with the average rewards over the whole batch. A factorial design is used to explore all four combinations of state- and action-spaces.
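A minimal sketch of the two update rules is given below; the exact batching scheme (one update per visited state-action cell, towards the batch-mean reward) is our reading of the description above, not the authors' code.

```python
import numpy as np
from collections import defaultdict

LR = 0.1  # learning rate from the text

def standard_update(q, s, a, r):
    """Per-observation update; the single decision stage collapses the
    Bellman target to the immediate reward."""
    q[s, a] += LR * (r - q[s, a])

def batch_update(q, memory):
    """Update each visited (state, action) cell once, towards the mean
    reward it received over the batch (e.g. 1000 stored observations)."""
    rewards = defaultdict(list)
    for s, a, r in memory:
        rewards[(s, a)].append(r)
    for (s, a), rs in rewards.items():
        q[s, a] += LR * (np.mean(rs) - q[s, a])
```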
All experiments are run for 100,000 iterations allowing for learning until convergence. Convergence is defined as the agent achieving (on a rolling mean basis) 95% of the optimum performance under perfect knowledge of the environment. The optimum was calculated analytically for sparse state-space, but approximated using MCMC for the granular state-space.
§.§ Illustrative results from a simplified experiment
In order to visually demonstrate the shortfalls and merits of single-update and batch-update learning, an additional experiment is run in a more simplistic environment. For this purpose, the base purchase probability is fixed at 0.6, making a 0.1 discount the optimum action for the agent.
As can be seen in Figure <ref>, both standard and batch Q-learning result in a gradual convergence towards the analytically calculated expectation, which is inaccessible to the agent, and then oscillate around that value in response to the stochasticity of the rewards. Compared to the standard approach, batch learning offers slower convergence but is less sensitive to the stochastic oscillations, with a higher overall reward achieved, making it a much more stable and usable tool to deploy. Convergence is clearly slower, but this is more than made up for by better converged performance, which comes out to a 2.17% overall improvement in reward over the time period, as well as more stable converged performance.
§ RESULTS
Four unique experimental setups, shown in Figure <ref>, were tested in eight experiments, with batch learning outperforming the standard approach in every single case as measured by the total reward accumulated over the experiments. The final reward at the fully-trained state was improved by batch learning in all cases except for the largest state-space.
§ CONCLUSIONS
This paper proposes a batch training method for Reinforcement Learning [RL] agents. This update method has been shown to be superior to standard methods in environments with bimodal rewards, such as offering discounts to customers with differing price sensitivity. The batch training method not only produces better performance in these challenging environments, but also produces more decisive agents acting on learned information. Convergence time is adversely affected, as is intuitive, which is the cost of a more stable and optimal converged state when operating in a bimodal environment.
Stable performance and better convergence in real-world scenarios like this is an important step on the path to industrialize RL.
plain
|
http://arxiv.org/abs/2307.03251v1
|
20230706184916
|
Numerically Unveiling Hidden Chaotic Dynamics in Nonlinear Differential Equations with Riemann-Liouville, Caputo-Fabrizio, and Atangana-Baleanu Fractional Derivatives
|
[
"Shahariar Ryehan"
] |
math.DS
|
[
"math.DS",
"34H10, 65P20",
"G.1.8"
] |
|
http://arxiv.org/abs/2307.01958v1
|
20230704234148
|
Metallurgy, superconductivity, and hardness of a new high-entropy alloy superconductor Ti-Hf-Nb-Ta-Re
|
[
"Takuma Hattori",
"Yuto Watanabe",
"Terukazu Nishizaki",
"Koki Hiraoka",
"Masato Kakihara",
"Kazuhisa Hoshi",
"Yoshikazu Mizuguchi",
"Jiro Kitagawa"
] |
cond-mat.supr-con
|
[
"cond-mat.supr-con"
] |
Takuma Hattori^1, Yuto Watanabe^2, Terukazu Nishizaki^3, Koki Hiraoka^1, Masato Kakihara^1, Kazuhisa Hoshi^2, Yoshikazu Mizuguchi^2, Jiro Kitagawa^1
^1 Department of Electrical Engineering, Faculty of Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-higashi, Higashi-ku, Fukuoka 811-0295, Japan
^2 Department of Physics, Tokyo Metropolitan University, Hachioji 192-0397, Japan
^3 Department of Electrical Engineering, Faculty of Science and Engineering, Kyushu Sangyo University, 2-3-1 Matsukadai, Higashi-ku, Fukuoka 813-8503, Japan
We explored quinary body-centered cubic (bcc) high-entropy alloy (HEA) superconductors with valence electron concentrations (VECs) ranging from 4.6 to 5.0, a domain that has received limited attention in prior research.
Our search has led to the discovery of new bcc Ti-Hf-Nb-Ta-Re superconducting alloys, which exhibit an interesting phenomenon of phase segregation into two bcc phases with slightly different chemical compositions, as the VEC increases.
The enthalpy of formation of each binary compound explains the phase segregation.
All the alloys investigated were categorized as type-II superconductors, with superconducting critical temperatures (T_c) ranging from 3.25 K to 4.38 K.
We measured the Vickers microhardness, which positively correlated with the Debye temperature, and compared it with the hardness values of other bcc HEA superconductors.
Our results indicate that T_c systematically decreases with an increase in hardness beyond a threshold of approximately 350 HV.
Additionally, we plotted T_c vs. VEC for representative quinary bcc HEAs. The plot revealed the asymmetric VEC dependence.
The correlation between the hardness and T_c, as well as the asymmetric dependence of T_c on VEC can be attributed to the simultaneous effects of the electronic density of states at the Fermi level and electron-phonon coupling under the uncertainty principle, especially in the higher VEC region.
High-entropy alloys Superconductivity Hardness Valence electron concentration
§ INTRODUCTION
A high-entropy alloy (HEA) is a type of alloy that consists of multiple elements as its primary constituents, and it differs from conventional alloys that typically have a single principal element with the minor additive elements.
The high-entropy state of HEAs offers outstanding thermal stability owing to an increased configurational entropy<cit.>.
HEAs have received significant attention owing to their diverse functionalities, including high fracture toughness, energy storage, thermoelectricity, soft ferromagnetism, high-temperature structural stability, and biocompatibility<cit.>.
High-entropy states have been extensively studied in various materials such as oxides, chalcogenides, borides, carbides, and nitrides<cit.>.
The formation of solid solutions in typical HEAs with simple body-centered cubic (bcc) and face-centered cubic (fcc) structures significantly depends on the valence electron concentration (VEC), with VECs ranging from 5.0 to 6.87 for a single bcc phase and exceeding 8.0 for a single fcc phase<cit.>.
In addition, VECs occasionally affect the physical properties of HEAs. For instance, the Vickers microhardness in bcc and fcc HEAs exhibits a universal relationship with the VEC, forming a broad peak at a VEC of ∼ 6.8<cit.>.
Furthermore, VEC plays a crucial role in understanding the superconductivity in bcc HEA superconductors, which is a focal point of investigation in the present study.
Superconductivity is an essential property of HEAs.
The bcc alloy Ta_34Nb_33Hf_8Zr_14Ti_11 was the first HEA superconductor reported in 2014<cit.>.
Since then, research on HEA superconductors has become a popular topic<cit.>.
HEAs exhibit several interesting features.
The robustness of superconductivity at extremely high pressures has been well established.
In (TaNb)_0.67(HfZrTi)_0.33, the superconducting critical temperature T_c of 8 K remains almost unchanged even under an extremely high pressure of 180 GPa<cit.>.
Additionally, the robustness of the superconductivity has been observed in the presence of magnetic elements <cit.>.
Enhancement of the diamagnetic signal in the high-entropy state was confirmed in BiS_2-based superconductors<cit.>.
Recently, Ta-Nb-Hf-Zr-Ti films have been reported to exhibit critical current densities exceeding those of high-field superconducting magnets <cit.>.
Moreover, these films exhibit extremely robust superconductivity under ion irradiation<cit.>.
HEA superconductivity is currently being investigated across various structural types, including bcc<cit.>, hexagonal close-packed (hcp)<cit.>, CsCl-type<cit.>, A15<cit.>, NaCl-type<cit.>, α (or β)-Mn-type<cit.>, σ-phase type<cit.>, CuAl_2-type<cit.>, W_5Si_3-type<cit.>, BiS_2-based, and YBCO-based<cit.> structures.
Most HEAs exhibit conventional s-wave BCS-type superconducting properties.
In bcc Nb-Re-Hf-Zr-Ti and Ta-Nb-Zr-Hf-Ti alloys, the electronic mean free path is considerably smaller than the BCS coherence lengths, categorizing these HEAs within the dirty limit regime, owing to significant atomic disorder<cit.>.
The extent of the atomic disorder can be quantified using configurational entropy.
Numerous researchers have explored the influence of configurational entropy on the superconducting properties of HEAs; however, systematic trends have not yet been established.
Recently, it was proposed that T_c exhibits a negative correlation with the Debye temperature in several bcc HEAs under a fixed configurational entropy <cit.>.
This behavior can be attributed to the weakened electron-phonon interactions in HEAs with higher Debye temperatures, arising from the uncertainty principle <cit.>.
Superconducting Cr_5+xMo_35-xW_12Re_35Ru_13C_20 has a non-centrosymmetric β-Mn-type structure<cit.>.
This indicates a significantly enhanced ratio between the upper critical field and Pauli limit in comparison to other β-Mn-type superconductors<cit.>.
CuAl_2-type TrZr_2 (Tr = Fe, Co, Ni, Cu, Rh, and Ir) HEA superconductors exhibit an intriguing phenomenon characterized by anomalous broadening of the specific heat jumps at T_c. This behavior is associated with the microscopic inhomogeneity in the Cooper pair formation<cit.>.
YBCO-type HEAs fall into the category of non-BCS superconductors.
This structure is characterized by a layered arrangement, and the high-entropy effect at rare-earth sites has been studied in many YBCO HEAs.
The rare-earth sites are located between the superconducting Cu-O layers.
An increase in the configurational entropy at rare-earth sites does not significantly affect T_c<cit.>.
The bcc HEA superconductors have been extensively studied, with the dependence of T_c on VEC being typically examined<cit.>.
VEC serves as the reflection of the total density of states of the valence band at the Fermi level, representing a vital factor in determining the T_c of BCS superconductors.
A strong correlation between the VEC and T_c has been observed in binary and ternary superconducting transition-metal alloys, commonly referred to as the Matthias rule<cit.>.
When plotting T_c against VEC in binary and ternary superconducting alloys, T_c exhibits broad peaks at distinct VEC values of approximately 4.6 and 6.6.
Consequently, we utilized VEC as a design parameter to explore new quinary bcc HEAs.
For quinary bcc HEAs, the T_c of typical superconducting alloys increases as the VEC increases from 4.1 to ∼ 4.6–4.7. This behavior is reminiscent of the Matthias rule observed in conventional binary or ternary transition metal alloys.
The T_c vs. VEC plot for binary or ternary superconducting alloys exhibits a broad peak at VEC of ∼ 4.6–4.7.
Thus, studying bcc HEA superconductors with VEC larger than 4.6 holds great interest, as this region remains relatively unexplored.
The systematic exploration of bcc HEA superconductors with VECs exceeding 4.6 is still in its early stages, with the only bcc HEA superconductors recently reported being Nb-Ta-Mo-Hf-W, Ti-Zr-Nb-Ta-W, and Ti-Zr-Nb-Ta-V superconductors (VEC:4.8 ∼ 5.11)<cit.>.
Transition metal-based quinary bcc HEA superconductors have been reported in various systems, including Ta-Nb-Hf-Zr-Ti, Nb-V-Hf-Zr-Ti, Ta-V-Hf-Zr-Ti, Nb-Re-Zr-Hf-Ti, Hf-Nb-Ti-V-Zr, Hf-Nb-Ta-Ti-V, Hf-Mo-Nb-Ti-Zr, Nb-Ta-Mo-Hf-W, Ti-Zr-Nb-Ta-W, and Ti-Zr-Nb-Ta-V<cit.>.
Thus, the VEC values of the constituent elements in these systems appear to be limited to 4, 5, 6, or 7.
To conveniently achieve an average VEC greater than 4.6, we selected Re (VEC=7) as the constituent element.
The remaining four elements–Ti, Hf, Nb, and Ta–were employed because of their frequent utilization in bcc HEA superconductors, as demonstrated earlier.
Another crucial design parameter is the δ-parameter, which quantifies the extent of atomic size differences among the constituent elements<cit.>.
The δ-parameter can be calculated using the equation,
δ=100×√(∑^5_i=1c_i(1-r_i/r̅)^2)
Here, c_i and r_i represent the atomic fraction and radius of the ith element, respectively, and r̅ denotes the composition-weighted average atomic radius.
In bcc HEA superconductors, δ values range from 3.8–10.7<cit.>.
We confined the atomic fraction of each element between 5 % and 35 %, which aligns with one of the definitions of HEA<cit.>.
Within these restrictions, we determined the alloy compositions such that the VEC ranges from 4.6 to 5.0 and the δ value is smaller than 5.0.
In this study, we searched for quinary bcc HEA superconductors with VEC ranging from 4.6 to 5.0 and discovered bcc Ti-Hf-Nb-Ta-Re superconducting alloys.
These alloys are interesting systems in which phase segregation into two bcc phases occurs with increasing VEC.
Metallurgical analysis was conducted based on metallographic examination.
The fundamental superconducting properties were assessed by measuring the electrical resistivity, magnetization, and specific heat.
The Vickers microhardness, which contains phonon information, was also measured, as the BCS theory suggests that the electron-phonon interaction is one of the decisive factors affecting T_c.
We discuss the dependence of T_c on the hardness (or VEC) of several representative bcc HEAs, while considering the effects of the electronic density of states at the Fermi level and electron-phonon coupling under the uncertainty principle.
§ MATERIALS AND METHODS
Polycrystalline samples were prepared using the constituent elements, Ti (99.9 %), Hf (98 %), Nb (99.9 %), Ta (99.9 %), and Re (99.99 %), in a homemade arc furnace under an Ar atmosphere, as shown in Table <ref>.
Each sample underwent multiple flips and remelting to ensure homogeneity, and the samples were finally quenched on a water-chilled Cu hearth.
The samples received neither heat treatment nor deformation such as rolling.
The VEC was calculated using the following equation:
VEC=∑^5_i=1c_iVEC_i
Here, VEC_i is the VEC value associated with the ith element.
The VEC_i values were four for Ti and Hf, five for Nb and Ta, and seven for Re.
The δ-parameter values were as follows: 4.46 for VEC=4.6, 4.24 for VEC=4.7, 3.91 for VEC=4.8, 3.42 for VEC=4.9, and 2.7 for VEC=5.0.
For the calculation, atomic radii data were sourced from the literature <cit.>.
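As a worked illustration of the two design parameters, the sketch below evaluates the VEC and δ definitions above for an arbitrary composition; the Ti, Hf, Nb, and Ta radii are the values quoted later in the text, while the Re radius is our assumption from standard tables.

```python
import numpy as np

# Radii (angstrom) for Ti, Hf, Nb, Ta as quoted later in the text;
# the Re radius (~1.375 angstrom) is our assumption from standard tables.
RADII = {'Ti': 1.4615, 'Hf': 1.5775, 'Nb': 1.429, 'Ta': 1.43, 'Re': 1.375}
VEC_I = {'Ti': 4, 'Hf': 4, 'Nb': 5, 'Ta': 5, 'Re': 7}

def vec_and_delta(composition):
    """Return (VEC, delta) for a {element: atomic fraction} composition."""
    c = np.array([composition[e] for e in composition])
    r = np.array([RADII[e] for e in composition])
    v = np.array([VEC_I[e] for e in composition])
    r_bar = float(np.sum(c * r))
    delta = 100.0 * np.sqrt(np.sum(c * (1.0 - r / r_bar) ** 2))
    return float(np.sum(c * v)), delta
```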
Room-temperature X-ray diffraction (XRD) patterns were obtained using a Shimadzu XRD-7000L X-ray diffractometer with Cu-Kα radiation.
Scanning electron microscopy (SEM) images were obtained using a JEOL JSM-7100F field-emission scanning electron microscope (FE-SEM).
An energy-dispersive X-ray (EDX) spectrometer equipped with FE-SEM was employed to evaluate the chemical composition in each sample area.
The EDX spectrometer was also used to obtain elemental mappings.
A quantum-design MPMS3 SQUID magnetometer was used to measure the temperature dependence of the dc magnetization M(T) and isothermal magnetization curve.
The electrical resistivity ρ was measured using a four-probe method with a quantum-design PPMS apparatus.
The specific heat was also measured using PPMS equipment.
The Vickers microhardness was measured using a Shimadzu HMV-2T microhardness tester under an applied load of 2.94 N, with a 10 s holding time under a diamond indenter.
§ RESULTS AND DISCUSSION
§.§ Structural and Metallographic Characterizations
Figure <ref>(a) shows the XRD patterns of the prepared alloys, which can be readily indexed to the bcc structure with no conspicuous impurity phases, as indicated by the Miller indices in the figure.
The lattice parameters obtained by the least-squares method are listed in Table <ref> and plotted against VEC in Fig.<ref>(b).
The systematic decrease in the lattice parameter with increasing VEC can be attributed to differences in the atomic radii of the constituent elements.
As the VEC increases from 4.6 to 5.0, the Ti+Hf concentration decreases while the Nb and Ta atomic fractions increase.
The atomic radii of Ti, Hf, Nb, and Ta are 1.4615, 1.5775, 1.429, and 1.43 Å, respectively<cit.>.
Hence, the substitution of Ti+Hf with Nb or Ta results in a reduction in the lattice parameter.
Figures <ref>(a)–(e) show the SEM images and corresponding elemental maps for all alloys.
The SEM image of the sample with a VEC of 4.6 exhibits no noticeable secondary phase, and each elemental mapping demonstrates a homogeneous distribution, as shown in Fig.<ref>(a).
The chemical composition determined using EDX agreed with the initial composition (Table <ref>).
As discussed later, the sharp superconductive transition in the specific heat strongly supports the single-phase nature of the sample with a VEC of 4.6.
Increasing the VEC induces phase segregation, which is evident in samples with VEC = 4.9 and 5.0 (Figs.<ref>(d) and (e)).
In each sample, both the SEM image and the elemental mapping reveal phase segregation. The bright phase in the SEM image of the sample with a VEC of 4.9 is rich in Nb and Ta and poor in Ti and Hf, whereas the dark phase exhibits the reverse tendency.
As shown in Table <ref>, the VEC value calculated using the chemical composition of the bright (dark) phase is higher (lower) than the nominal value of 4.9.
Similar results were obtained for the sample with a VEC of 5.0.
Although the samples with VEC = 4.7 and 4.8 exhibited no contrast in their SEM images, weak inhomogeneous elemental distributions were detected for Ti and Hf (Figs.<ref>(b) and (c)).
We have verified the chemical compositions of the samples with VECs of 4.7 and 4.8, based on the elemental mappings (e.g., Hf-rich and Hf-poor regions noted in Fig.<ref>(b)).
In each sample, the results of EDX analyses suggested the segregation into Nb- and Ta-rich phases and Ti- and Hf-rich phases.
The weak manifestation of phase segregation in the sample with a VEC of 4.7 or 4.8 is consistent with the observation of a slightly rounded specific heat jump, as discussed below.
The XRD patterns of the phase-segregated samples are composed of reflections from two phases with slightly different VEC values.
In these samples, the chemical compositions of the two phases are comparable, and as a result, both adopt the bcc structure.
The average value of the lattice parameters for the samples with VEC = 4.7, 4.8, 4.9, or 5.0 is shown in Fig.<ref>(b).
Therefore, based on a linear approximation of the data plot shown in Fig.<ref>(b), the lattice parameters of the two phases for each phase-segregated sample can be estimated by utilizing the actual VEC values determined through EDX.
The XRD positions of the two phases, one with a VEC higher and the other with a VEC lower than the nominal VEC, are shown in Fig.<ref>(a) by the black and red bars, respectively.
This indicates that the XRD patterns of the phase-segregated samples can be explained by the superposition of the two phases.
Next, we examine the mechanism underlying the phase segregation phenomenon based on the enthalpy of formation (ΔH_f).
We utilized ΔH_f data<cit.> for each binary compound with an equimolar ratio of the constituent elements, as presented in Table <ref>.
Our analysis revealed that Ti and Nb exhibited attractive interactions with Hf and Ta, respectively, whereas Ti and Hf atoms exhibited repulsive interactions with Nb and Ta atoms.
These results explain the segregation observed in the (Ti, Hf)- and (Nb, Ta)-rich phases.
Furthermore, Re atoms tend to form alloys with all the other elements because of their significantly negative ΔH_f values.
Consequently, the inhomogeneity of the Re distribution is not as pronounced.
To elucidate the phenomenon of phase segregation in more detail, we performed calculations to determine the enthalpy of mixing (Δ H_mix) of the quinary alloy using the relationship in <cit.>
Δ H_mix = 4 ∑_{i<j} Δ H_f(i,j) c_i c_j
Here, Δ H_f(i,j) denotes the enthalpy of the binary phase between the ith and jth elements, as listed in Table <ref>. The resulting Δ H_mix values for the samples with VECs of 4.6, 4.7, 4.8, 4.9, and 5.0 were found to be -163, -153, -142, -138, and -142 (meV/atom), respectively.
These findings indicate the diminishing stability of the equiatomic solid solution with increasing VEC.
As previously mentioned, Δ H_f(Ti,Hf) and Δ H_f(Nb,Ta) exhibit negative values, while Δ H_f(Ti,Nb), Δ H_f(Ti,Ta), Δ H_f(Hf,Nb), and Δ H_f(Hf,Ta) exhibit positive values. Consequently, within the context of an unstable equiatomic solid solution, segregation into (Ti, Hf)-rich and (Nb, Ta)-rich phases becomes energetically favorable. This segregation aligns with the observed phase segregation.
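For reference, a sketch of this mixing-enthalpy evaluation is given below; the binary ΔH_f values themselves come from the paper's table and are not reproduced here.

```python
def h_mix(composition, h_f):
    """Delta H_mix = 4 * sum_{i<j} DeltaH_f(i,j) * c_i * c_j (meV/atom).

    composition: {element: atomic fraction}; h_f: {frozenset pair: enthalpy},
    with the binary equimolar enthalpies taken from the paper's table.
    """
    elems = list(composition)
    total = 0.0
    for i, ei in enumerate(elems):
        for ej in elems[i + 1:]:
            total += 4.0 * h_f[frozenset((ei, ej))] \
                         * composition[ei] * composition[ej]
    return total
```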
§.§ Superconducting Properties
Figure <ref>(a) illustrates the temperature dependence of ρ for all the alloys.
The order of magnitude of ρ for each sample indicated good metallicity, whereas atomic disorder was responsible for the weak temperature dependence.
Figure <ref>(b) shows the low-temperature ρ behaviors, which exhibit sharp drops to zero-resistivity states at T_c.
The resistivity drop of each alloy starts slightly above T_c, which is determined by the magnetization or specific heat, as reported for several HEA superconductors<cit.>.
The results of temperature-dependent M measured under zero-field cooled (ZFC) and field cooled (FC) conditions, under an external field μ_0H of 0.4 mT, are presented in Fig.<ref>.
Each ZFC curve shows a diamagnetic signal due to superconductivity, whereas the corresponding FC curve indicates flux pinning in the sample.
The samples with VECs of 4.8 and 4.9 exhibit additional small, sharp drops in M at approximately 4.5 K (the solid ellipse in the inset of Fig.<ref>).
These drops are due to the parasitic superconducting phase resulting from phase segregation.
This parasitic phase differed from the two segregated phases in each sample, and the specific heat with no anomaly at 4.5 K indicates a negligible amount of the parasitic phase.
To evaluate the lower critical field H_c1, the M–H curves at lower H were measured at different temperatures, as summarized in Figs.<ref>(a)-(e).
H_c1 is defined as the H value where a negative peak is observed.
The samples with VECs of 4.8 and 4.9 show broad anomalies at H higher than H_c1, denoted by arrows.
These anomalies correlated with the appearance of a parasitic superconducting phase.
The sample with a VEC of 4.8, which shows the relatively larger drop in M at 4.5 K in the inset of Fig.<ref>, exhibits more obvious anomalies in the M-H curves (see Fig.<ref>(c)).
The temperature dependence of H_c1 can be reproduced using the Ginzburg-Landau equation given as follows:
H_c1(T)=H_c1(0)(1-t^2),
where t is the reduced temperature T/T_c (Fig.<ref>(f)).
The μ_0H_c1(0) values are listed in Table <ref>.
We also measured the temperature dependence of ZFC M for various μ_0H values to estimate the upper critical field H_c2, as shown in Figs.<ref>(a)-(e).
The onset of the diamagnetic signal is defined as T_c, and μ_0H_c2 for each sample is plotted as a function of T in Fig.<ref>(f).
The temperature dependence of μ_0H_c2 can be explained using the following formula:
H_c2(T) = H_c2(0) (1 - t^2) / (1 + t^2)
The obtained μ_0H_c2 values are summarized in Table <ref>.
For each alloy, the Ginzburg-Landau coherence length ξ_GL(0) is calculated as ξ_GL(0) = √(Φ_0 / (2πμ_0H_c2(0))), where Φ_0 denotes the magnetic flux quantum of 2.07×10^-15 Wb.
The resulting values are 6.5, 6.9, 7.0, 7.5, and 7.5 nm for the VEC values of 4.6, 4.7, 4.8, 4.9, and 5.0, respectively.
The magnetic penetration depth λ_GL(0) can be obtained by using the relation μ_0H_c1(0) = [Φ_0 / (4πλ_GL(0)^2)] ln(λ_GL(0)/ξ_GL(0)).
The λ_GL(0) values extracted from the data are 370, 308, 380, 291, and 297 nm for the VEC values of 4.6, 4.7, 4.8, 4.9, and 5.0, respectively.
The Ginzburg-Landau parameter κ_GL=λ_GL(0)/ξ_GL(0) was then calculated, yielding 57, 45, 54, 39, and 40 for the VEC values of 4.6, 4.7, 4.8, 4.9, and 5.0, respectively.
All κ_GL values were found to be greater than 1/√(2), indicating that all the alloys are type-II superconductors.
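Because the H_c1 relation above is transcendental in λ_GL(0), it must be solved numerically; the sketch below uses a bracketing root finder with assumed illustrative inputs (μ_0H_c1(0) = 4.9 mT, ξ_GL(0) = 6.5 nm):

# Sketch: solving mu_0*H_c1(0) = (Phi_0/(4*pi*lambda^2))*ln(lambda/xi)
# for lambda_GL(0), then kappa_GL. Inputs are assumed illustrative values.
import numpy as np
from scipy.optimize import brentq

Phi0 = 2.07e-15        # Wb
xi = 6.5e-9            # m, coherence length from the H_c2 analysis
mu0_Hc1_0 = 4.9e-3     # T (assumed illustrative value)

def f(lam):
    return Phi0 / (4.0 * np.pi * lam**2) * np.log(lam / xi) - mu0_Hc1_0

# Bracket chosen so that the physical (larger) root is selected.
lam = brentq(f, 1.1 * xi, 1e-5)
print(f"lambda_GL(0) = {lam * 1e9:.0f} nm, kappa_GL = {lam / xi:.0f}")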
Figure <ref>(a) shows the comparison of the temperature dependences of the C (C(T)) values for all alloys.
Each sample exhibited a bulk superconducting transition, and T_c was defined as the midpoint of the transition in each C(T) (Table <ref>).
As VEC increases from 4.6 to 5.0, the superconducting transition gradually broadens owing to the progressive growth of the phase segregation.
In each phase-segregated sample, the chemical compositions of the two bcc phases were similar, and each phase underwent a superconducting transition.
In Fig.<ref>(b), C/T is plotted as a function of T^2, and the data for each alloy above T_c can be fitted by the expression,
C/T=γ+β T^2,
where γ and β are the Sommerfeld coefficient and lattice contribution, respectively.
The Debye temperature θ_D, was derived from the β value using the equation, θ_D=(12π^4RN/5β)^1/3, where the number of atoms per formula unit N = 1, and R is the gas constant.
The values of γ and θ_D for each alloy are presented in Table <ref>.
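The extraction of θ_D from the fitted β is a direct evaluation of the Debye expression above, as in the following sketch (the β value is an assumed illustrative one, not taken from Table <ref>):

# Sketch: Debye temperature theta_D = (12*pi^4*R*N/(5*beta))^(1/3),
# with N = 1 atom per formula unit.
import numpy as np

R = 8.314              # J/(mol K), gas constant
N = 1                  # atoms per formula unit
beta = 0.124e-3        # J/(mol K^4) (assumed illustrative value)

theta_D = (12.0 * np.pi**4 * R * N / (5.0 * beta))**(1.0 / 3.0)
print(f"theta_D ~ {theta_D:.0f} K")            # ~250 K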
As the VEC increases from 4.6 to 4.7, γ increases, whereas a further increase in VEC beyond 4.7 tends to decrease γ.
However, θ_D systematically increases with increasing VEC.
The intriguing aspect lies in the broad superconducting transition of C(T) observed in the samples with VECs of 4.9 and 5.0, characterized by prominent phase segregation.
Certain bcc HEA superconductors exhibit phase segregation, and their C(T) data have been previously reported<cit.>.
For instance, the equiatomic ScHfNbTaTi compound decomposes into bcc superconducting and hcp non-superconducting phases<cit.>.
The superconducting phase displayed a distinct and sharp C(T) transition at T_c=6.1 K, whereas the hcp phase had no discernible effect on the sharpness of the specific heat jump at T_c.
Another example is the equiatomic HfNbTaTiV compound (VEC=4.6), which exhibits a dendritic structure<cit.>.
Both the dendritic phase (VEC=4.57) and inter-dendritic phase (VEC=4.48) manifested as bcc superconductors.
Although the difference between the VECs of the two phases is similar to that of the present alloy (VEC=4.9 or 5.0), the C(T) of HfNbTaTiV shows a sharp transition at T_c=4.37 K.
In comparison to ScHfNbTaTi and HfNbTaTiV, the broad superconducting transition in C(T) observed in the current Ti-Hf-Nb-Ta-Re alloy with a VEC of 4.9 or 5.0 appears to be a rare phenomenon.
This raises intriguing questions regarding the relationship between the microstructure and superconducting properties of HEAs.
§.§ Hardness
Table <ref> lists the Vickers microhardness values of the examined alloys.
Figure <ref>(a) depicts the relationship between the Vickers microhardness and θ_D, where θ_D represents the highest frequency of the lattice vibration, which reflects the interatomic bonding strength.
A material with a higher θ_D exhibits enhanced bonding forces between atoms, which results in increased hardness, as observed in Fig.<ref>(a).
Recent first-principles calculations conducted on transition metal binary or high-entropy alloys, which possess elemental combinations akin to Ti-Hf-Nb-Ta-Re, further support the positive correlation between hardness and θ_D<cit.>.
Considering that θ_D generally reflects the hardness of a material, it is reasonable to anticipate that an increase in θ_D corresponds to an increase in the Vickers microhardness.
Figure <ref>(a) reveals a positive correlation between hardness and θ_D, suggesting that hardness can serve as an alternative to θ_D in explaining the electron-phonon interactions.
Several reports have demonstrated that the Vickers microhardness of numerous bcc and fcc HEAs depends on the VEC values <cit.>.
In bcc HEAs with VECs of ∼ 4–5.5, an increase in VEC tends to result in a higher hardness because the hard elements are present at VEC values of ∼ 5–7.
This behavior is reinforced in Fig.<ref>(b), where the hardness is plotted as a function of VEC, along with our prior findings<cit.> on typical bcc HEA superconductors.
Figure <ref>(c) shows the plot of T_c vs. hardness for the alloys utilized in Fig.<ref>(b), to examine the impact of hardness on HEA superconductivity.
To the best of our knowledge, the hardness characteristics of bcc HEA superconductors have not been investigated by any other research group.
At hardness values greater than approximately 350 HV, T_c decreases as the hardness increases, owing to two factors.
According to the BCS theory, T_c depends on the electronic density of states at the Fermi level and electron-phonon interaction.
Thus, the first factor is the decrease in the electronic density of states at the Fermi level with increasing hardness, because an HEA with a hardness exceeding 350 HV corresponds to a VEC larger than approximately 4.5, as depicted in Fig.<ref>(b).
The Matthias rule for conventional binary or ternary transition metal alloys proposes that T_c forms a broad peak at a VEC of 4.6.
The Matthias rule, which is widely recognized, emphasizes the significant influence of the electronic density of states at the Fermi level in determining T_c. This rule states that the electronic density of states at the Fermi level is systematically depressed as VEC increases beyond approximately 4.6.
Therefore, an HEA with hardness exceeding 350 HV may also have a reduced electronic density of states, leading to a lower T_c.
Ti-Hf-Nb-Ta-Re exhibits a positive correlation between T_c and γ, where γ reflects the magnitude of the electronic density of states at the Fermi level, thus supporting the first factor.
The second factor is the decline in T_c owing to the weakened electron-phonon interactions through the uncertainty principle.
A broad phonon band is typically expected in the disordered atomic state <cit.>.
An HEA with higher hardness possesses higher θ_D, which leads to a broader phonon band.
This results in a shorter phonon lifetime according to the uncertainty principle Δ EΔ t≥ħ/2, where Δ E is the energy uncertainty associated with band broadening, Δ t is the lifetime, and ħ is the reduced Planck constant.
Therefore, the electron-phonon coupling of an HEA with higher hardness is weakened<cit.>, leading to the lower T_c.
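An order-of-magnitude estimate of this lifetime shortening follows directly from the uncertainty relation; in the sketch below, the band broadening Δ E ∼ 1 meV is an assumed illustrative scale for a disordered HEA lattice:

# Sketch: lower bound on the phonon lifetime, dt >= hbar/(2*dE).
hbar = 1.0546e-34          # J s, reduced Planck constant
dE = 1.0e-3 * 1.602e-19    # J (assumed ~1 meV band broadening)

dt = hbar / (2.0 * dE)
print(f"phonon lifetime >= {dt:.1e} s")        # ~3e-13 s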
§.§ T_c vs. VEC plot
Here, we present a comparison of the superconducting properties of the quinary bcc HEA superconductors, with VECs ranging from 4.1 to 5.3.
The T_c data are plotted against VEC in Fig.<ref>.
The green, blue, and black symbols represent non-equimolar Hf-Nb-Ta-Ti-Zr, Al-Nb-Ti-V-Zr, and Hf_21Nb_25Ti_15V_15Zr_24, respectively<cit.>.
The orange symbols denote the equiatomic quinary bcc HEA superconductors<cit.> of HfNbTaTiZr, HfNbReTiZr, HfNbTaTiV, and HfMoNbTiZr.
Studies corresponding to VECs of 5.0 or more are indicated by light blue (Nb-Ta-Mo-Hf-W, Ti-Zr-Nb-Ta-W, and Ti-Zr-Nb-Ta-V)<cit.> and red (this work) symbols.
The solid curve in the plot represents the Matthias rule for conventional binary or ternary transition-metal alloys.
According to this rule, the electronic density of states at the Fermi level, which is closely related to VEC, is the decisive factor for T_c.
While the solid curve is symmetric with respect to VEC, the quinary bcc HEA superconductors show an asymmetric trend.
This contrasted behavior means that a factor other than the electronic density of states at the Fermi level plays a role in the T_c determination of HEAs, especially at VECs larger than ∼4.6.
Considering that T_c depends on the electronic density of states at the Fermi level and electron-phonon interaction, the weight of the electron-phonon interaction in determining T_c increases in HEAs with VECs exceeding 4.6.
As mentioned in the previous subsection, the uncertainty principle in HEAs may contribute to a weakened electron-phonon coupling strength at VECs exceeding 4.6, leading to an additional reduction in T_c.
Therefore, we speculate that the simultaneous effect of the reduced electronic density of states at the Fermi level and weakened electron-phonon coupling through the uncertainty principle in the higher VEC region gives rise to the hardness dependence of T_c and asymmetric T_c vs. VEC plot.
We discuss the correlation between the Vickers microhardness and θ_D based on the T_c vs. VEC plot.
The correlation between the Vickers microhardness and θ_D, presented in Fig.<ref>(a), suggests that a harder sample exhibits a higher θ_D.
Hardness is linked to VEC, and within the current alloy system, hardness increases with increasing VEC, as shown in Fig.<ref>(b).
The plot of T_c vs. VEC in Fig.<ref> illustrates that the T_c of Ti-Hf-Nb-Ta-Re decreases with increasing VEC.
Consequently, harder samples tend to exhibit a lower T_c, indicating that a higher θ_D results in a lower T_c. This negative correlation between θ_D and T_c contrasts with the positive correlation typically observed in conventional BCS superconductors.
A previous study<cit.> that compared quinary bcc HEA superconductors also revealed a negative correlation between θ_D and T_c, suggesting that this could be a common characteristic of bcc HEA superconductors.
§ SUMMARY
We have discovered that the bcc Ti-Hf-Nb-Ta-Re alloys with VEC values ranging from 4.6 to 5.0 are all type-II superconductors with T_c= ∼3.25-4.38 K.
A significant feature of this system is the emergence of phase segregation when the VEC surpasses 4.7, which can be explained by the enthalpies of formation of the constituent binary compounds.
Because of this phase segregation, C(T) data exhibited a broad superconducting transition in the samples with VECs of 4.9 or 5.0.
We confirmed a positive correlation between the Vickers microhardness and θ_D.
Furthermore, we examined the hardness dependence of T_c for various quinary bcc HEAs and discovered a systematic reduction in T_c when the hardness exceeded approximately 350 HV.
The plot of T_c vs. VEC for the representative quinary bcc HEAs reveals an asymmetric VEC dependence.
Contrary to the Matthias rule for conventional binary or ternary transition metal alloys, the T_c values of HEAs are significantly suppressed at high VEC values.
The reduction in the electronic density of states at the Fermi level and weakened electron-phonon coupling through the uncertainty principle could be responsible for the hardness dependence of T_c at hardness values exceeding 350 HV and substantial suppression of T_c in the high VEC region.
§ ACKNOWLEDGMENTS
T.N. acknowledges the support from a Grant-in-Aid for Scientific Research (KAKENHI) (Grant No. 20K03867) and the Advanced Instruments Center of Kyushu Sangyo University. Y.M. acknowledges the support from a Grant-in-Aid for Scientific Research (KAKENHI) (Grant No. 21H00151). J.K. acknowledges the support from a Grant-in-Aid for Scientific Research (KAKENHI) (Grant No. 23K04570) and the Comprehensive Research Organization of the Fukuoka Institute of Technology.
99
Biswas:book
K. Biswas, N.P. Gurao, T. Maiti, R.S. Mishra, High Entropy Materials, Springer, Singapore, 2022.
Li:PMS2021
W. Li, D. Xie, D. Li, Y. Zhang, Y. Gao, P.K. Liaw, Mechanical behavior of high-entropy alloys. Prog. Mater. Sci. 118 (2021), 100777.
https://doi.org/10.1016/j.pmatsci.2021.100777.
Marques:EES2021
F. Marques, M. Balcerzak, F. Winkelmann, G. Zepon, M. Felderhoff, Review and outlook on high-entropy alloys for hydrogen storage. Energy Environ. Sci. 14 (2021), 5191-5227.
https://doi.org/10.1039/D1EE01543E.
Yang:NM2022
B. Yang, Y. Zhang, H. Pan, W. Si, Q. Zhang, Z. Shen, Y. Yu, S. Lan, F. Meng, Y. Liu, H. Huang, J. He, L. Gu, S. Zhang, L. Q. Chen, J. Zhu, C. W. Nan, Y. H. Lin, High-entropy enhanced capacitive energy storage, Nat. Mater. 21 (2022), 1074–1080.
https://doi.org/10.1038/s41563-022-01274-6.
Jiang:Science2021
B. Jiang, Y. Yu, J. Cui, X. Liu, L. Xie, J. Liao, Q. Zhang, Y. Huang, S. Ning, B. Jia, B. Zhu, S. Bai, L. Chen, S.J. Pennycook, J. He, High-entropy-stabilized chalcogenides with high thermoelectric performance. Science 371 (2021), 830-834.
https://doi.org/10.1126/science.abe1292.
Kitagawa:APLMater2022
J. Kitagawa, M. Fukuda, S. Fukuda, K. Fujiki, Y. Nakamura, T. Nishizaki, Discovery of ferromagnetism in new multicomponent alloy Ti–Nb–Cr–Ru. APL Mater. 10 (2022), 071101.
https://doi.org/10.1063/5.0097770.
Kitagawa:JMMM2022
J. Kitagawa, Magnetic properties, electrical resistivity, and hardness of high-entropy alloys FeCoNiPd and FeCoNiPt. J. Magn. Magn. Mater. 563 (2022), 170024.
https://doi.org/10.1016/j.jmmm.2022.170024.
Xiong:JMST2023
W. Xiong, A.X.Y. Guo, S. Zhan, C.-T. Liu, S.C. Cao, Refractory high-entropy alloys: A focused review of preparation methods and properties, J. Mater. Sci. Technol. 142 (2023), 196-215.
https://doi.org/10.1016/j.jmst.2022.08.046.
Castro:Metals2021
D. Castro, P. Jaeger, A.C. Baptista, J.P. Oliveira, An overview of high-entropy alloys as biomaterials. Metals 11 (2021), 648.
https://doi.org/10.3390/met11040648.
Brahlek:APLMater2022
M. Brahlek, M. Gazda, V. Keppens, A.R. Mazza, S.J. McCormack, A. Milewczyk-Gryń, B. Musico, K. Page, C.M. Rost, S.B. Sinnott, C. Toher, T.Z. Ward, A. Yamamoto, What is in a name: Defining “high entropy” oxides. APL Mater. 10 (2022), 110902.
https://doi.org/10.1063/5.0122727.
Ying:JACS
T. Ying, T. Yu, Y.-S. Shiah, C. Li, J. Li, Y. Qi, H. Hosono, High-entropy van der Waals materials formed from mixed metal dichalcogenides, halides, and phosphorus trisulfides, J. Am. Chem. Soc. 143 (2021) 7042-7049.
https://doi.org/10.1021/jacs.1c01580.
Murchie:ACT2022
A.C. Murchie, J.L. Watts, W.G. Fahrenholtz, G.E. Hilmas, Room-temperature mechanical properties of a high-entropy diboride, Int. J. Appl. Ceram. Technol. 19 (2022), 2293-2299.
https://doi.org/10.1111/ijac.14026.
Wang:AAC2022
Y. Wang, Processing and properties of high entropy carbides, Adv. Appl. Ceram. 121 (2022), 57-78.
https://doi.org/10.1080/17436753.2021.2014277.
Lu:ASS2021
X. Lu, C. Zhang, C. Wang, X. Cao, R. Ma, X. Sui, J. Hao, W. Liu, Investigation of (CrAlTiNbV)N_x high-entropy nitride coatings via tailoring nitrogen flow rate for anti-wear applications in aviation lubricant. Appl. Surf. Sci. 557 (2021), 149813.
https://doi.org/10.1016/j.apsusc.2021.149813.
Tian:IM2015
F. Tian, L.K. Varga, N. Chen, J. Shen, L. Vitos, Empirical design of single phase high-entropy alloys with high hardness. Intermetallics 58 (2015), 1-6.
https://doi.org/10.1016/j.intermet.2014.10.010.
Kitagawa:JALCOM2022
J. Kitagawa, K. Hoshi, Y. Kawasaki, R. Koga, Y. Mizuguchi, T. Nishizaki, Superconductivity and hardness of the equiatomic high-entropy alloy HfMoNbTiZr. J. Alloys Compd. 924 (2022), 166473.
https://doi.org/10.1016/j.jallcom.2022.166473.
Kozelj:PRL2014
P. Koželj, S. Vrtnik, A. Jelen, S. Jazbec, Z. Jagličić, S. Maiti, M. Feuerbacher, W. Steurer, J. Dolinšek, Discovery of a Superconducting High-Entropy Alloy. Phys. Rev. Lett. 113 (2014), 107001.
https://doi.org/10.1103/PhysRevLett.113.107001.
Sun:PRM2019
L. Sun, R.J. Cava, High-entropy alloy superconductors: Status, opportunities, and challenges. Phys. Rev. Mater. 3 (2019), 090301.
https://doi.org/10.1103/PhysRevMaterials.3.090301.
Kitagawa:Metals2020
J. Kitagawa, S. Hamamoto, N. Ishizu, Cutting edge of high-entropy alloy superconductors from the perspective of materials research. Metals 10 (2020), 1078.
https://doi.org/10.3390/met10081078.
Guo:PNAC2017
J. Guo, H. Wang, F. von Rohr, Z. Wang, S. Cai, Y. Zhou, K. Yang, A. Li, S. Jiang, Q. Wu, R.J. Cava, L. Sun, Robust zero resistance in a superconducting high-entropy alloy at pressures up to 190 GPa. Proc. Natl. Acad. Sci. 114 (2017) 13144-13147.
https://doi.org/10.1073/pnas.1716981114.
Liu:JALCOM2021
B. Liu, J. Wu, Y. Cui, Q. Zhu, G. Xiao, S. Wu, G.-h. Cao, Z. Ren, Superconductivity and paramagnetism in Cr-containing tetragonal high-entropy alloys. J. Alloys Compd. 869 (2021), 159293.
https://doi.org/10.1016/j.jallcom.2021.159293.
Sogabe:SSC2019
R. Sogabe, Y. Goto, T. Abe, C. Moriyoshi, Y. Kuroiwa, A. Miura, K. Tadanaga, Y. Mizuguchi, Improvement of superconducting properties by high mixing entropy at blocking layers in BiS2-based superconductor REO_0.5F_0.5BiS_2. Solid State Commun. 295 (2019), 43-49.
https://doi.org/10.1016/j.ssc.2019.04.001.
Jung:NC2022
S.-G. Jung, Y. Han, J.H. Kim, R. Hidayati, J.-S. Rhyee, J.M. Lee, W.N. Kang, W.S. Choi, H.-R. Jeon, J. Suk, T. Park, High critical current density and high-tolerance superconductivity in high-entropy alloy thin films. Nat. Commun. 13 (2022), 3373.
https://doi.org/10.1038/s41467-022-30912-5.
Rohr:PRM2018
F. O. von Rohr, R. J. Cava, Isoelectronic substitutions and aluminium alloying in the Ta-Nb-Hf-Zr-Ti high-entropy alloy superconductor. Phys. Rev. Mater. 2 (2018), 034801.
https://doi.org/10.1103/PhysRevMaterials.2.034801.
Marik:JALCOM2018
S. Marik, M. Varghese, K. P. Sajilesh, D. Singh, R. P. Singh, Superconductivity in equimolar Nb-Re-Hf-Zr-Ti high entropy alloy, J. Alloys Compd. 769 (2018), 1059-1063.
https://doi.org/10.1016/j.jallcom.2018.08.039.
Ishizu:RINP
N. Ishizu, J. Kitagawa, New high-entropy alloy superconductor Hf_21Nb_25Ti_15V_15Zr_24. Res. Phys. 13 (2019), 102275.
https://doi.org/10.1016/j.rinp.2019.102275.
Harayama:JSNM2021
Y. Harayama, J. Kitagawa, Superconductivity in Al-Nb-Ti-V-Zr multicomponent alloy. J. Supercond. Nov. Magn. 34 (2021) 2787-2794.
https://doi.org/10.1007/s10948-021-05966-z.
Sarkar:IM2022
N.K. Sarkar, C.L. Prajapat, P.S. Ghosh, N. Garg, P.D. Babu, S. Wajhal, P.S.R. Krishna, M.R. Gonal, R. Tewari, P.K. Mishra, Investigations on superconductivity in an equi-atomic disordered Hf-Nb-Ta-Ti-V high entropy alloy. Intermetallics 144 (2022), 107503.
https://doi.org/10.1016/j.intermet.2022.107503.
Motla:PRB2022
K. Motla, P. K. Meena, D. S. Arushi, D. Singh, P. K. Biswas, A. D. Hillier, R. P. Singh, Superconducting and normal-state properties of the high-entropy alloy Nb-Re-Hf-Zr-Ti investigated by muon spin relaxation and rotation. Phys. Rev. B 105 (2022), 144501.
https://doi.org/10.1103/PhysRevB.105.144501.
Kitagawa:RHP2022
J. Kitagawa, T. Nishizaki, Materials Research on Superconducting or Magnetic High-Entropy Alloys, Rev. High Press. Sci. Technol. 32 (2022), 77–85.
https://doi.org/10.4131/jshpreview.32.77.
Lee:PhysicaC2019
Y. -S. Lee, R. J. Cava, Superconductivity in high and medium entropy alloys based on MoReRu. Phys. C 566 (2019), 1353520.
https://doi.org/10.1016/j.physc.2019.1353520.
Marik:PRM2019
S. Marik, K. Motla, M. Varghese, K. P. Sajilesh, D. Singh, Y. Breard, P. Boullay, R. P. Singh, Superconductivity in a new hexagonal high-entropy alloy. Phys. Rev. Mater. 3 (2019), 060602(R).
https://doi.org/10.1103/PhysRevMaterials.3.060602.
Browne:JSSC2023
A. J. Browne, D. P. Strong, R. J. Cava, Phase stability and possible superconductivity of new 4d and 5d transition metal high-entropy alloys. J. Solid State Chem. 321 (2023), 123881.
https://doi.org/10.1016/j.jssc.2023.123881.
Stolze:ChemMater2018
K. Stolze, J. Tao, F. O. von Rohr, T. Kong, R. J. Cava, Sc–Zr–Nb–Rh–Pd and Sc–Zr–Nb–Ta–Rh–Pd high-entropy alloy superconductors on a CsCl-type lattice. Chem. Mater. 30 (2018), 906-914.
https://doi.org/10.1021/acs.chemmater.7b04578.
Wu:SCM2020
J. Wu, B. Liu, Y. Cui, Q. Zhu, G. Xiao, H. Wang, S. Wu, G. Cao, Z. Ren, Polymorphism and superconductivity in the V-Nb-Mo-Al-Ga high-entropy alloys, Sci. China Mater. 63 (2020), 823-831.
https://doi.org/10.1007/s40843-019-1237-5.
Yamashita:JALCOM2021
A. Yamashita, T. D. Matsuda, Y. Mizuguchi, Synthesis of new high-entropy alloy-type Nb_3(Al, Sn, Ge, Ga, Si) superconductors. J. Alloys Compd. 868 (2021), 159233.
https://doi.org/10.1016/j.jallcom.2021.159233.
Mizuguchi:JPSJ2019
Y. Mizuguchi, Superconductivity in High-Entropy-Alloy Telluride AgInSnPbBiTe_5. J. Phys. Soc. Jpn. 88 (2019), 124708.
https://doi.org/10.7566/JPSJ.88.124708.
Yamashita:DalTran2020
A. Yamashita, R. Jha, Y. Goto, T. D. Matsuda, Y. Aoki, Y. Mizuguchi, An efficient way of increasing the total entropy of mixing in high-entropy-alloy compounds: A case of NaCl-type (Ag,In,Pb,Bi)Te_1-xSe_x (x = 0.0, 0.25, 0.5) superconductors, Dalton Trans. 49 (2020), 9118-9122.
https://doi.org/10.1039/D0DT01880E.
Stolze:JMCC2018
K. Stolze, F. A. Cevallos, T. Kong, R. J. Cava, High-entropy alloy superconductors on an α-Mn lattice, J. Mater. Chem. C 6 (2018), 10441-10449.
https://doi.org/10.1039/C8TC03337D.
Xiao:SM2023
G. Xiao, W. Yang, Q. Zhu, S. Song, G.-H. Cao, Z. Ren, Superconductivity with large upper critical field in noncentrosymmetric Cr-bearing high-entropy alloys. Scr. Mater. 223 (2023), 115099.
https://doi.org/10.1016/j.scriptamat.2022.115099.
Liu:ACS2020
B. Liu, J. Wu, Y. Cui, Q. Zhu, G. Xiao, H. Wang, S. Wu, G. Cao, Z. Ren, Formation and Superconductivity of Single-Phase High-Entropy Alloys with a Tetragonal Structure, ACS Appl. Electron. Mater. 2 (2020), 1130-1137.
https://doi.org/10.1021/acsaelm.0c00108.
Kasen:SST2021
M. R. Kasem, A. Yamashita, T. Hatano, K. Sakurai, N. Oono-Hori, Y. Goto, O. Miura, Y. Mizuguchi, Anomalous broadening of specific heat jump at Tc in high-entropy-alloy-type superconductor TrZr_2. Supercond. Sci. Technol. 34 (2021), 125001.
https://doi.org/10.1088/1361-6668/ac2554.
Liu:PRM2023
B. Liu, W. Yang, G. Xiao, Q. Zhu, S. Song, G.-H. Cao, Z. Ren, High-entropy silicide superconductors with W_5Si_3-type structure. Phys. Rev. Mater. 7 (2023), 014805.
https://doi.org/10.1103/PhysRevMaterials.7.014805.
Shukunami:PhysicaC2020
Y. Shukunami, A. Yamashita, Y. Goto, Y. Mizuguchi, Synthesis of RE123 high-T_c superconductors with a high-entropy-alloy-type RE site. Physica C 572 (2020), 1353623.
https://doi.org/10.1016/j.physc.2020.1353623.
Leung:SP2022
C.K.W. Leung, X. Zhang, F. von Rohr, R. Lortz, B. Jäck, Evidence for isotropic s-wave superconductivity in high-entropy alloys. Sci. Rep. 12 (2022), 12773.
https://doi.org/10.1038/s41598-022-16355-4.
Rohr:PNAS2016
F. von Rohr, M. J. Winiarski, J. Tao, T. Klimczuk, R. J. Cava, Effect of electron count and chemical complexity in the Ta-Nb-Hf-Zr-Ti high-entropy alloy superconductor, Proc. Natl. Acad. Sci. 113 (2016) E7144-E7150.
https://doi.org/10.1073/pnas.1615926113.
Matthias:PR1955
B.T. Matthias, Empirical relation between superconductivity and the number of valence electrons per atom. Phys. Rev. 97 (1955) 74-76.
https://doi.org/10.1103/PhysRev.97.74.
Sobota:PRB2022
P. Sobota, R. Topolnicki, T. Ossowski, T. Pikula, A. Pikul, R. Idczak, Superconductivity in the high entropy alloy (NbTa)_0.67(MoHfW)_0.33. Phys. Rev. B 106 (2022), 184512.
https://doi.org/10.1103/PhysRevB.106.184512.
Shu:APL2022
R. Shu, X. Zhang, S. G. Rao, A. le Febvrier, P. Eklund, Effects of alloying and deposition temperature on phase formation and superconducting properties of TiZrTaNb-based high entropy-alloy films. Appl. Phys. Lett. 120 (2022), 151901.
https://doi.org/10.1063/5.0091777.
Miracle:AM2017
D.B. Miracle, O.N. Senkov, A critical review of high entropy alloys and related concepts. Acta Mater. 122 (2017) 448–511.
https://doi.org/10.1016/j.actamat.2016.08.081.
Troparevsky:PRX2015
M.C. Troparevsky, J.R. Morris, P.R.C. Kent, A.R. Lupini, G.M. Stocks, Criteria for predicting the formation of single-phase high-entropy alloys. Phys. Rev. X 5 (2015), 011041.
https://doi.org/10.1103/PhysRevX.5.011041.
Krnel:materials2022
M. Krnel, A. Jelen, S. Vrtnik, J. Luzar, D. Gačnik, P. Koželj, M. Wencka, A. Meden, Q. Hu, S. Guo, J. Dolinšek, The Effect of Scandium on the Structure, Microstructure and Superconductivity of Equimolar Sc-Hf-Nb-Ta-Ti-Zr Refractory High-Entropy Alloys. Materials 15 (2022), 1122.
https://doi.org/10.3390/ma15031122.
Jiang:FED2021
D. Jiang, W. Xiao, D. Liu, S. Liu, Structural stability, electronic structures, mechanical properties and Debye temperature of W-Re alloys: A first-principles study. Fusion Eng. Des. 162 (2021), 112081.
https://doi.org/10.1016/j.fusengdes.2020.112081.
Ru:MTC2022
J. Ru, R. Ma, M. Wan, Q. Xie, First principles calculation on electronic structures and mechanical properties of TiCrTaV high–entropy alloy. Mater. Today Commun. 31 (2022), 103801.
https://doi.org/10.1016/j.mtcomm.2022.103801.
Kormann:npjCM2017
F. Körmann, Y. Ikeda, B. Grabowski, M.H.F. Sluiter, Phonon broadening in high entropy alloys. Npj Comput. Mater. 3 (2017), 36.
https://doi.org/10.1038/s41524-017-0037-8.
Mizuguchi:MTP2023
Y. Mizuguchi, H. Usui, R. Kurita, K. Takae, M.R. Kasem, R. Matsumoto, K. Yamane, Y. Takano, Y. Nakahira, A. Yamashita, Y. Goto, A. Miura, C. Moriyoshi, Glassy atomic vibrations and blurry electronic structures created by local structural disorders in high-entropy metal telluride superconductors. Mater. Today Phys. 32 (2023), 101019.
https://doi.org/10.1016/j.mtphys.2023.101019.
Vrtnik:JALCOM2017
S. Vrtnik, P. Koželj, A. Meden, S. Maiti, W. Steurer, M. Feuerbacher, J. Dolinšek, Superconductivity in thermally annealed Ta-Nb-Hf-Zr-Ti high-entropy alloys. J. Alloys Compd. 695 (2017), 3530-3540.
https://doi.org/10.1016/j.jallcom.2016.11.417.
|
http://arxiv.org/abs/2307.02691v1
|
20230705233633
|
SACHA: Soft Actor-Critic with Heuristic-Based Attention for Partially Observable Multi-Agent Path Finding
|
[
"Qiushi Lin",
"Hang Ma"
] |
cs.RO
|
[
"cs.RO",
"cs.AI",
"cs.MA"
] |
SACHA: Soft Actor-Critic with Heuristic-Based Attention for Partially Observable Multi-Agent
Path Finding
Qiushi Lin and Hang Ma
Manuscript received: February 9, 2023; Revised May 11, 2023; Accepted June 14, 2023.
This paper was recommended for publication by Editor M. Ani Hsieh upon evaluation of the Associate Editor and Reviewers' comments.
This work was supported by the NSERC under grant number RGPIN2020-06540 and a CFI JELF award.
The authors are with the School of Computing Science, Simon Fraser University, Burnaby, BC, Canada {qiushi_lin, hangma}@sfu.ca
August 1, 2023
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Multi-Agent Path Finding (MAPF) is a crucial component for many large-scale robotic systems, where agents must plan their collision-free paths to their given goal positions. Recently, multi-agent reinforcement learning has been introduced to solve the partially observable variant of MAPF by learning a decentralized single-agent policy in a centralized fashion based on each agent's partial observation. However, existing learning-based methods are ineffective in achieving complex multi-agent cooperation, especially in congested environments, due to the non-stationarity of this setting. To tackle this challenge, we propose a multi-agent actor-critic method called Soft Actor-Critic with Heuristic-Based Attention (SACHA), which employs novel heuristic-based attention mechanisms for both the actors and critics to encourage cooperation among agents. SACHA learns a neural network for each agent to selectively pay attention to the shortest path heuristic guidance from multiple agents within its field of view, thereby allowing for more scalable learning of cooperation. SACHA also extends the existing multi-agent actor-critic framework by introducing a novel critic centered on each agent to approximate Q-values. Compared to existing methods that use a fully observable critic, our agent-centered multi-agent actor-critic method results in more impartial credit assignment and better generalizability of the learned policy to MAPF instances with varying numbers of agents and types of environments. We also implement SACHA(C), which embeds a communication module in the agent's policy network to enable information exchange among agents. We evaluate both SACHA and SACHA(C) on a variety of MAPF instances and demonstrate decent improvements over several state-of-the-art learning-based MAPF methods with respect to success rate and solution quality.
Path Planning for Multiple Mobile Robots or Agents; Reinforcement Learning; Deep Learning Methods
§ INTRODUCTION
Recent studies have demonstrated the practical success of Multi-Agent Path Finding (MAPF) <cit.> in various domains, such as warehouse and office robots <cit.>, autonomous aircraft-towing vehicles <cit.>, and other multi-robot systems <cit.>. MAPF aims to plan collision-free paths for a set of agents from their start positions to their goal positions in a shared environment while minimizing the sum of their completion times (i.e., the arrival times at their goal positions).
Although MAPF is NP-hard to solve optimally <cit.>, the AI community has developed many optimal and bounded-suboptimal MAPF planners for fully observable environments, where a centralized planner has complete information of the environment to plan joint paths for agents. These planners do not apply to agents with limited sensing capabilities and do not scale well to a large number of agents, as the complexity of coordinating the joint paths of agents grows exponentially with the number of agents in the system. Learning-based methods with centralized training and decentralized execution have been proposed to develop scalable and generalizable MAPF methods for the partially observable setting. In this setting, each agent receives a partial observation of its surroundings. Learning-based MAPF methods aim to train a decentralized homogeneous policy that each agent will follow based on its local observation during execution. This policy can be distributed to any number of agents in any environment, as the dimension of the single-agent observation space depends only on the field-of-view (FOV) size in the partially observable setting. However, the non-stationarity of environments from the perspective of any single agent poses a significant challenge for learning-based MAPF methods. The transitions of the global state are affected by the individual actions of other agents towards their local interests. Moreover, goal-oriented reinforcement learning with single-agent rewards makes the training process unstable and time-consuming, further incentivizing the selfishness of each agent, which prioritizes its goal over collaborating with others. This could hinder coordination and teamwork among agents, negatively affecting overall performance.
To address the challenges posed by solving MAPF in the partially observable setting, we propose Soft Actor-Critic with Heuristic-Based Attention (SACHA), a novel approach for partially observable MAPF that leverages heuristic guidance through attention mechanisms to learn cooperation. SACHA builds upon the multi-agent actor-critic framework and, along with its communication-based alternative, SACHA(C), aims to learn a decentralized homogeneous policy that can be generalized to any number of agents in any arbitrary partially observable MAPF environment. To achieve this, we first allow each agent to access the goal-oriented heuristic guidance of multiple agents in the form of the shortest path distances to each of the goals, which can be computed efficiently before execution. We then employ a self-attention module in the policy network for each agent to locally select relevant information from the guidance and take actions towards better cooperation among agents.
We expect SACHA to make a significant algorithmic impact not only on MAPF solving but also on other similar multi-agent tasks in partially observable settings because its learning process of the homogeneous policy is also guided by a homogeneous critic for more stable learning and faster convergence. Unlike existing multi-agent actor-critic methods with one fully centralized critic or multiple decentralized critics, SACHA introduces a novel agent-centered critic network that uses an attention mechanism to approximate each agent's Q-function and performs credit assignment only based on the information within each partial observation. The input dimension of this critic is determined by the number of agents that each agent's Q-function should be based on, which is implicitly limited by the partial observation range (e.g., FOV size in MAPF). This partially centralized critic ensures that the Q-function is not biased towards any specific problem instance, resulting in a well-trained policy network that can generalize well to different numbers of agents and environments.
We experimentally compare SACHA and SACHA(C) with state-of-the-art learning-based and search-based MAPF methods over several MAPF benchmarks. Our results show that both versions of SACHA result in higher success rates and better solution quality than other methods in almost all test cases. The results thus indicate that our methods allow for better cooperation among agents than the other methods with and even without communication.
§ PROBLEM FORMULATION
In this section, we first present the standard formulation of MAPF and then dive into its partially observable variant.
§.§ Standard MAPF Formulation
In the standard formulation of MAPF, we are given a connected and undirected graph G=(V, E) and a set of M agents, indexed by i ∈{1, 2, ⋯, M}. Each agent has a unique start vertex s_i ∈ V and a unique goal vertex g_i ∈ V. Time is discretized into time steps, t = 0, 1, ⋯, ∞. Between two consecutive time steps, each agent can either move to one of its adjacent vertices or wait at its current vertex. A path for agent i contains a sequence of vertices that lead agent i from s_i to g_i, where each vertex indicates the position of the agent for every time step. The completion time T_i of agent i is defined as the length of its path, and it is the earliest time when agent i has reached and terminally stayed at its goal vertex. Collisions between agents are not allowed. A vertex collision occurs when two agents occupy the same vertex v at the same time t. An edge collision occurs when two agents traverse the same edge (u, v) in opposite directions from t to t+1. A MAPF solution is a set of collision-free paths for all agents. A commonly-used objective function is the average (equivalently, sum) of the completion times of all agents.
§.§ Partially Observable Environments
In this paper, we consider a more practical scenario where agents have only partial observation of the environment but still aim to fully cooperate to minimize the average completion time. We model this partially observable variant of MAPF as a decentralized partially observable Markov Decision Process (Dec-POMDP) <cit.>, defined as a 7-tuple ⟨𝒮, 𝒜_i, 𝒫, Ω_i, 𝒪, ℛ, γ⟩. 𝒮 is the set of global states. 𝒜_i is the set of actions for agent i, and 𝒜 = ∏_i=1^M𝒜_i is the joint action space. Ω_i is the observation space of agent i, and Ω = ∏_i=1^MΩ_i is the joint observation space. 𝒪: 𝒜×𝒮→Ω is the observation function, denoting the probability P(o|a, s), whereas 𝒫: 𝒜×𝒮→𝒮 is the state-transition function for the environment, representing the probability P(s' | a, s), where o ∈Ω, a ∈𝒜, and s, s' ∈𝒮. ℛ: 𝒮×𝒜→ℝ is the reward function, and γ is the discount factor.
In MAPF, the observation and state-transition functions are deterministic, where each agent has full control of its next position and observation by taking one of the move actions or the wait action. To facilitate proper comparison with existing learning-based MAPF methods, we formalize the MAPF problem on two-dimensional grid maps with four neighbors, even though our method can also be generalized to other MAPF problems. The partial observability limits each agent's perception to its FOV, defined as a ℒ×ℒ square area centered on the agent. Agents take their actions based on their local observation and the history from the beginning to the current time step.
One of the key challenges for decentralized planners with limited access to global information is the occurrence of deadlocks and livelocks. Similar to time-windowed MAPF planners <cit.>, these issues arise due to the limited planning horizon of agents, either in time or space, that prevents them from reaching their goals. For instance, consider Fig. <ref>, where the red agent in a5 and the green agent in a6 are heading towards their respective goals a6 and a5. An optimal solution may require the red agent to move north and terminally stay at its goal while the green agent moves north and takes a detour since its direct path is blocked by the red agent. However, after a few moves, the green agent will no longer observe the blocking red agent and end up wiggling between a9 and a10 indefinitely. Symmetrically, if the green agent moves south and terminally stays at its goal, the red agent will eventually wiggle between a1 and a2.
Existing learning-based MAPF methods rely on two extra assumptions to alleviate the above issues: (1) Each agent has full visibility of the map (which is consistent with both standard MAPF and time-windowed MAPF), even though it does not know the global state. (2) Two agents are allowed to communicate when they are within each other's FOV. In this paper, SACHA and SACHA(C) utilize the same assumptions. Both methods give each agent access to the shortest path distances to its goal. SACHA(C) enables inter-agent communication, while SACHA only requires each agent to identify other agents in its FOV.
§ RELATED WORK
We now survey the related work on learning-based MAPF methods and multi-agent reinforcement learning methods.
§.§ Learning-Based MAPF Methods
Recent learning-based MAPF methods <cit.> have been proposed to solve MAPF in a partially observable setting. These methods aim to learn a decentralized policy that can be generalized to different MAPF instances. While centralized MAPF planners require full observation of the environment and must plan paths from scratch for each instance, the well-trained model can be applied to MAPF instances with any number of agents and environment size, without increasing the time complexity.
The most straightforward approach for tackling partial observability is to treat other agents as part of the environment and let each agent learn its policy independently, as in Independent Q-Learning (IQL) <cit.>. However, this approach results in non-cooperative behavior among the agents, and its training is not guaranteed to converge due to interference between the policies of different agents. State-of-the-art MAPF methods <cit.> enhance IQL with a communication mechanism to promote cooperation between agents. Other MAPF methods <cit.> use the actor-critic framework, with guidance from an expert demonstration. PRIMAL <cit.> combines on-policy asynchronous advantage actor-critic (A3C) <cit.> with behavior cloning from an expert demonstration generated by a centralized MAPF planner <cit.>. However, the centralized MAPF planner requires solving numerous MAPF instances, which slows down the training process. DHC <cit.> and DCC <cit.> have shown that using single-agent shortest path distances as heuristic guidance for goal-oriented learning of each agent is more effective than following a specific reference path in a multi-agent cooperative setting.
In Section <ref>, we compare SACHA against PRIMAL, DHC, and DCC experimentally. Table <ref> summarizes the comparison of properties of these methods, showing that SACHA improves over them by adopting a more stable training scheme, utilizing better heuristic guidance through more complex model design, and allowing for applicability to both communicating and non-communicating scenarios.
§.§ Cooperative Multi-Agent Reinforcement Learning
Multi-Agent Reinforcement Learning (MARL) is a well-established framework for coordinating multiple agents in a shared environment. A rich literature <cit.> on cooperative MARL has been dedicated to coordinating agents that work towards a common objective and take actions that benefit all agents as a whole. To deal with the nonstationarity of the environment, most existing actor-critic methods use one or more fully centralized critics that observe the entire environment. For example, Multi-Agent Deep Deterministic Policy Gradient <cit.> trains each actor with only its local observation using the DDPG algorithm <cit.>, while its corresponding fully centralized critic can access the observations and actions of all agents. Instead of using multiple fully centralized critics, Counterfactual Multi-Agent Policy Gradient <cit.> uses only one fully centralized critic that learns to assign credit to agents and estimate Q-functions for all agents based on a counterfactual baseline that marginalizes out the action of each individual agent. However, it becomes increasingly difficult to perform such credit assignments for cases with large numbers of agents. Therefore, Multiple Actor Attention-Critic<cit.> deploys an attention mechanism for the fully centralized critic to selectively pay attention to relevant information from all agents. SACHA also uses a similar attention mechanism but differs from existing actor-critic methods by using a novel homogeneous agent-centered critic that only takes the local information from each agent as input for generalizability to different MAPF instances instead of a fully centralized critic specific to only one MAPF instance.
§ SACHA
We now provide a detailed description of the main components of SACHA. First, we describe how we use the shortest path distances of multiple agents as the cooperative guidance for each individual agent. Next, we explain how we integrate the shortest path distances into the commonly applied reward design. We also elaborate on our new model design, which includes attention mechanisms applied to both the actors and the critic and an optional communication module. Lastly, we discuss SACHA in the context of the multi-agent actor-critic learning framework.
§.§ Multi-Agent Shortest Path Distance Heuristic Maps
Empirical studies <cit.> have shown that shortest path distances from all vertices to each agent’s goal vertex can greatly benefit goal-oriented learning for the agent in partially observable environments. SACHA utilizes multi-agent shortest path distances to guide not only the achievement of single-agent goals but also better cooperation between agents. Specifically, a backward uniform-cost search is run from each agent’s goal vertex to all vertices in the given graph to generate the shortest path heuristic maps for the agent. The heuristic maps for all agents can be calculated offline once the graph and all goal vertices are given. They remain unchanged for the same MAPF instance during training and can be efficiently generated in advance for a new MAPF instance during execution. The time complexity for calculating the distances for M agents on graph G=(V, E) is 𝒪(M |E| log |V|). Many search-based and some recent learning-based MAPF methods <cit.> also use heuristic maps of shortest path distance, but only as single-agent guidance. SACHA gives each agent access to not only its heuristic map but also the heuristic maps of other agents within its FOV, which enables better cooperation. Fig. <ref> visualizes the heuristic maps that the orange agent has access to at the current time step. Since there are three agents within its FOV, three corresponding heuristic maps are input to the agent's policy network that we will describe later.
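The per-agent heuristic-map computation reduces to one backward search per goal; the following is a minimal sketch for a 4-connected grid, where unit edge costs let breadth-first search coincide with uniform-cost search (names are ours, not from the released code):

# Sketch: one backward BFS per agent from its goal over a 4-connected grid.
# grid[r][c] == 1 marks an obstacle; distances of unreachable cells stay inf.
from collections import deque

def heuristic_map(grid, goal):
    rows, cols = len(grid), len(grid[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] == float("inf")):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

# One map per agent, computed once per instance:
# maps = [heuristic_map(grid, g) for g in goals]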
§.§ Reward Design with Heuristic Maps
We design the reward function for each individual agent based on its heuristic map. We start with the individual reward function of DHC <cit.> as shown in Table <ref>. It follows the intuition that an agent is punished slightly for each time step before arriving at the goal, thereby encouraging it to reach its goal as quickly as possible. To improve the success rate of solving the global MAPF task, each agent is punished to a greater extent each time it collides with another agent or an obstacle. The agent receives a positive reward only when it arrives at its goal. Most existing learning-based MAPF methods follow the same design idea for their reward functions. However, the goal-conditioned reward in this reward design makes the training unstable and difficult to converge, especially for long-horizon tasks such as MAPF. Also, since an agent that is farther from its goal generally has a greater potential to collide with other agents, its move actions should not be rewarded the same as those of agents closer to their goals. Therefore, motivated by Heuristic-Guided Reinforcement Learning (HuRL) <cit.>, we reshape the reward function for each agent with an additional heuristic term. Given a transition tuple (s, a, r, s'), we reshape the reward as:
r̃_i(s, a) = r_i(s, a) + (1-λ) γ h_i(s'),
where h_i(s') is the negated normalized shortest path distance of the global state s' from the heuristic map of agent i. This heuristic term represents an a priori guess of the desired long-term return of an agent from state s' and thus serves as a horizon-based regularization. HuRL has been shown, both theoretically and empirically, to accelerate the learning process significantly by intrinsically reshaping the reward of every position for each agent. We set λ to 0.1 in the experiments.
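In code, the reshaping is a one-line adjustment applied to every transition before it enters the replay buffer; the sketch below uses illustrative names, and the discount factor is an assumed placeholder:

# Sketch of the reward reshaping above: h_next is the negated normalized
# shortest-path distance of the next state, read from agent i's heuristic map.
def reshape_reward(r, h_next, lam=0.1, gamma=0.99):
    """r_tilde(s, a) = r(s, a) + (1 - lam) * gamma * h(s')."""
    return r + (1.0 - lam) * gamma * h_next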
§.§ Model Design with Attention Mechanisms
We propose a novel model architecture based on the multi-agent actor-critic framework. Our model aims to achieve generalization across different instances by restricting the actor and the critic to operate only within the observation of each agent. At each time step t, we define an undirected observation graph G_t = (V, E_t), where V is the set of all agents and each edge in E_t indicates that the corresponding agents can observe each other. The time-varying graph G_t captures the dynamic correlation of agents in partially observable environments. We denote the subgroup centered on agent i as {i}∪𝒩_t(i), where 𝒩_t(i) is the set of the nearest K - 1 neighbors of node i inside its FOV. The observation of each agent i consists of a set of K feature maps ℱ_i = {ℱ_i^j}_j ∈{i}∪𝒩_t(i), where ℱ_i^j ∈ℝ^ℒ×ℒ× 3. Each feature map in the set corresponds to an agent in the subgroup of agent i and contains three channels: (1) a binary matrix that identifies the obstacles and the free space, (2) a binary matrix that marks the positions of agents, and (3) a heuristic channel that shows the normalized shortest path distances for each empty cell. K is set to 3 in our experiments.
We present the learning framework of SACHA in Figure <ref>. Given the observation features, the policy network starts with observation encoders with shared parameters. The encoders consist of several convolutional layers followed by a GRU <cit.> memory unit. The output set of K encodings is fed into the Multi-Head Attention (MHA) <cit.> module, which learns the interaction between agent i and its subgroup members by selectively attending to relevant information. The MHA module outputs a set of features whose sum is used as the observation representation, denoted as o_i, which is then passed to the decoders to generate the corresponding action vector a_i ∈ℝ^5. Each element in a_i represents the probability of choosing one of the five discrete actions {up, down, left, right, stay}.
We propose a novel agent-centered critic that evaluates each agent's action individually based on its local observation and information about its subgroup members. Unlike previous methods that use a centralized critic with global information, our critic leverages the attention mechanism to dynamically assign credit to each agent. We first pass the policy network's output through a linear function and then apply a multi-head attention module. The sum of the concatenated output vectors is then forwarded through the decoders to obtain the final Q-value, which is used to update the policy networks via the policy gradient method. Since our agent-centered critic only requires the local information of the central agent, it does not depend on any environment-specific information, and the learned policy network can thus generalize better.
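A minimal sketch of the actor's subgroup attention is given below; it omits the GRU memory unit and the critic, and the layer sizes are illustrative rather than the exact ones used in our implementation:

# Sketch: shared CNN encoder over the K subgroup feature maps, multi-head
# attention within the subgroup, and a decoder giving 5 action probabilities.
import torch
import torch.nn as nn

class SubgroupActor(nn.Module):
    def __init__(self, fov=9, dim=128, n_actions=5):
        super().__init__()
        self.encoder = nn.Sequential(              # shared across the K maps
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * fov * fov, dim), nn.ReLU())
        self.mha = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.decoder = nn.Linear(dim, n_actions)

    def forward(self, feats):                      # feats: (B, K, 3, fov, fov)
        b, k = feats.shape[:2]
        z = self.encoder(feats.flatten(0, 1)).view(b, k, -1)
        z, _ = self.mha(z, z, z)                   # attend within the subgroup
        o = z.sum(dim=1)                           # observation representation
        return torch.softmax(self.decoder(o), dim=-1)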
§.§ Optional Communication Module
Furthermore, we propose a communication-based variant of our method, named SACHA(C). To encourage better cooperation, our method should be able to take advantage of communication when it is allowed. We add an optional communication module after the multi-head attention block, as shown in Fig. <ref>. We first gather all observation representations {o_i}_i=1^M as the initialization H^(0) and then feed it into a multi-layer Graph Convolutional Network (GCN) <cit.>. Recall that M is the number of agents. Let A_t be the adjacency matrix of G_t and Ã_t = A_t + I_M. Define D̃_t as a diagonal matrix where D̃_ii = ∑_jÃ_ij. The output of the (l+1)-th layer is then:
H^(l+1) = σ(D̃_t^-1/2Ã_t D̃_t^-1/2 H^(l) W^(l)),
where σ(·) is the sigmoid function and W^(l) is a layer-specific trainable weight matrix. After the final GCN layer, we decompose the output into M corresponding vectors, {ô_i}_i=1^M, which are eventually decoded by each agent's network into action vectors as usual. We use a two-layer GCN in SACHA(C). The communication module is optional, but it allows agents to access information outside their local observation and hence achieve better cooperation.
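A single propagation step of this module can be written compactly, as in the following minimal sketch that takes the time-varying adjacency matrix A_t of the observation graph as input:

# Sketch of one GCN layer: H' = sigma(D^(-1/2) (A + I) D^(-1/2) H W).
import torch

def gcn_layer(H, A, W):
    """H: (M, d) agent features; A: (M, M) adjacency of G_t; W: (d, d')."""
    A_tilde = A + torch.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = A_tilde.sum(dim=1).pow(-0.5)      # diagonal of D^(-1/2)
    A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    return torch.sigmoid(A_hat @ H @ W)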
§.§ Soft Actor-Critic with Multi-Agent Advantage Function
SACHA updates the agent's policy network π parameterized by θ and the critic network parameterized by ψ simultaneously through the soft actor-critic framework. We let θ̅ and ψ̅ denote the moving averages of θ and ψ (the target parameters of the actor and the critic network), respectively. We first define the action-value temporal difference (TD) error for any experience, e = (s, a, r̃, s'), from the replay buffer D:
δ_i = Q_i^ψ(o_i, a_i) - r̃_i - γ𝔼_a_i' ∼π_θ̅(o_i')[ Q_i^ψ̅(o_i', a_i') - αlog(π_θ̅(a_i'|o_i')) ],
where α is the temperature parameter that decides the weight of the entropy term in the soft actor-critic framework <cit.>. SACHA runs the critic network through every agent-centered subgroup and updates it by minimizing the mean square error loss function:
L_Q(ψ) = 𝔼_e ∼ D∑_i=1^M(e)δ_i^2/M(e),
where M(e) is the number of agents in e. On the other hand, the actors update their underlying policy networks by the policy gradient via the Q-values from the critic network:
∇_θ_i J (θ) = 𝔼_o_i ∼ D, a_i ∼π_θ_i(o_i)[ ∇_θ_ilog(π_θ_i(a_i | o_i)) (Q_i^ψ(o_i, a_i) - b(o_i, a_∖ i) - αlog(π_θ_i(a_i | o_i))) ],
where ∇_θ_ilog(π_θ_i(a_i | o_i)) is the score function and Q_i^ψ(o_i, a_i) - b(o_i, a_∖ i) is the multi-agent advantage function. Inspired by COMA <cit.>, SACHA adopts the counterfactual baseline in a discrete action space as follows, which marginalizes out the specific action of agent i:
b(o, a_∖ i) = ∑_a'_i ∈𝒜_iπ(a'_i|o_i) Q_i^ψ(o_i, (a'_i, a_∖ i)),
where a_∖ i∈𝒜_∖ i = ∏_j ≠ i𝒜_j is the combination of actions from all agents except for i and 𝒜_i is each agent's action space. This baseline specifically compares an action to other actions of agent i by fixing the actions of all other agents and invoking the critic network |𝒜_i| times. At each iteration, we collect these M locally updated policies and aggregate them into the new policy by averaging: θ^(t+1) = ∑_i=1^Mθ_i^(t)/M, where θ_i^(t) = θ^(t) + η∇_θ_i J (θ^(t)) for a learning rate η. The policy and the critic network are updated together iteratively to reach fast and stable convergence.
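The baseline requires |𝒜_i| critic evaluations per agent, which the following sketch makes explicit; the critic callable and its interface are hypothetical stand-ins for the agent-centered critic network, not the released implementation:

# Sketch: counterfactual baseline
# b(o, a_-i) = sum_a' pi(a'|o_i) * Q_i(o_i, (a', a_-i)).
# critic is a hypothetical stand-in returning a scalar Q-value tensor.
import torch

def counterfactual_baseline(critic, o_i, pi_i, a_others, n_actions=5):
    q_values = torch.stack([critic(o_i, (a, a_others))   # |A_i| critic calls
                            for a in range(n_actions)])
    return (pi_i * q_values).sum()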
§ ANALYSIS
In this section, we analyze the effectiveness of the policy gradient in our multi-agent actor-critic framework. Most of the existing learning-based MAPF methods use IQL, in which each agent treats others as part of the environment and updates the global policy θ as follows:
∇_θ J (θ) = 𝔼_s ∼ D, a ∼π_θ[∇_θlog(π_θ(a | s)) Q(s, a)],
where s and a denote the joint state and joint action, respectively. Now we show that this gradient is equivalent to Eq. (<ref>). We omit the entropy term here, but the proof can be easily extended to include it.
Since each agent acts on its policy independently, we have π_θ(a|s) = ∏_i π_θ_i(a_i|s). We represent the stationary distribution induced by π_θ as d_θ(s), meaning the probability of being in the state s by following π_θ. Following the proof in <cit.>, we get:
∇_θ_i J (θ)
= ∑_s ∈ D d_θ(s) ∑_a ∈𝒜π_θ(a | s) ∇_θ_i[ ∑_j=1^M log(π_θ_j(a_j | s)) ] Q(s, a)
= ∑_s ∈ D d_θ(s) ∑_a ∈𝒜π_θ(a|s) ∇_θ_ilog(π_θ_i(a_i | s)) Q(s, a)
= ∑_s ∈ D d_θ(s) ∑_a ∈𝒜[ ∏_j ≠ iπ_θ_j(a_j | s) ] ∇_θ_iπ_θ_i(a_i | s) Q(s, a).
To proceed with the proof, we consider the following equation:
∑_a ∈𝒜[ ∏_j ≠ iπ_θ_j(a_j | s) ] ∇_θ_iπ_θ_i(a_i | s) F(s, a_∖ i) = ∑_a_∖ i∈𝒜_∖ i[ ∏_j ≠ iπ_θ_j(a_j | s) ] F(s, a_∖ i) [ ∇_θ_i∑_a_i ∈𝒜_iπ_θ_i(a_i | s) ] = 0,
which holds because ∑_a_i ∈𝒜_iπ_θ_i(a_i | s) = 1, so its gradient with respect to θ_i vanishes.
This will stay true as long as F(s, a_∖ i) is a function independent of a_i. Let F(s, a_∖ i) = -[Q(s, a_∖ i) + b(s, a_∖ i)] and combine it with the equation above:
∇_θ_i J (θ) = ∑_s ∈ D d_θ(s) ∑_a ∈𝒜[ ∏_j ≠ iπ_θ_j(a_j | s) ] ∇_θ_iπ_θ_i(a_i | s) ·
[Q(s, a) - Q(s, a_∖ i) - b(s, a_∖ i)]
= ∑_s ∈ D d_θ(s) ∑_a ∈𝒜[ ∏_j ≠ iπ_θ_j(a_j | s) ] ∇_θ_iπ_θ_i(a_i | s) ·
[Q_i(s, a_i) - b(s, a_∖ i)]
= ∑_s ∈ D d_θ(s) ∑_a ∈𝒜π_θ(a|s) ∇_θ_ilog(π_θ_i(a_i | s)) ·
[Q_i(s, a_i) - b(s, a_∖ i)].
Here, we prove that the policy gradient with respect to each θ_i can be obtained locally using the corresponding score function, ∇_θ_ilog(π_θ_i(a_i | s)). By averaging θ_i^(t) from all agents updated by Q_i(o_i, a_i) - b(o_i, a_∖ i), we obtain the same effect as updating the global θ based on Q(s, a). Therefore, our method is as effective as IQL, but with a faster convergence rate due to parallel updates among all agents. Moreover, our framework for learning a homogeneous policy from local observation in the multi-agent learning framework is not restricted to MAPF and can potentially be applied to other MARL tasks, especially ones in partially observable settings.
§ EXPERIMENTS
In this section, we implement our methods[The code is available at https://github.com/Qiushi-Lin/SACHA.] and experimentally evaluate them with other methods on a server equipped with an Intel 2.3GHz 16-Core CPU and an NVIDIA A40 GPU.
§.§ Environment Setups
Training Environments:
As mentioned above, our model is trained using the multi-agent actor-critic learning framework. Not only do all agents' policy networks share parameters, but the critic networks applied to each subgroup of agents are also homogeneous. We train our model over random grid maps of different sizes with randomly generated obstacles. The obstacle density is sampled from a triangular distribution between 0 and 0.5 with its peak value at 0.33. To compare fairly with other decentralized MAPF planners, each agent's policy network uses a fixed 9 × 9 square FOV, the same as DHC and DCC, regardless of the environment size. Inspired by curriculum learning <cit.>, we design a training pipeline that starts with only 2 agents on 10 × 10 grid maps and gradually increases the number of agents and the size of the map once the success rate reaches a certain threshold. Increasingly complicated tasks are added to the training pool until the map size exceeds 100 × 100 or the number of agents exceeds 72.
Testing Environments:
We test our methods over a variety of maps all from the standard benchmark <cit.>. We first select two random maps (32 × 32 and 64 × 64) with uniformly distributed obstacles. Besides, we also use two game maps, den312d (65 × 81) and warehouse (161 × 63). The start-goal pairs of agent locations are randomly generated with the guaranteed existence of solutions. The number of agents is chosen from {4, 8, 16, 32, 64} respectively. The maximum time step is 256 for random32, random64, and den312d, and 512 for warehouse. For cases that cannot be solved successfully within the time horizon or the runtime limit, we count each agent's step as the maximum time step.
§.§ Baselines
Learning-Based Methods:
We compare our methods with several state-of-the-art decentralized learning-based MAPF methods summarized in Table <ref>. PRIMAL uses expert demonstrations from centralized MAPF planners to train its model. The expert demonstration helps speed up the training process but is very time-consuming to generate and requires global information about the specific environment, which limits generalization to unseen instances. DHC adopts IQL along with single-agent heuristic guidance and a broadcast communication mechanism; the resulting model performs better than PRIMAL without any expert data. DCC improves on DHC by learning selective communication with a decision causal unit, which can filter out redundant messages and focus on relevant information. This also reduces the communication frequency significantly.
Search-Based Methods:
Furthermore, we also compare our method with centralized planners. We use two optimal but computationally expensive search-based methods for comparison: Conflict-Based Search (CBS) <cit.> and ODrM^* <cit.>. CBS performs a best-first search that expands the constraint tree by adding constraints to each agent involved in every conflict, while ODrM^* is a suboptimal planner based on M^* that applies Operator Decomposition (OD) to split agents into independent conflict sets and thus reduce the complexity of joint planning. We also use Priority-Based Search (PBS) <cit.>, which searches for agent orderings to be used in prioritized planning, making the solver incomplete but efficient. To simulate centralized planning in the same partially observable setting, we use windowed PBS (wPBS) <cit.>, which only avoids conflicts within a bounded time horizon. We set the time window length of wPBS equal to the width of the FOV to simulate the partially observable environments. We set the runtime limit of CBS and wPBS to 120 seconds and that of ODrM^* to 20 seconds.
§.§ Empirical Results
We evaluate the performance of the MAPF methods based on two widely-used metrics, success rate and average step per agent. Success rate measures the ability to solve the given instances within the runtime limit, whereas the average step per agent measures the quality of the solutions over a given set of instances. We test our approach along with multiple baselines in around 300 MAPF instances for each map with different numbers of agents.
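The following minimal sketch makes the failure-counting convention explicit (failed instances are charged the maximum time step, as noted above); the per-instance result format is an assumption for illustration.

def summarize(results, max_step):
    # results: list of (solved, avg_steps) per instance; failed instances
    # are charged the maximum time step for every agent.
    success_rate = sum(1 for ok, _ in results if ok) / len(results)
    avg_step = sum(steps if ok else max_step for ok, steps in results) / len(results)
    return success_rate, avg_step

# e.g. three instances on a map with a 256-step horizon:
print(summarize([(True, 41.2), (True, 57.8), (False, 0.0)], max_step=256))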
Fig. <ref> shows the success rate of our methods compared with all other baselines over four different MAPF maps. Within the time limit, all decentralized planners have a remarkable advantage in success rate. Even when including the precomputation time of the shortest-path heuristics, decentralized planners find solutions much faster than centralized ones. Among decentralized planners, PRIMAL tends to produce the worst solutions, especially on the two game maps, which indicates that learning from expert data does not easily generalize to instances with different numbers of agents and maps with different structures. DHC and DCC have an advantage over PRIMAL thanks to the shortest-path heuristics and the communication mechanism, although our methods outperform both of them in most cases. The advantages are more pronounced on maps with higher obstacle density and more agents, where more cooperation is demanded.
Table <ref> reports the average step per agent required to reach the goals over multiple instances. If a planner exceeds the runtime limit in some case, we count it as a failure spending the maximum time horizon. The struck-out entries at the maximum time step indicate zero success. When compared to search-based planners with relatively small numbers of agents, as expected, learning-based methods cannot provide comparable results. However, as the number of agents grows, the search-based methods become significantly more time-consuming than learning-based methods and their success rate drops rapidly due to the runtime limit. The two communication-based methods, DHC and DCC, achieve better solution quality than PRIMAL in all cases. However, SACHA and SACHA(C) outperform them by a decent margin in most instances, with and without the communication block, which demonstrates their advantages over other learning-based methods. It is worth mentioning that SACHA(C) generally performs better than SACHA in instances with larger numbers of agents, where communication can be rather helpful.
As reported in <cit.>, DHC always performs better than its variant without the communication unit. Since SACHA outperforms DHC, it also outperforms DHC without communication, which shows that our method serves non-communicating scenarios well. Besides, DHC can essentially be viewed as training our communication-based model without the attention block via independent Q-learning; therefore, the fact that SACHA(C) outperforms DHC demonstrates the strength of the heuristic-based attention mechanism and the multi-agent learning framework. Overall, our methods have a better chance of solving given MAPF instances, with and without communication, and among the solved instances they produce solutions of better quality.
§ CONCLUSION
In this paper, we introduced SACHA, a novel approach for learning cooperative policies for MAPF and potentially other MARL problems in partially observable environments. SACHA combines a multi-agent soft actor-critic that maximizes both expected reward and entropy with heuristic-based attention mechanisms that enhance the network architectures of both the actor and the critic. Specifically, we proposed to augment each agent's local observation with heuristic guidance from other agents and to use an attention module that learns to focus on the most relevant information for each agent to avoid collisions and achieve its goal. To the best of our knowledge, we are the first to apply the soft actor-critic with a novel agent-centered critic to homogeneous MARL settings with partial observability and to incorporate heuristic guidance and attention in the agent's policy network. We evaluated our method on various MAPF benchmarks and showed that it outperforms existing baselines in almost all cases in terms of success rate and solution quality.
|
http://arxiv.org/abs/2307.00531v1
|
20230702095105
|
Muon collider signatures for a $Z'$ with a maximal $μ-τ$ coupling in $U(1)_{L_μ-L_τ}$
|
[
"Jin Sun",
"Fei Huang",
"Xiao-Gang He"
] |
hep-ph
|
[
"hep-ph"
] |
^1Tsung-Dao Lee Institute, and School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China
^2National Center for Theoretical Sciences, and Department of Physics, National Taiwan University, Taipei 10617, Taiwan
^3Center for Theoretical Physics of the Universe, Institute for Basic Science, Daejeon 34126, Korea
^4School of Physics and Technology, University of Jinan, Jinan, Shandong 250022, China
The gauged U(1)_L_μ - L_τ model is a candidate model for explaining the muon g-2 anomaly because its Z' has a natural coupling to the muon. Constraints from other experimental data restrict the viable mass range of the usual Z' to below a few hundred MeV. It has been shown that if the Z' has a maximal off-diagonal mixing,
(μ̅γ^μτ + τ̅γ^μμ) Z'_μ, a large Z' mass is possible. This class of models has very distinctive detection signatures at a muon collider, such as μ^-μ^+ →τ^- τ^+ pair production and μ^- μ^+ →μ^±μ^±τ^∓τ^∓. In this work we study these processes in detail. We find that, in the parameter space solving the muon g-2 anomaly, t-channel τ^- τ^+ pair production can easily be distinguished at more than the 5σ level from the s-channel production predicted in the standard model. The smoking-gun signature of doubly same-sign μ^±μ^± + τ^∓τ^∓ pair production can reach a 5σ sensitivity at a 3 TeV muon collider with 𝒪(fb^-1) luminosity.
Muon collider signatures for a Z' with a maximal μ - τ coupling in U(1)_L_μ - L_τ
Xiao-Gang He^1,2[hexg@sjtu.edu.cn]
August 1, 2023
===================================================================================
§ INTRODUCTION
A muon collider with multi-TeV energy and high luminosity of order 𝒪(fb^-1) will provide excellent opportunities to search for new particles and new interactions beyond the standard model (SM). The search for a new massive gauge boson, generically referred to as Z', is one of the interesting topics in this regard. Among the many Z' models, the gauged L_μ-L_τ model has received a lot of attention due to the possibility of explaining the muon g-2 anomaly via Z' exchange <cit.>. Although this anomaly may be due to theoretical uncertainties from QCD contributions, the U(1)_L_μ-L_τ Z' can explain the discrepancy <cit.> in certain regions of parameter space. In this paper we study how a muon collider can provide useful information on the Z' in one such model based on the gauged U(1)_L_μ - L_τ.
It has been shown that, in order to explain the muon (g-2)_μ, a strong constraint on the L_μ-L_τ model comes from the neutrino trident process ν_μ + N → ν_μ + N + μ^+ μ^- <cit.>, given the interaction (μ̅γ^μμ - τ̅γ^μτ + ν̅^μ_L γ^μν^μ_L - ν̅^τ_L γ^μν^τ_L)Z'_μ, which severely constrains the Z' mass to m_Z' < 300 MeV <cit.>.
Recently a new mechanism has been proposed to widen the Z' mass range for resolving the muon (g-2)_μ anomaly <cit.>, which transforms the original flavor-conserving Z' interaction into the fully off-diagonal one,
(μ̅γ^μτ+ τ̅γ^μμ + ν̅^μ_L γ^μν^τ_L + ν̅^τ_L γ^μν^μ_L)Z'_μ,
to evade the constraint from the neutrino trident process and therefore lift the upper bound on the Z' mass. In this case, Z' exchange induces τ→μν̅ν with a branching ratio larger than allowed by experimental data. This conflict can be resolved by introducing type-II seesaw SU(2)_L triplet scalars.
Because of the model's special coupling to the muon, there are very distinctive signatures at a muon collider, such as t-channel production of μ^-μ^+ →τ^- τ^+, which in the SM is an s-channel process, and the smoking-gun signature of doubly same-sign muon and tau pair production, μ^- μ^+ →μ^±μ^±τ^∓τ^∓.
In the following sections, we present our findings in detail.
§ THE U(1)_L_Μ-L_Τ MODEL FOR MAXIMAL Μ-Τ COUPLING
In the simplest U(1)_L_μ - L_τ model, the left-handed SU(3)_C× SU(2)_L× U(1)_Y doublets L_L i: (1, 2, -1/2) and the right-handed singlets e_R i: (1,1,-1) transform under the gauged U(1)_L_μ-L_τ group as 0, 1, -1 for the first, second and third generations, respectively. The Z^' gauge boson of the model only interacts with leptons in the weak interaction basis <cit.>
L_Z'=- g̃ (μ̅γ^μμ - τ̅γ^μτ + ν̅_μγ^μ L ν_μ - ν̅_τγ^μ L ν_τ) Z^'_μ ,
where g̃ is the U(1)_L_μ - L_τ gauge coupling, and L(R) = (1 - (+)γ_5)/2.
Introducing a scalar S that is a singlet under the SM gauge group but carries U(1)_L_μ - L_τ charge 1, the Z' obtains a mass m_Z' = g̃ v_S once S develops a vacuum expectation value v_S/√(2).
Z' exchange at one-loop level can generate a non-zero anomalous muon magnetic moment that can explain the observed anomaly. However, Z' exchange also contributes to the neutrino trident process ν_μ + N → ν_μ + N + μ^+ μ^-. Neutrino trident data then constrain the Z' mass to be less than 300 MeV <cit.>.
To avoid the neutrino trident constraint on the Z' interaction <cit.>, one introduces new scalar particles to make the Z' coupling to the muon and tau off-diagonal, so that the neutrino trident process does not occur at tree level. Such a model was proposed a long time ago with the help of three Higgs scalars <cit.>.
We briefly outline the steps to obtain such a model. One needs to introduce three Higgs doublets H_1,2,3: (1,2, 1/2) (<H_i> = v_i/√(2)) with U(1)_L_μ - L_τ charges (0, 2, -2)
and to impose an unbroken exchange symmetry Z^'→ - Z^', H_1 ↔ H_1 and H_2 ↔ H_3 with v_2=v_3=v, to do the job. In this case the Z^' interaction and Yukawa terms to leptons are given by
L _H= - g̃ (l̅_2 γ^μ L l_2- l̅_3 γ^μ L l_3 + e̅_2 γ^μ R e_2 - e̅_3 γ^μ R e_3) Z^'_μ
- [Y^l_11l̅_1 R e_1 + Y^l_22 (l̅_2 R e_2 +l̅_3 R e_3 ) ] H_1
-Y^l_23 (l̅_2 R e_3 H_2 +l̅_3 R e_2 H_3 ) + H.C.
The transformation between the charged-lepton mass-eigenstate and weak-eigenstate bases is given by
([ μ; τ ]) = 1/√(2) ([ 1 -1; 1 1 ]) ([ e_2; e_3 ]) ,
and a similar transformation applies to the neutrinos. In the new basis, the Z^' interactions with leptons take the desired form,
L_Z'= - g̃ (μ̅γ^μτ + τ̅γ^μμ + ν̅_μγ^μ L ν_τ + ν̅_τγ^μ L ν_μ) Z'_μ .
The above Z' interaction leads to a much larger τ→μν̅_μν_τ branching ratio, which is excluded by the experimental value at more than 5σ if the muon g-2 anomaly is explained by one-loop Z' exchange. Therefore one needs to reduce the branching ratio of this decay to satisfy the experimental constraint while still addressing the (g-2)_μ anomaly.
This problem can be solved by introducing three Y=1 triplet scalars Δ_1,2,3: (1,3,1) (<Δ_i> = v_Δ i/√(2)) with U(1)_L_μ-L_τ charges (0,-2,2) <cit.>.
These Δ fields realize the well-known type-II seesaw mechanism providing small neutrino masses <cit.>, with the component fields
Δ = ([ Δ^+/√(2) Δ^++; Δ^0 - Δ^+/√(2) ]) , Δ^0 = (v_Δ + δ + iη)/√(2) .
Under the above exchange symmetry Δ_1 ↔Δ_1, Δ_2 ↔Δ_3 with v_Δ 2 = v_Δ 3, the Yukawa terms in the basis shown in Eq.(<ref>) are
L_Δ = - [l̅^c_μ L l_μ (Y^ν_22( Δ_2 +Δ_3) - 2 Y^ν_23Δ_1)
+ l̅^c_τ L l_τ (Y^ν_22 (Δ_2 +Δ_3) + 2 Y^ν_23Δ_1)
+ 2l̅^c_μ L l_τ (Y^ν_22( Δ_2 -Δ_3))
]/2 + H.C. .
Expanding out the above interaction in terms of the component fields Δ^0, +, ++, we obtain
L_Δ = - (ν̅_e^c, ν̅_μ^c, ν̅_τ^c) M(Δ^0) L ([ ν_e; ν_μ; ν_τ ])
+ √(2) (ν̅_e^c, ν̅_μ^c, ν̅_τ^c) M(Δ^+) L ([ e; μ; τ ])
+ (e̅^c, μ̅^c, τ̅^c) M(Δ^++) L ([ e; μ; τ ]) ,
M(Δ) = ( [ Y^ν_11Δ_1 0 0; ; 0 (Y^ν_22(Δ_2 +Δ_3) - 2 Y^ν_23Δ_1)/2 Y^ν_22(Δ_2-Δ_3)/2; ; 0 Y^ν_22(Δ_2-Δ_3)/2 (Y^ν_22(Δ_2 +Δ_3) + 2 Y^ν_23Δ_1)/2 ] ) .
Here we find that, assuming the degenerate case Δ_2=Δ_3 required by the exchange symmetry Δ_2 ↔Δ_3 and Y_11,23<<Y_22, M(Δ) is a diagonal matrix with non-zero entries M_22≈ M_33. This helps simplify our model analysis.
Therefore, the scalar sector includes three Higgs doublets and three triplet scalars. To simplify the analysis, we make the following assumptions: Y_11,23<<Y_22, the degenerate case m_Δ 2=m_Δ 3, and that the other new degrees of freedom are heavier. Under these assumptions, the new scalar effects on SM particles are dominated by the Δ_2 interaction terms. Further assuming degenerate triplet components, we have m_Δ^++= m_Δ^+= m_Δ^0 = m_Δ. In this case,
for Δ m=0 with large v_Δ∼ O(), a doubly-charged scalar mass below 420 GeV has already been excluded by collider constraints <cit.>. Therefore, we focus on the scenario with m_Δ > 420 GeV.
Note that our model contains both the fully flavor-changing Z' interactions and the Δ-mediated flavor-conserving interactions. The relevant low-energy phenomenology has been studied in Ref. <cit.>; here we mainly focus on the muon collider aspects. We first study the two-body process μ^-μ^+ →τ^- τ^+ and then the four-body process μ^-μ^+ →μ^±μ^± + τ^∓τ^∓. The effects can be expressed in terms of four model parameters: g̃ and m_Z' from U(1)_L_μ-L_τ, and Y_22 and m_Δ from the triplet scalar. The total contributions to the above processes contain both the Z' effects, governed by g̃ and m_Z', and the triplet ones, governed by Y_22 and m_Δ; which of the two dominates depends on the choice of the four parameters. In the following we carry out a numerical analysis of these processes at a multi-TeV muon collider.
§ SIGNATURES IN Μ^-Μ^+ →Τ^-Τ^+ AND Μ^-Μ^+ →Μ^±Μ^± + Τ^∓Τ^∓
The maximal μ-τ interaction of the Z' contributes to μ^-μ^+ →τ^- τ^+ via the t-channel, in contrast to the SM s-channel contributions shown in Fig. <ref>. Besides, the Z' interaction can produce the distinctive signature μ^-μ^+ →μ^±μ^± + τ^∓τ^∓ shown in Fig. <ref>, which serves as a smoking gun for the model studied here.
For a multi-TeV muon collider, the luminosity scales quadratically with energy <cit.> as
L ≥ (√(s)/10 TeV)^2 × 2× 10^35 cm^-2 s^-1 .
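Evaluated at the 3 TeV energy used throughout this work, the scaling law gives the following short Python estimate; the effective running time of 10^7 s per year is our assumption for illustration.

def lumi_scaling(sqrt_s_tev):
    # L >= (sqrt(s) / 10 TeV)^2 * 2e35 cm^-2 s^-1, the scaling law above
    return (sqrt_s_tev / 10.0) ** 2 * 2e35

inst = lumi_scaling(3.0)               # ~1.8e34 cm^-2 s^-1 at sqrt(s) = 3 TeV
seconds = 5 * 1e7                      # 5 years at an assumed 1e7 s of running per year
print(inst * seconds / 1e42, "ab^-1")  # 1 ab^-1 = 1e42 cm^-2; gives ~0.9 ab^-1

This is consistent with the 1 ab^-1 benchmark quoted next.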
In particular, the benchmark choice of collider energy and the corresponding integrated luminosity within 5 years is
√(s) = 3 TeV ⟶ L = 1 ab^-1 .
A very good identification of the leptons is a basic ingredient of many analyses at colliders. In particular τ-leptons, the most difficult leptons to identify, are expected to be produced in several interesting physics channels. Tau tagging identifies jets likely to originate from a tau lepton via its hadronic decay modes. In the SM, a tau decays hadronically with a probability of 65%, producing a tau-jet mostly containing neutral and charged pions. In our case, with a pair of taus in the final state, 42% of the events contain two tau-jets. Hadronic tau decays have low charged-track multiplicity (one or three prongs) and a sizable fraction of electromagnetic energy deposition due to photons from neutral-pion decays. Moreover, when the tau momentum is large compared to its mass, the tau-jets are highly collimated and produce localized energy deposits in the electromagnetic and hadronic calorimeters. These characteristics can be exploited to enhance the identification of hadronic tau decays.
At the muon collider, the muon tagging efficiency is 100% for |η|<2.5 <cit.>, and the τ tagging efficiency is 80% for p_T > 10 GeV, as defined in the Delphes cards <cit.>.
To obtain simulation data for the analysis, we implement the interactions of the model in FeynRules <cit.> and generate a Universal FeynRules Output (UFO) model <cit.> for the model Lagrangian. The UFO model is then fed into MadGraph5-aMC@NLO <cit.> for all simulations, with PYTHIA 8 <cit.> used for showering and hadronization and DELPHES <cit.> for a fast detector simulation.
§.§ μ^+ μ^- →τ^+ τ^- analysis
To extract the signal, we need a good understanding of the background. The SM backgrounds for μ^+ μ^- →τ^+ τ^- are shown in Table <ref>. Here we use the following basic cuts <cit.>:
(i) transverse momentum p_T>10 GeV,
(ii) absolute pseudo-rapidity |η|<2.5,
(iii) the separation of the two leptons Δ R=√((Δη)^2 +(Δϕ)^2)>0.4.
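A minimal sketch of these three cuts in Python (the lepton record layout is an assumption for illustration):

import math

def passes_basic_cuts(l1, l2, pt_min=10.0, eta_max=2.5, dr_min=0.4):
    # l1, l2: dicts with pt [GeV], eta, phi for the two leptons.
    if min(l1["pt"], l2["pt"]) < pt_min:
        return False
    if max(abs(l1["eta"]), abs(l2["eta"])) > eta_max:
        return False
    dphi = (l1["phi"] - l2["phi"] + math.pi) % (2.0 * math.pi) - math.pi  # wrap to (-pi, pi]
    return math.hypot(l1["eta"] - l2["eta"], dphi) > dr_min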
In addition to the s-channel scattering process with SM mediators γ, Z, h, there are also backgrounds arising from the blind regions of the detector:
τ^+τ^-γ, h/Z/γ (→τ^+ τ^-) νν̅, and W^+(→τ^+ ν_τ)W^-(→τ^- ν̅_τ).
The total background cross section is 0.086304 pb, as shown in Fig. <ref>.
It can be suppressed to 0.0144 pb by tightening the cut to P_T > 250 GeV, as shown in Table <ref>.
For the effects of the model under consideration, we obtain the corresponding cross sections using the model parameters described earlier.
Based on the analysis in Ref. <cit.>, we use g̃/m_Z' = (0.55, 0.89)× 10^-3 GeV^-1 and Y_22/m_Δ = (0.26, 1.42)× 10^-3 GeV^-1 to resolve the muon (g-2)_μ anomaly while satisfying the other experimental constraints.
LHC searches for a new Z' gauge boson in four-muon (4μ) final states exclude coupling strengths g̃ above 0.003-0.2 for Z' masses ranging from 5 to 81 GeV at ATLAS <cit.> and above 0.004-0.3 for Z' masses ranging from 5 to 70 GeV at CMS <cit.>. In fact, the direct LHC constraints from simple resonance searches like pp → Z' → ll/jj are not applicable in our case, since the Z' does not couple to quarks at tree level. Moreover, flavor-violating Z' searches at the LHC have so far only focused on the eμ channel <cit.>. Even if the above constraints were applicable to our case, choosing m_Z' > 81 GeV, which is around the electroweak scale, evades the bounds. At the electroweak scale, the U(1)_L_μ-L_τ Z' has been shown to be allowed by experimental data <cit.>. Therefore, we focus on m_Z' ≥ 100 GeV.
In order to satisfy the muon (g-2)_μ and other experimental constraints, we choose the triplet scalar parameters as <cit.> m_Δ = 450 GeV and |Y_22| = 0.117. In this case, we find that the triplet effects alone yield a cross section of σ = 0.00944 pb, a contribution of only around 1% in the large Z'-mass case.
We plot the cross section of the flavor-changing process μ^+ μ^- →τ^+ τ^- in Fig. <ref>.
In this figure, the left panel shows the cross section as a function of the Z' mass and the right panel as a function of the ratio g̃/m_Z'.
The left panel gives the allowed ranges of the cross section for different parameters g̃ and m_Z' with the basic cuts. We find that the cross section depends strongly on the Z' mass and the gauge coupling g̃. For small Z' masses, the total cross section is smaller than the SM contribution of σ = 0.086304 pb, because of destructive interference between the SM and Z' amplitudes. To further reduce the SM background, we impose the cut P_T > 250 GeV, as shown in Table <ref>. The right panel then shows the cross section as a function of g̃/m_Z' for two different Z' masses.
The influence of new physics is quantified by the ratio (σ-σ_SM)/σ_SM, which is translated into the luminosity necessary to discover a given scenario via the test statistic S/√(S+S_0). Here S = L × (σ-σ_SM) is the new-physics signal and S_0 = L × σ_SM is the background (L is the luminosity).
Requiring S/√(S+S_0) > 3 (5), we can assign a rough discovery luminosity to each case. The relevant information is shown in Table <ref>.
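Solving S/√(S+S_0) = N for L gives L = N^2 σ / (σ - σ_SM)^2, which the short sketch below evaluates; the signal cross section in the example is an arbitrary illustrative number, not one of the values in Table <ref>.

def discovery_luminosity(sigma_pb, sigma_sm_pb, n_sigma=5.0):
    # S = L*(sigma - sigma_sm) and S0 = L*sigma_sm, so S + S0 = L*sigma and
    # S/sqrt(S+S0) = n_sigma implies L = n_sigma^2 * sigma / (sigma - sigma_sm)^2.
    lumi_pb_inv = n_sigma**2 * sigma_pb / (sigma_pb - sigma_sm_pb) ** 2
    return lumi_pb_inv / 1000.0          # 1 pb^-1 = 1e-3 fb^-1

# illustrative: a total cross section of 0.05 pb against the 0.0144 pb background
print(discovery_luminosity(0.05, 0.0144), "fb^-1")   # ~1 fb^-1 for 5 sigma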
Based on these values, we obtain the cross section for the U(1)_L_μ-L_τ model with a Y=1 triplet.
We can then vary the parameters to obtain the corresponding cross sections and event counts.
Note that the two values of g̃ for each fixed m_Z' correspond to the lower and upper bounds, respectively.
For example, in the m_Z' = 100 GeV case, the lower bound g̃ = 0.055 generates 1400 events and the upper bound g̃ = 0.089 generates 10000 events at the luminosity of 1 ab^-1 required in Eq. <ref>.
The above analysis is based on the choice m_Δ = 450 GeV, |Y_22| = 0.117. We also study the contributions from triplet effects for different parameters, as shown in Fig. <ref>,
which plots the cross section of the flavor-changing μ^+ μ^- →τ^+ τ^- process for different m_Δ with the basic cuts and P_T > 250 GeV. Here we choose the lower bound Y_22/m_Δ = 0.26× 10^-3 GeV^-1 and fix g̃/m_Z' = 0.55× 10^-3 GeV^-1 for three different Z' masses. We find that the cross section varies only slightly as the Δ mass increases.
The corresponding luminosities and event counts are shown in Table <ref>. The cross section is around 10^-3 pb for the above two lower bounds, much smaller than the large-m_Z' effects.
This means the choice of triplet parameters does not significantly affect the cross section.
Therefore, we use the parameters m_Δ = 450 GeV, |Y_22| = 0.117 in the following analysis.
To further investigate the prospects for detecting the flavor-changing process μ^+μ^- →τ^+τ^- at a future muon collider, we present the required luminosity for the U(1)_L_μ-L_τ model with a Y=1 triplet scalar in Fig. <ref>, for significances of 3σ and 5σ. The left panel shows the integrated luminosity as a function of the Z' mass for the fixed lower bound g̃/m_Z' = 0.55× 10^-3 GeV^-1.
We find that the required luminosity drops rapidly from 𝒪(100) fb^-1 to 𝒪(0.02) fb^-1 as m_Z' increases from 100 GeV to 600 GeV.
The right panel shows the relation between the luminosity and the ratio g̃/m_Z'; the luminosity likewise falls rapidly as g̃/m_Z' increases.
Raising the ratio to the upper bound g̃/m_Z' = 0.89× 10^-3 GeV^-1 increases the cross section, so the required luminosity decreases by around an order of magnitude.
§.§ μ^-μ^+→μ^±μ^±τ^∓τ^∓ analysis
Note that our model produces the distinctive final state μ^±μ^±τ^∓τ^∓.
This signal is very clean and effectively background-free. Although tau reconstruction poses some practical challenges, the same-sign four-lepton final state provides a `smoking gun' signal for our scenario.
This unique signature has two different sources: μ^+μ^- → h^*/γ^*/Z^* →μ^±τ^∓ + (Z'→μ^±τ^∓) and μ^+μ^- →Δ^++Δ^--→μ^±μ^±τ^∓τ^∓. Both sources contribute to the cross section of the same-sign lepton-pair final state, and we consider their effects simultaneously to identify the dominant contribution.
We simulate these processes to estimate the sensitivity reach at a √(s) = 3 TeV muon collider. Owing to the negligible SM background, we impose the basic trigger cuts from the previous μ^+ μ^- →τ^+ τ^- analysis rather than an additional P_T cut. In addition,
the leading lepton must satisfy the transverse-momentum cut p_T > 20 GeV, while the sub-leading leptons are required to satisfy a milder cut p_T > 15 GeV.
These values are set to be as inclusive as possible for an optimistic analysis.
We study the triplet effects on the cross section of the flavor-changing μ^±μ^±τ^∓τ^∓ process, as shown in Fig. <ref>, which plots the cross section versus m_Δ for the fixed lower bound Y_22/m_Δ = 0.26× 10^-3 GeV^-1. We find that the triplet effects grow as the Δ mass increases.
This is because, for a fixed ratio Y_22/m_Δ, increasing m_Δ implies a larger coupling Y_22. Moreover, the amplitude for μ^+μ^- →Δ^++Δ^--→μ^±μ^±τ^∓τ^∓ is proportional to |Y_22|^2/[(s_1-m_Δ^2)(s_2-m_Δ^2)], and increasing m_Δ^2 makes (s_1-m_Δ^2)(s_2-m_Δ^2) smaller in the kinematic region, which raises the cross section as shown in Fig. <ref>.
This means the earlier triplet parameters m_Δ = 450 GeV, |Y_22| = 0.117 give a comparatively small contribution; we use these two values to study the achievable detection sensitivity. For larger m_Δ, the cross section of the μ^±μ^±τ^∓τ^∓ process improves several times, making it even more feasible to detect at a future muon collider.
Since the SM background is negligible for the doubly same-sign dilepton pairs μ^±μ^±τ^∓τ^∓, we can simply estimate the signal sensitivity as N = S/√(S+S_0) ≈ √(L σ_signal), where L is the integrated luminosity and σ_signal is the signal cross section obtained from our detector simulation. The corresponding cross sections and luminosities are shown in Table <ref>.
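In this background-free limit the requirement N = √(L σ_signal) inverts to L = N^2/σ_signal; a quick sketch:

def clean_signal_luminosity(sigma_signal_pb, n_sigma=5.0):
    # With negligible background, N ~ sqrt(L * sigma_signal),
    # so L = N^2 / sigma_signal; returned in fb^-1 (1 pb = 1000 fb).
    return n_sigma**2 / (sigma_signal_pb * 1000.0)

# e.g. the ~0.00075 pb triplet-only contribution discussed below would need
print(clean_signal_luminosity(0.00075), "fb^-1 for 5 sigma")   # ~33 fb^-1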
We consider several different triplet parameter choices with the aim of suppressing the triplet effects. We find that for Y_22 = 0.117 and m_Δ = 450 GeV, the triplet contribution is around 0.00075 pb.
Compared to the values in Table <ref>, the triplet effects weaken gradually as the Z' mass increases, making the Z' effects more dominant. For example, at m_Z' = 500 GeV the triplet effect is negligible, amounting to only about 2% of the Z' contribution.
Similarly, we obtain the required luminosity for μ^+μ^- →μ^±μ^± + τ^∓τ^∓ to further investigate the detection prospects, as shown in Fig. <ref>. We analyze the required luminosity at 3σ and 5σ significance at the muon collider, as a function of the Z' mass for the fixed lower bound g̃/m_Z' = 0.55× 10^-3 GeV^-1 in the left panel, and as a function of the ratio g̃/m_Z' in the right panel. In both cases the required luminosity decreases: in the left panel it falls steadily as m_Z' increases,
and in the right panel it falls as g̃/m_Z' rises for the two different Z' masses.
Raising the ratio to the upper bound g̃/m_Z' = 0.89× 10^-3 GeV^-1 decreases the required luminosity by an amount that depends on the Z' mass.
Therefore, we find that the smoking-gun signature of doubly same-sign μ^±μ^± + τ^∓τ^∓ pair production can reach a 5σ sensitivity, within the parameter space required to solve the muon g-2 anomaly, at a 3 TeV muon collider with 𝒪(fb^-1) luminosity.
If the triplet parameters are further changed to enhance the triplet effects, as shown in Fig. <ref>, the cross section increases rapidly and the discovery potential becomes even more pronounced.
1. What luminosity is needed to reach 5σ significance?
2. Plot dσ/dcosθ versus cosθ? The forward-backward asymmetry is
A_FB = [σ(cosθ>0) - σ(cosθ<0)] / [σ(cosθ>0) + σ(cosθ<0)] ,
with θ the angle between the momentum of the outgoing τ^- and the incoming μ^-.
3. Final states τ^+ τ^- γ.
4. If the initial muon beam were polarized, a parity-violating left-right asymmetry
A_LR = [σ(μ^+ μ_L^- →τ^+τ^-) - σ(μ^+ μ_R^- →τ^+τ^-)] / [σ(μ^+ μ_L^- →τ^+τ^-) + σ(μ^+ μ_R^- →τ^+τ^-)] .
5. Asymmetry properties of the tau leptons can be understood using the asymmetry observables <cit.>, i.e., the forward-backward asymmetry A_FB, the polarization asymmetry P_τ, the forward (backward) polarization asymmetry P^F_τ (P^B_τ), and the forward-backward polarization asymmetry A^P_τ_FB.
The polarization asymmetries for the τ^- and τ^+ are defined as
P_τ^- = [(σ_++ + σ_+-) - (σ_-+ + σ_--)] / [σ_++ + σ_+- + σ_-+ + σ_--] ,
P_τ^+ = [(σ_++ + σ_-+) - (σ_+- + σ_--)] / [σ_++ + σ_-+ + σ_+- + σ_--] ,
where σ_++, σ_+-, σ_-+ and σ_-- are the cross sections corresponding to the four allowed helicity combinations of the tau pair, denoted as
h_τ^- h_τ^+ = ++, +-, -+, -- .
In the ultra-relativistic limit, σ_++ and σ_-- are negligible, which results in P_τ^- = -P_τ^+. The τ^- polarization asymmetry will hereafter be referred to as P_τ.
The forward and backward polarization asymmetries are defined as
P^F_τ = [σ^F(h_τ=+1) - σ^F(h_τ=-1)] / [σ^F(h_τ=+1) + σ^F(h_τ=-1)] ,
P^B_τ = [σ^B(h_τ=+1) - σ^B(h_τ=-1)] / [σ^B(h_τ=+1) + σ^B(h_τ=-1)] ,
and finally the forward-backward polarization asymmetry is given by
A^P_τ_FB = {[σ^F(h_τ=+1) - σ^F(h_τ=-1)] - [σ^B(h_τ=+1) - σ^B(h_τ=-1)]} / {σ^F(h_τ=+1) + σ^F(h_τ=-1) + σ^B(h_τ=+1) + σ^B(h_τ=-1)} .
For the above four asymmetries, we analyze three different polarization configurations of the initial muon beams: unpolarized beams, longitudinally polarized beams with +0.8 (-0.8) polarization for the μ^- (μ^+) beam, and longitudinally polarized beams with -0.8 (+0.8) polarization for the μ^- (μ^+) beam (assuming the beam polarization varies in the range -1 to +1).
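These observables are simple ratios of cross sections and can be evaluated directly; the numbers in the sketch below are purely illustrative inputs.

def afb(sigma_f, sigma_b):
    # Forward-backward asymmetry from the cos(theta) > 0 and < 0 cross sections.
    return (sigma_f - sigma_b) / (sigma_f + sigma_b)

def p_tau_minus(s_pp, s_pm, s_mp, s_mm):
    # tau^- polarization asymmetry from the four helicity cross sections
    # sigma_{h(tau^-) h(tau^+)}; s_pp and s_mm are negligible in the
    # ultra-relativistic limit, giving P_{tau^-} = -P_{tau^+}.
    return ((s_pp + s_pm) - (s_mp + s_mm)) / (s_pp + s_pm + s_mp + s_mm)

print(afb(0.6, 0.4))                     # -> 0.2
print(p_tau_minus(0.0, 0.7, 0.3, 0.0))   # -> 0.4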
§ CONCLUSION
We have studied in detail the maximal μ-τ interaction of a Z' in the U(1)_L_μ-L_τ model at a muon collider.
The maximal off-diagonal Z' coupling, (μ̅γ^μτ + τ̅γ^μμ) Z'_μ, escapes the constraints that would otherwise force the Z' mass below a few hundred MeV while still addressing the muon g-2 anomaly. In addition, a Z' with a large mass leads to very distinctive signatures, such as t-channel production of μ^-μ^+ →τ^- τ^+ pairs and doubly same-sign muon and tau pair production, μ^- μ^+ →μ^±μ^±τ^∓τ^∓, at a muon collider.
At a 3 TeV muon collider with 𝒪(fb^-1) luminosity, within the parameter space for the ratio of Z' coupling to mass and the triplet Higgs contributions allowed by the muon g-2 anomaly, we find that
for μ^-μ^+ →τ^- τ^+, the t-channel pair production can easily be distinguished at more than the 5σ level from the s-channel production predicted in the standard model.
For μ^- μ^+ →μ^±μ^±τ^∓τ^∓, the process serves as the smoking-gun signature of our model, discoverable at the 5σ level.
This work was supported in part by Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education, and Shanghai Key Laboratory for Particle Physics and Cosmology (Grant No. 15DZ2272100), in part by the NSFC (Grant Nos. 11735010, 11975149, and 12090064) and partially supported by the Fundamental Research Funds for the Central Universities. XGH was supported in part by the MOST (Grant No. MOST 106-2112-M-002-003-MY3).
99
Muong-2:2021ojo
B. Abi et al. [Muon g-2],
Phys. Rev. Lett. 126 (2021) no.14, 141801
doi:10.1103/PhysRevLett.126.141801
[arXiv:2104.03281 [hep-ex]].
Baek:2001kca
S. Baek, N. G. Deshpande, X. G. He and P. Ko,
Phys. Rev. D 64 (2001), 055006
[arXiv:hep-ph/0104141 [hep-ph]].
Ma:2001md
E. Ma, D. P. Roy and S. Roy,
Phys. Lett. B 525 (2002), 101-106
[arXiv:hep-ph/0110146 [hep-ph]].
Gninenko:2001hx
S. N. Gninenko and N. V. Krasnikov,
Phys. Lett. B 513 (2001), 119
[arXiv:hep-ph/0102222 [hep-ph]].
Pospelov:2008zw
M. Pospelov,
Phys. Rev. D 80 (2009), 095002
[arXiv:0811.1030 [hep-ph]].
Heeck:2011wj
J. Heeck and W. Rodejohann,
Phys. Rev. D 84 (2011), 075007
[arXiv:1107.5238 [hep-ph]].
Harigaya:2013twa
K. Harigaya, T. Igari, M. M. Nojiri, M. Takeuchi and K. Tobe,
JHEP 03 (2014), 105
[arXiv:1311.0870 [hep-ph]].
Altmannshofer:2014pba
W. Altmannshofer, S. Gori, M. Pospelov and I. Yavin,
Phys. Rev. Lett. 113 (2014), 091801
[arXiv:1406.2332 [hep-ph]].
Altmannshofer:2016oaq
W. Altmannshofer, M. Carena and A. Crivellin,
Phys. Rev. D 94 (2016) no.9, 095026
[arXiv:1604.08221 [hep-ph]].
CCFR:1991lpl
S. R. Mishra et al. [CCFR],
Phys. Rev. Lett. 66 (1991), 3117-3120
CHARM-II:1990dvf
D. Geiregat et al. [CHARM-II],
Phys. Lett. B 245 (1990), 271-275
NuTeV:1999wlw
T. Adams et al. [NuTeV],
Phys. Rev. D 61 (2000), 092001
[arXiv:hep-ex/9909041 [hep-ex]].
Cen:2021ryk
J. Y. Cen, Y. Cheng, X. G. He and J. Sun,
Nucl. Phys. B 978 (2022), 115762
[arXiv:2104.05006 [hep-ph]].
Cheng:2021okr
Y. Cheng, X. G. He and J. Sun,
Phys. Lett. B 827 (2022), 136989
[arXiv:2112.09920 [hep-ph]].
lmu-ltau1
X. G. He, G. C. Joshi, H. Lew and R. R. Volkas,
Phys. Rev. D 43 (1991), R22.
lmu-ltau2
X. G. He, G. C. Joshi, H. Lew and R. R. Volkas,
Phys. Rev. D 44, 2118-2132 (1991)
Foot:1994vd
R. Foot, X. G. He, H. Lew and R. R. Volkas,
Phys. Rev. D 50 (1994), 4571-4580.
Lazarides:1980nt
G. Lazarides, Q. Shafi and C. Wetterich,
Nucl. Phys. B 181 (1981), 287-300.
Mohapatra:1980yp
R. N. Mohapatra and G. Senjanovic,
Phys. Rev. D 23 (1981), 165.
Konetschny:1977bn
W. Konetschny and W. Kummer,
Phys. Lett. B 70 (1977), 433-435.
Cheng:1980qt
T. P. Cheng and L. F. Li,
Phys. Rev. D 22 (1980), 2860.
Magg:1980ut
M. Magg and C. Wetterich,
Phys. Lett. B 94 (1980), 61-64.
Schechter:1980gr
J. Schechter and J. W. F. Valle,
Phys. Rev. D 22 (1980), 2227.
Ashanujjaman:2021txz
S. Ashanujjaman and K. Ghosh,
JHEP 03 (2022), 195
[arXiv:2108.10952 [hep-ph]].
Delahaye:2019omf
J. P. Delahaye, M. Diemoz, K. Long, B. Mansoulié, N. Pastrone, L. Rivkin, D. Schulte, A. Skrinsky and A. Wulzer,
[arXiv:1901.06150 [physics.acc-ph]].
Li:2023ksw
T. Li, C. Y. Yao and M. Yuan,
JHEP 03 (2023), 137
[arXiv:2301.07274 [hep-ph]].
Li:2023lin
J. Li, W. Wang, X. Cai, C. Yang, M. Lu, Z. You, S. Qian and Q. Li,
[arXiv:2302.02203 [hep-ph]].
Frixione:2021zdp
S. Frixione, O. Mattelaer, M. Zaro and X. Zhao,
[arXiv:2108.10261 [hep-ph]].
Alloul:2013bka
A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks,
Comput. Phys. Commun. 185 (2014), 2250-2300
[arXiv:1310.1921 [hep-ph]].
Degrande:2011ua
C. Degrande, C. Duhr, B. Fuks, D. Grellscheid, O. Mattelaer and T. Reiter,
Comput. Phys. Commun. 183 (2012), 1201-1214
[arXiv:1108.2040 [hep-ph]].
Alwall:2011uj
J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer,
JHEP 06 (2011), 128
[arXiv:1106.0522 [hep-ph]].
Sjostrand:2014zea
T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen and P. Z. Skands,
Comput. Phys. Commun. 191 (2015), 159-177
[arXiv:1410.3012 [hep-ph]].
deFavereau:2013fsa
J. de Favereau et al. [DELPHES 3],
JHEP 02 (2014), 057
[arXiv:1307.6346 [hep-ex]].
ATLAS:2023vxg
[ATLAS],
[arXiv:2301.09342 [hep-ex]].
CMS:2018yxg
A. M. Sirunyan et al. [CMS],
Phys. Lett. B 792 (2019), 345-368
[arXiv:1808.03684 [hep-ex]].
ATLAS
The ATLAS collaboration, ATLAS-CONF-2015-072.
CMS
CMS Collaboration, CMS-PAS-EXO-16-001.
|
http://arxiv.org/abs/2307.02227v1
|
20230705120856
|
MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition
|
[
"Licai Sun",
"Zheng Lian",
"Bin Liu",
"Jianhua Tao"
] |
cs.CV
|
[
"cs.CV",
"cs.AI",
"cs.HC",
"cs.MM"
] |
sunlicai2019@ia.ac.cn
School of Artificial Intelligence, University of Chinese Academy of Sciences
Institute of Automation, Chinese Academy of Sciences
Beijing
China
lianzheng2016@ia.ac.cn
Institute of Automation, Chinese Academy of Sciences
Beijing
China
liubin@nlpr.ia.ac.cn
Institute of Automation, Chinese Academy of Sciences
Beijing
China
jhtao@tsinghua.edu.cn
Department of Automation, Tsinghua University
Beijing
China
Dynamic facial expression recognition (DFER) is essential to the development of intelligent and empathetic machines. Prior efforts in this field mainly fall into the supervised learning paradigm, which is restricted by the limited labeled data in existing datasets. Inspired by the recent unprecedented success of masked autoencoders (e.g., VideoMAE), this paper proposes MAE-DFER, a novel self-supervised method which leverages large-scale self-supervised pre-training on abundant unlabeled data to advance the development of DFER. Since the vanilla Vision Transformer (ViT) employed in VideoMAE requires substantial computation during fine-tuning, MAE-DFER develops an efficient local-global interaction Transformer (LGI-Former) as the encoder. LGI-Former first constrains self-attention to local spatiotemporal regions and then utilizes a small set of learnable representative tokens to achieve efficient local-global information exchange, thus avoiding the expensive computation of global space-time self-attention in ViT. Moreover, in addition to the standalone appearance content reconstruction in VideoMAE, MAE-DFER also introduces explicit facial motion modeling to encourage LGI-Former to excavate both static appearance and dynamic motion information. Extensive experiments on six datasets show that MAE-DFER consistently outperforms state-of-the-art supervised methods by significant margins, verifying that it can learn powerful dynamic facial representations via large-scale self-supervised pre-training. Besides, it has comparable or even better performance than VideoMAE, while largely reducing the computational cost (about 38% fewer FLOPs).
We believe MAE-DFER has paved a new way for the advancement of DFER and can inspire more relevant research in this field and even other related tasks.
Codes and models are publicly available at <https://github.com/sunlicai/MAE-DFER>.
MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition
Jianhua Tao
August 1, 2023
================================================================================================
§ INTRODUCTION
Facial expressions, as an important aspect of nonverbal communication, play a significant role in interpersonal interactions <cit.>.
In the past two decades, automatic facial expression recognition (FER) has drawn widespread attention due to its crucial role in developing intelligent and empathetic machines that can interact with humans in a natural and intuitive way <cit.>.
FER also has a wide spectrum of practical applications in areas such as healthcare <cit.>, education <cit.>, and entertainment <cit.>.
According to the input data type, FER can be divided into two categories, i.e., static FER (SFER) and dynamic FER (DFER) <cit.>. SFER takes static facial images as input, while DFER aims to recognize expressions in dynamic image sequences or videos.
Since SFER overlooks the critical temporal information for the interpretation of facial expressions,
this paper mainly focuses on DFER.
DFER is dominated by the supervised learning paradigm.
Researchers have developed various deep neural networks for this task, including 2D/3D convolutional neural networks (CNN) <cit.>, recurrent neural networks (RNN) <cit.>, and more advanced Transformer-based architectures <cit.>.
Although supervised methods have achieved remarkable success, the limited training samples in existing DFER datasets (typically around 10K, which is much smaller than those in other research areas such as general image/video classification and face recognition; see details in Table <ref>) severely restrict their further advancement (e.g., training large video Transformers). A straightforward idea to address this issue is to increase the dataset scale. However, collecting and annotating large-scale high-quality DFER datasets is pretty time-consuming and labor-intensive, mainly due to the sparsity of dynamic facial expressions in videos and the ambiguity and subjectivity in facial expression perception <cit.>. Considering the massive amount of unlabeled facial videos on the Internet, a natural question arises: can we exploit them to fully unleash the power of deep neural networks for better DFER?
The recent progress of self-supervised learning in many deep learning fields <cit.> indicates that there is a positive answer. Notably, masked autoencoder (MAE) <cit.> in computer vision develops an asymmetric encoder-decoder architecture for masked image modeling. It successfully pre-trains the vanilla Vision Transformer (ViT) <cit.> in an end-to-end manner and outperforms the supervised baselines in many vision tasks. Subsequently, VideoMAE <cit.> extends MAE to the video domain and also achieves impressive results on lots of general video datasets. Motivated by this line of research, we present MAE-DFER (Fig. <ref>), a novel self-supervised method based on VideoMAE which leverages large-scale self-supervised pre-training on abundant unlabeled facial video data to promote the advancement of DFER. Although VideoMAE has made remarkable success in self-supervised video pre-training, we notice that it still has two main drawbacks: 1) The vanilla ViT encoder employed in VideoMAE requires substantial computation during fine-tuning due to the quadratic scaling cost of global space-time self-attention, which is unaffordable in many resource-constrained scenarios. 2) It only reconstructs video appearance contents during pre-training, thus might not be sufficient to model temporal facial motion information which is also crucial to DFER.
To tackle the above issues in VideoMAE, our MAE-DFER presents two core designs accordingly.
For the first issue, MAE-DFER develops an efficient local-global interaction Transformer (LGI-Former) as the encoder. Different from the global space-time self-attention in ViT, LGI-Former first constrains self-attention in local spatiotemporal regions and then utilizes a small set of learnable representative tokens to enable efficient local-global information exchange. Concretely, it decomposes the global space-time self-attention into three stages: local intra-region self-attention, global inter-region self-attention, and local-global interaction.
In this way, LGI-Former can efficiently propagate global information to local regions and avoid the expensive computation of global space-time attention.
For the second issue, MAE-DFER introduces joint masked appearance and motion modeling to encourage the model to capture both static facial appearance and dynamic motion information. Specifically, in addition to the original appearance content reconstruction branch, it simply utilizes the frame difference signal as another reconstruction target for explicit temporal facial motion modeling.
To verify the effectiveness of MAE-DFER, we perform large-scale self-supervised pre-training on the VoxCeleb2 dataset <cit.>, which has more than 1M unlabeled facial video clips collected from YouTube.
Then we fine-tune the pre-trained model on six DFER datasets, including three relatively large in-the-wild datasets (DFEW <cit.>, FERV39k <cit.>, and MAFW <cit.>) and three small lab-controlled datasets (CREMA-D <cit.>, RAVDESS <cit.>, and eNTERFACE05 <cit.>). The results show that MAE-DFER significantly outperforms the state-of-the-art supervised methods (e.g., +6.30% UAR on DFEW and +8.34% UAR on MAFW), indicating that it is capable of learning strong and useful dynamic facial representations for DFER. Moreover, compared with VideoMAE, MAE-DFER largely reduces ∼38% FLOPs while having comparable or even better performance.
The main contributions of this paper are summarized as follows:
* We present a novel self-supervised method, MAE-DFER, as an early attempt to leverage large-scale self-supervised pre-training on abundant unlabeled facial video data to advance the development of DFER.
* MAE-DFER improves VideoMAE by developing an efficient LGI-Former as the encoder and introducing joint masked appearance and motion modeling. With these two core designs, MAE-DFER largely reduces the computational cost while having comparable or even better performance.
* Extensive experiments on six DFER datasets show that our MAE-DFER consistently outperforms the previous best supervised methods by significant margins, which demonstrates that it can learn powerful dynamic facial representations for DFER via large-scale self-supervised pre-training.
We believe MAE-DFER will serve as a strong baseline and foster relevant research in this field.
§ RELATED WORK
§.§ Dynamic Facial Expression Recognition
Early studies on DFER primarily focused on designing various local descriptors, and only several very small lab-controlled datasets were available for evaluation.
With the emergence of deep learning and the proliferation of relatively larger datasets (e.g., DFEW <cit.>, CREMA-D <cit.>, FERV39k <cit.>, and MAFW <cit.>), the research paradigm has undergone a transformative shift towards training deep neural networks in an end-to-end fashion.
In general, there are three trends.
The first trend directly utilizes 3D CNNs (such as C3D <cit.>, 3D ResNet <cit.>, R(2+1)D <cit.>, and P3D <cit.>) to extract joint spatiotemporal features from raw facial videos <cit.>.
The second trend uses the combination of 2D CNN (e.g., VGG <cit.> and ResNet <cit.>) and RNN (e.g., LSTM <cit.> and GRU <cit.>) <cit.>. In specific, 2D CNN is first used to extract spatial features from each static frame and then RNN is employed to integrate temporal information.
Recently, with the rise of Transformer <cit.>, several studies exploit its global dependency modeling ability to augment CNN/RNN for better performance, which forms the third trend <cit.>.
For instance, Former-DFER <cit.> employs a Transformer-enhanced ResNet-18 for spatial feature extraction and another Transformer for temporal information aggregation.
STT <cit.> improves Former-DFER by introducing factorized spatial and temporal attention for joint spatiotemporal feature learning.
IAL <cit.> further introduces the global convolution-attention block and intensity-ware loss to deal with expressions with different intensities.
However, all the above methods fall into the supervised learning paradigm, which is thus restricted by the limited training samples in existing DFER datasets. Unlike them, this paper proposes a self-supervised method that can learn powerful representations from massive unlabeled facial video data and achieves significant improvement.
§.§ Masked Autoencoders
Masked autoencoders (MAEs), as the representative of generative self-supervised learning, have recently achieved unprecedented success in many deep learning fields <cit.>. They are mainly inspired by the progress of masked language modeling (e.g., BERT <cit.> and GPT <cit.>) in natural language processing and typically adopt a mask-then-predict strategy to pre-train the vanilla ViT. Notably, iGPT <cit.> follows GPT to auto-regressively predict pixels and makes the first successful attempt. BEiT <cit.> follows BERT and adopts a two-stage training pipeline, i.e., first utilizing an off-the-shelf tokenizer to generate discrete visual tokens and then performing masked-then-predict training. MAE <cit.> improves BEiT by designing an asymmetric encoder-decoder architecture to enable efficient end-to-end pre-training. After that, many studies adopt the architecture of MAE to perform self-supervised pre-training on various tasks. For instance, VideoMAE <cit.> and its concurrent work MAE-ST <cit.> extends MAE to the video domain and achieve impressive results on lots of video benchmarks. Similarly, AudioMAE <cit.> successfully applies MAE on audio spectrograms to learn acoustic representations. Our proposed MAE-DFER is inspired by VideoMAE and develops two core designs to facilitate effective and efficient dynamic facial expression representation learning for DFER.
§ METHOD
In this section, we first revisit VideoMAE in Section <ref>, then discuss two key challenges it faces and give an overview of the proposed MAE-DFER in Section <ref>, finally we elaborate on two core designs of MAE-DFER in Section <ref> and Section <ref>, respectively.
§.§ Revisiting VideoMAE
VideoMAE <cit.> is a simple extension of MAE <cit.> in the video domain. It basically follows the asymmetric encoder-decoder architecture of MAE for self-supervised video pre-training.
The main difference is that a much higher masking ratio (i.e., 90% vs. 75%) and tube masking strategy (instead of random masking) are adopted, considering that large temporal redundancy and high temporal correlation in videos <cit.>.
In specific, VideoMAE mainly consists of four modules: cube embedding, tube masking, a high-capacity encoder Φ_e (i.e., the vanilla ViT-B), and a lightweight decoder Φ_d.
Given a raw video 𝐕∈ℝ^T × H × W × 3, VideoMAE first utilizes cube embedding with a cube size of 2 × 16 × 16 to transform 𝐕 into a sequence of tokens 𝐗∈ℝ^K × C, where K=T/2·H/16·W/16 and C is the channel size.
Then the tube masking module generates a mask 𝐌∈{0, 1}^K with a masking ratio of ρ=90% and the high-capacity encoder Φ_e only takes the unmasked tokens 𝐗⊙𝐌∈ℝ^L × C (L = (1-ρ)K) as input and simply process them with global space-time self-attention. Subsequently, the lightweight decoder Φ_d combines the encoded visible tokens with the learnable mask tokens (with a size of ρ K) to reconstruct the raw video data.
Finally, the mean square error between the original and reconstructed video in the masked positions are calculated to optimize the whole model. The above process can be generally formulated as follows:
ℒ_VideoMAE = MSE(Φ_d(Φ_e(𝐗⊙𝐌)), 𝐕⊙Ψ(1-𝐌))
where Ψ is a function used to obtain masked positions in the pixel space.
In downstream tasks, the lightweight decoder Φ_d is discarded and only the high-capacity ViT encoder Φ_e will be fine-tuned.
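For concreteness, the sketch below reproduces the token bookkeeping and the tube masking strategy just described, using the 16-frame 160x160 input adopted later in this paper; it is a schematic of the strategy, not the released implementation.

import torch

T, H, W, rho = 16, 160, 160, 0.9
t, h, w = T // 2, H // 16, W // 16       # token grid after 2x16x16 cube embedding
K = t * h * w                            # K = T/2 * H/16 * W/16 = 800 tokens

# Tube masking: sample one spatial mask and repeat it along time.
n_visible = int(h * w * (1 - rho))       # visible tokens per temporal slice
keep = torch.randperm(h * w)[:n_visible]
spatial = torch.zeros(h * w, dtype=torch.bool)
spatial[keep] = True                     # True = visible to the encoder
mask = spatial.repeat(t)                 # shape (K,); (1 - rho) * K = 80 visible tokens
print(int(mask.sum()), "visible of", K)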
§.§ MAE-DFER: Overview
Although VideoMAE has made great success in self-supervised video pre-training, it still faces two major challenges. First, it only focuses on reconstructing raw appearance contents in the video, which thus lacks explicit temporal motion modeling and might not be sufficient to model temporal facial motion information. Second, although it enjoys high efficiency during pre-training through an asymmetric encoder-decoder architecture (i.e., dropping a large proportion of masked tokens to save computation), the computational cost of global space-time self-attention in the vanilla ViT is still extremely expensive during downstream fine-tuning since it cannot drop input tokens at this stage. To tackle these issues, as shown in Fig. <ref>, we propose MAE-DFER, a new self-supervised framework for DFER. For the first issue, MAE-DFER introduces joint masked appearance and motion modeling to encourage the model to excavate both static appearance and dynamic motion information (Section <ref>). For the second issue, it employs a novel Local-Global Interaction Transformer (LGI-Former) as the encoder to largely reduce the computational cost of ViT during downstream fine-tuning (Section <ref>).
§.§ MAE-DFER: Joint Masked Appearance and Motion Modeling
Temporal motion information matters for DFER since playing a video backwards may convey totally different emotions.
To explicitly incorporate this information in self-supervised pre-training, our MAE-DFER adds a temporal motion reconstruction branch in parallel with the original appearance reconstruction branch in VideoMAE to achieve joint facial appearance and motion structure learning. Specifically, we simply calculate the frame difference signal as the temporal motion target, given that its computation is very cheap and it has shown effectiveness in video action recognition <cit.>. To keep the computational cost during pre-training similar to that of VideoMAE, we share the decoder backbone between the appearance and motion branches and only use two different linear heads to predict their targets.
Besides, the decoder only outputs appearance predictions in the odd frames and motion predictions in the remaining even frames. Finally, the total loss is the weighted sum of mean square errors in two branches:
ℒ_MAE-DFER = λ ·MSE(Φ_d(Φ_e(𝐗⊙𝐌)), 𝐕_a ⊙Ψ(1-𝐌)) +
(1-λ) ·MSE(Φ_d(Φ_e(𝐗⊙𝐌)), 𝐕_m ⊙Ψ(1-𝐌))
where 𝐕_a = 𝐕[0:T:2] is the appearance target, 𝐕_m = 𝐕[1:T:2] - 𝐕[0:T:2] is the motion target, and λ is a hyperparameter to balance the contributions of the two branches, which we empirically set to 0.5.
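A minimal PyTorch sketch of the two reconstruction targets and the combined loss (the tensor layout and the pixel-space mask handling are simplifying assumptions):

import torch
import torch.nn.functional as F

def joint_targets(video):
    # video: (B, T, H, W, C). V_a = V[0:T:2]; V_m = V[1:T:2] - V[0:T:2].
    return video[:, 0::2], video[:, 1::2] - video[:, 0::2]

def joint_loss(pred_a, pred_m, v_a, v_m, masked, lam=0.5):
    # MSE restricted to masked positions (the role of Psi(1 - M) above);
    # `masked` is a boolean tensor of the same shape as the targets.
    return (lam * F.mse_loss(pred_a[masked], v_a[masked])
            + (1.0 - lam) * F.mse_loss(pred_m[masked], v_m[masked]))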
§.§ MAE-DFER: Efficient LGI-Former
The architecture of LGI-Former is illustrated in Fig. <ref>.
Unlike the global space-time self-attention adopted in the vanilla ViT, LGI-Former constrains self-attention in local spatiotemporal regions to save computation. However, simply stacking multiple local self-attention layers does not permit inter-region information exchange.
Therefore, the core idea of LGI-Former is to introduce a small set of representative tokens to local regions.
On the one hand, these tokens take charge of summarizing critical information in local regions. On the other hand, they allow for long-range dependencies modeling between different regions and enable efficient local-global information exchange.
Thanks to the introduction of representative tokens, the expensive global space-time self-attention can be decomposed into three stages with much cheaper computation: 1) local intra-region self-attention, 2) global inter-region self-attention, and 3) local-global interaction.
In the following, for simplicity, we only describe the above three stages during fine-tuning. The process during pre-training is similar, since MAE-DFER follows VideoMAE in adopting the tube masking strategy and applies the same masking ratio to each local region to ensure that all regions have an equal number of visible tokens.
Local Intra-Region Self-Attention.
As shown in Fig. <ref>, we first reshape the input 𝐗∈ℝ^K × C to 𝐗∈ℝ^T' × H' × W' × C and divide it into non-overlapped local spatiotemporal regions with equal size.
In each local region, apart from the original tokens, we also add a learnable representative token.
The local intra-region self-attention then operates on the concatenation of these two kinds of tokens to simultaneously promote fine-grained local feature learning and enable local information aggregation into the representative token.
Assume that the original local tokens and the associated representative token in the ith region (i ∈{1,2, ..., M}, M is the total number of regions) is 𝐗_i ∈ℝ^N × C and 𝐒_i ∈ℝ^1 × C respectively, the formulation of local intra-region self-attention is given as follows:
𝐗̂_i = Concat(𝐒_i, 𝐗_i)
𝐗̂_i = MHSA(LN(𝐗̂_i)) + 𝐗̂_i
where 𝐗̂_i ∈ℝ^(N+1) × C, MHSA is the multi-head self-attention in the vanilla ViT, and LN stands for layer normalization. In particular, the calculation of MHSA is formulated as follows:
MHSA(𝐗) = Concat(head_1, ..., head_h)𝐖^O
head_j = Attention(𝐗𝐖^Q_j, 𝐗𝐖^K_j, 𝐗𝐖^V_j)
Attention(𝐐, 𝐊, 𝐕) = softmax(𝐐𝐊^⊤/√(d))𝐕
where 𝐖^*_j ∈ℝ^C× d (* ∈{Q,K,V }), 𝐖^O∈ℝ^C× C, h is the number of attention heads, d=C/h is the feature dimension of each head.
Global Inter-Region Self-Attention.
After local intra-region self-attention, the representative token has extracted crucial information in each local region and can represent the original tokens to perform information exchange between different regions.
Since the number of representative tokens is typically small (e.g., 8), the computational cost for inter-region communication can be negligible.
Thus, we first aggregate all representative tokens and then simply utilize global inter-region self-attention on them to propagate information between different regions, i.e.,
𝐒 = Concat(𝐒_1, ..., 𝐒_M)
𝐒 = MHSA(LN(𝐒)) + 𝐒
where 𝐒∈ℝ^M × C is the aggregated representative tokens.
Local-Global Interaction.
After information propagation via global inter-region self-attention, the representative token in each local region has been consolidated by useful information from other regions, thus having a global view of the whole input tokens.
To enable the original tokens in each local region to access the global information, we further employ cross-attention between local tokens and representative tokens to achieve that goal:
𝐗_i = MHCA(LN(𝐗_i), LN(𝐒)) + 𝐗_i
𝐗_i = FFN(LN(𝐗_i)) + 𝐗_i
𝐒 = FFN(LN(𝐒)) + 𝐒
where MHCA is multi-head cross-attention and FFN denotes feed-forward network. Specifically, MHCA has the similar implementation with MHSA except that its query and key/value come from different inputs. Its calculation is given as follows:
MHCA(𝐗, 𝐘) = Concat(head_1, ..., head_h)𝐖^O
head_j = Attention(𝐗𝐖^Q_j, 𝐘𝐖^K_j, 𝐘𝐖^V_j)
Besides, FFN is two fully-connected layers, i.e.,
FFN(𝐗) = GELU(𝐗𝐖_1+𝐛_1) 𝐖_2 + 𝐛_2
where GELU is the activation function, 𝐖_1 ∈ℝ^C× 4C, 𝐛_1 ∈ℝ^4C, 𝐖_2 ∈ℝ^4C× C, and 𝐛_2 ∈ℝ^C.
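Putting the three stages together, a minimal PyTorch sketch of one LGI-Former block is given below. It follows the equations above with pre-norm residual wiring, but the norm sharing, initialization, and details such as drop-path are simplifications; it is not the authors' released implementation.

import torch
import torch.nn as nn

class LGIFormerBlock(nn.Module):
    # x: (B, M, N, C) local tokens in M regions of N tokens each;
    # s: (B, M, C) representative tokens, one per region.
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.n1, self.n2, self.n3, self.n4 = (nn.LayerNorm(dim) for _ in range(4))
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn_x = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.ffn_s = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, s):
        B, M, N, C = x.shape
        # 1) local intra-region self-attention over [s_i; x_i]
        z = torch.cat([s.unsqueeze(2), x], dim=2).reshape(B * M, N + 1, C)
        h = self.n1(z)
        z = z + self.intra(h, h, h)[0]
        s = z[:, 0].reshape(B, M, C)
        x = z[:, 1:].reshape(B, M, N, C)
        # 2) global inter-region self-attention over the M representative tokens
        h = self.n2(s)
        s = s + self.inter(h, h, h)[0]
        # 3) local-global interaction: every local token attends to all representatives
        q = x.reshape(B, M * N, C)
        q = q + self.cross(self.n3(q), self.n3(s), self.n3(s))[0]
        x = q.reshape(B, M, N, C)
        x = x + self.ffn_x(self.n4(x))
        s = s + self.ffn_s(self.n4(s))
        return x, s

# e.g. 8 regions of 2x5x10 = 100 tokens each with hidden size 512:
blk = LGIFormerBlock()
x, s = blk(torch.randn(2, 8, 100, 512), torch.randn(2, 8, 512))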
Complexity Analysis.
We suppose that the flattened input is 𝐗∈ℝ^K × C, where K=MN is the number of total input tokens, M is the number of local regions and N is the number of original tokens in each region.
Since self-attention scales quadratically with the sequence length, the complexity of local intra-region self-attention is O(M(N+1)^2) ≈ O(MN^2) = O(K^2/M). Similarly, the complexity of global inter-region self-attention is O(M^2) = O(K^2/N^2). Moreover, local-global interaction has a complexity of O(MNM)=O(K^2/N). Putting them together, the complexity of an LGI-Former block is O((1/M + 1/N^2 + 1/N) K^2), while a standard Transformer block in the vanilla ViT has a complexity of O(K^2). In practice, M ≪ K and N ≪ K, thus the computational cost of LGI-Tranformer is largely reduced compared with ViT.
§ RESULTS
§.§ Datasets
Pre-training Dataset. We perform self-supervised pre-training on a large-scale audio-visual speaker recognition dataset VoxCeleb2 <cit.>. It has over 1 million video clips of more than 6,000 celebrities, extracted from around 150,000 interview videos on YouTube. It is divided into a development set and a test set. We only use the development set for pre-training, which contains 1,092,009 video clips from 145,569 videos.
DFER Datasets. We conduct experiments on 6 datasets, including 3 large in-the-wild datasets (i.e., DFEW <cit.>, FERV39k <cit.>, and MAFW <cit.>) and 3 small lab-controlled datasets (i.e., CREMA-D <cit.>, RAVDESS <cit.>, and eNTERFACE05 <cit.>). Their basic information is summarized in Table <ref>.
Detailed introductions can be found in the appendix.
Following previous studies <cit.>, we report both unweighted average recall (UAR, i.e., the mean class accuracy) and weighted average recall (WAR, i.e., the overall accuracy). For those datasets using cross-validation, we combine the predictions and labels from all folds to calculate the final UAR and WAR.
§.§ Implementation Details
MAE-DFER. For the high-capacity encoder, we adopt LGI-Former, which has 16 blocks and a hidden size of 512. The total number of parameters is 84.9M, similar to that (86.2M) of the ViT-base model. The local region size is set to 2×5×10 by default.
For the lightweight decoder, we follow VideoMAE to adopt four standard Transformer blocks with a hidden size of 384.
Pre-training. The original videos provided in VoxCeleb2 have a resolution of 224×224. Given that the speaker's face generally does not fill the entire frame, we only use a 160×160 patch located in the upper center of each video frame to reduce irrelevant background information.
During pre-training, we extract 16 frames from each video clip using a temporal stride of 4. This results in 8×10×10 input tokens after cube embedding, when using a cube size of 2×16×16.
Regarding hyperparameters, we mainly follow VideoMAE. Specifically, we use an AdamW optimizer with β_1=0.9 and β_2=0.95, an overall batch size of 128, a base learning rate of 3e-4, and a weight decay of 0.05. We linearly scale the base learning rate according to the overall batch size, using the formula: lr = base learning rate×batch size/256. In addition, we use a cosine decay learning rate scheduler. By default, we pre-train the model for 50 epochs, with 5 warmup epochs. When using 4 Nvidia Tesla V100 GPUs, the pre-training takes about 3-4 days.
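For concreteness, the linear scaling rule and the warmup-plus-cosine schedule described above can be sketched as follows; the warmup start value and the final learning-rate floor are our assumptions, not reported settings.

```python
import math

base_lr, batch_size = 3e-4, 128
lr = base_lr * batch_size / 256               # linear scaling rule -> 1.5e-4

def lr_at_epoch(epoch, total=50, warmup=5, peak=lr, start=1e-6, floor=1e-6):
    """Linear warmup followed by cosine decay (per-epoch granularity)."""
    if epoch < warmup:
        return start + (peak - start) * epoch / warmup
    t = (epoch - warmup) / (total - warmup)
    return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * t))

print(lr, [round(lr_at_epoch(e), 8) for e in (0, 5, 25, 49)])
```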
Fine-tuning.
As in pre-training, the input clip size is 16×160×160 and the temporal stride is 4 for most datasets (except 1 for FERV39k). To optimize the model, we use an AdamW optimizer with β_1=0.9 and β_2=0.999, a base learning rate of 1e-3, and an overall batch size of 96. The other hyperparameters remain the same as in pre-training, and more details can be found in <cit.>. We fine-tune the pre-trained model for 100 epochs, with 5 warmup epochs. During inference, we sample two clips uniformly along the temporal axis for each video sample and then calculate the average score as the final prediction.
§.§ Ablation Studies
In this section, we conduct ablation experiments on DFEW and FERV39k to demonstrate the effects of several key factors in MAE-DFER. For simplicity, we only report results of fold 1 (fd1) for DFEW.
Pre-training Epochs.
As shown in Table <ref>, we observe that longer pre-training is generally beneficial and the performance saturation occurs at around 50 epochs. Besides, we also find that the performance of training from scratch (i.e., #Epochs=0) is very poor (nearly random guessing). This is largely attributed to the limited training samples in current DFER datasets since large vision Transformers are data-hungry and training them typically requires more than million-level labeled data <cit.>. This result also demonstrates the significance and superiority of large-scale self-supervised pre-training over traditional supervised learning.
Comparison of Different Model Architectures.
We then investigate the effect of three key modules in LGI-Former by evaluating the performance of the following variants: 1) only local intra-region self-attention (i.e., no global inter-region self-attention and local-global interaction), 2) no local-global interaction, 3) no global inter-region self-attention, and 4) using the global space-time self-attention in ViT. The results are presented in Table <ref>. We have the following observations: 1) The first variant has the worst performance, which is as expected since only utilizing local intra-region self-attention does not allow local tokens to access global information. 2) Either global inter-region self-attention or local-global interaction contributes to better performance, demonstrating the effectiveness of these two modules in local-global information propagation. Besides, the latter is generally more effective than the former but at the cost of more computation. It should also be noted that global inter-region self-attention only introduces negligible computation (∼0.1G FLOPs) thanks to the small number (i.e., 8) of representative tokens. 3) When combining the global inter-region self-attention with local-global interaction, LGI-Former achieves the best results. Besides, compared with the last variant which uses global space-time self-attention (i.e., ViT), we only observe a slight performance drop (<0.7%) but a large computation reduction (∼38% FLOPs), thus demonstrating the efficiency of LGI-Former.
Effectiveness of Joint Masked Appearance and Motion Modeling.
We study the effect of different loss weights in Equation <ref>, ranging from 1.0 (i.e., only the original appearance target) to 0.0 (i.e., only the motion target). As shown in Fig. <ref>, we find that the joint model outperforms the model with only one reconstruction target, and it achieves the best performance when adopting a loss weight around 0.5. For instance, on DFEW fd1, the best joint model surpasses the standalone appearance model by 1.69% UAR and 0.92% WAR and its motion counterpart by 1.77% UAR and 1.02% WAR. These results indicate that joint masked appearance and motion modeling is indispensable to facilitate better spatiotemporal representation learning for DFER. Beyond MAE-DFER, we also apply joint modeling to VideoMAE, where it likewise brings further improvement (1.51% UAR with 0.30% WAR on DFEW fd1 and 0.39% UAR with 0.13% WAR on FERV39k).
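The joint objective studied here can be sketched as a convex combination of the two masked-reconstruction losses, with `w` playing the role of the loss weight swept in Fig. <ref>; this is our paraphrase of Equation <ref> (the MSE choice and the tensor shapes below are assumptions), not the released code.

```python
import torch
import torch.nn.functional as F

def joint_loss(pred_app, tgt_app, pred_motion, tgt_motion, w=0.5):
    """w * appearance loss + (1 - w) * motion loss; w=1.0 is appearance-only,
    w=0.0 is motion-only, matching the endpoints of the sweep."""
    return (w * F.mse_loss(pred_app, tgt_app)
            + (1.0 - w) * F.mse_loss(pred_motion, tgt_motion))

pred = torch.randn(4, 196, 1536)   # (batch, masked tokens, cube pixels), assumed
print(joint_loss(pred, torch.randn_like(pred), pred, torch.randn_like(pred)).item())
```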
Role of Local Region Size.
We evaluate the effect of different local region sizes in LGI-Former and report the results in Table <ref>. From the table, we notice that the model performance is not very sensitive to the region size. Moreover, the computational costs with different region sizes are similar to each other. These results indicate that, regardless of how the input is divided into local regions, LGI-Former can achieve effective and efficient local-global information exchange via the introduced representative tokens and its specialized designs (i.e., the three key modules). Besides, when using the region size of 2×5×10, the model achieves the best performance-computation trade-off.
Role of Classification Token Type.
We finally explore the effect of two different types of classification tokens (i.e., original tokens and representative tokens) for downstream fine-tuning. As shown in Table <ref>, we find that performing mean pooling on the representative tokens for final classification slightly outperforms doing so on the original tokens. We speculate that this is because the representative tokens are more compact and high-level than the original tokens.
§.§ Comparison with State-of-the-art Methods
Results on Large In-the-wild Datasets.
We first compare MAE-DFER with previous state-of-the-art supervised methods on DFEW, FERV39k, and MAFW in Table <ref>, Table <ref>, and Table <ref>, respectively.
As shown in Table <ref>, it is evident that our MAE-DFER surpasses the previous best methods (i.e., DPC-Net <cit.> and IAL <cit.>) on DFEW by a significant margin, achieving a noteworthy 6.30% UAR and 5.19% WAR improvement.
Besides, we also present the fine-grained performance of each class in Table <ref> of the Appendix; MAE-DFER achieves remarkable improvements across most facial expressions, such as happy, sad, disgust, and fear. Notably, for the disgust expression, which only accounts for 1.2% of the entire dataset and is very challenging for all baselines, MAE-DFER improves the best accuracy by over 10%. This considerable improvement indicates that our method is capable of learning powerful representations for DFER via large-scale self-supervised pre-training.
As for the other two datasets, we have similar observations. On the current largest DFER dataset, FERV39k, MAE-DFER achieves the new state-of-the-art performance, exceeding the previous best methods (i.e., STT <cit.> and IAL <cit.>) by 5.36% UAR and 3.53% WAR. On the MAFW dataset, MAE-DFER outperforms the best-performing T-ESFL <cit.> by a considerable margin of 8.34% UAR and 6.13% WAR. Besides, large performance improvements on several rare expressions are also observed on FERV39k and MAFW from Table <ref> and Table <ref> in Appendix.
In summary, the promising results on three in-the-wild datasets demonstrate the strong generalization ability of MAE-DFER in practical scenarios.
Comparison with VideoMAE.
To verify the effectiveness and efficiency of MAE-DFER, we also show the results of VideoMAE <cit.> on three in-the-wild datasets, including both the original model pre-trained on Kinetics-400 <cit.> for 1600 epochs and the model pre-trained on VoxCeleb2 under the same setting as MAE-DFER.
From Table <ref>-<ref>, we have the following observations: 1) The original VideoMAE model pre-trained on general videos (i.e., action recognition) is largely inferior to its counterpart pre-trained on facial videos, indicating that a large domain gap between self-supervised pre-training and downstream fine-tuning severely hurts performance. 2) Compared with VideoMAE pre-trained on VoxCeleb2, our MAE-DFER largely reduces the computational cost (∼38% FLOPs) during fine-tuning, while achieving comparable performance on DFEW and FERV39k (drops of only 0.11%∼0.19% UAR and 0.17%∼0.32% WAR), and even better performance on MAFW (0.75% UAR and 0.80% WAR improvement). Thus, these results demonstrate the effectiveness and efficiency of the proposed method.
Results on Small Lab-controlled Datasets.
We show the comparison results on CREMA-D, RAVDESS, and eNTERFACE05 in Table <ref>.
Since these datasets are audio-visual datasets, we also present several multimodal methods on some of them for additional comparison.
Compared with in-the-wild datasets, we observe even larger performance improvement on three lab-controlled datasets. On CREMA-D, our MAE-DFER outperforms the best unimodal methods by over 12% UAR and 10% WAR. More surprisingly, it also shows slightly better performance than the state-of-the-art multimodal method, thus amply demonstrating the superiority of MAE-DFER. On RAVDESS, MAE-DFER improves the previous best by more than 12% WAR and also achieves comparable performance with the best audio-visual method. Finally, on eNTERFACE05, MAE-DFER surpasses the best-performing Graph-Tran <cit.> by about 7% WAR.
§.§ Visualization Analysis
Reconstruction Visualization.
We first visualize the reconstructed results of MAE-DFER in Fig. <ref>. The video is randomly selected from the VoxCeleb2 test set.
For better visualization, we use a gray-style background for frame-difference images at even frames, and also show the whole reconstructed video by adding the reconstructed frame-difference images at even frames to the adjacent recovered odd-frame images.
More reconstructed samples can be found in Fig. <ref>, Fig. <ref>, Fig. <ref> and Fig. <ref> of Appendix.
From Fig. <ref>, we see that under such a high masking ratio (75%, or even 90%), MAE-DFER can still generate satisfactory reconstructions of both the facial appearance content and the temporal motion information.
Notably, despite the change in identity information (as the model has not seen this person), the dynamic facial expression can be well restored by reasoning over the limited visible contexts (e.g., the opening mouth).
These results imply that our model is able to learn meaningful dynamic facial representations that capture the global spatiotemporal structure.
Visualization of Embedding Space.
To further qualitatively show the superiority of MAE-DFER over traditional supervised methods, we visualize the learned embeddings using t-SNE <cit.> on DFEW. As can be seen in Fig. <ref>, the embeddings of our method are more compact and separable than those of two state-of-the-art supervised methods (i.e., IAL <cit.> and Former-DFER <cit.>), which demonstrates that MAE-DFER can learn more discriminative representations for different dynamic facial expressions through large-scale self-supervised pre-training. Besides, VideoMAE has a similar embedding space to our MAE-DFER, but at a much larger computational cost.
§ CONCLUSION
In this paper, we have presented a new self-supervised framework, namely MAE-DFER, to exploit large amounts of unlabeled facial video data to address the dilemma of current supervised methods and promote the development of DFER. MAE-DFER is motivated by VideoMAE and improves it by incorporating two special designs: 1) It develops a novel and efficient LGI-Former as the encoder, which can largely reduce the computational cost of the vanilla ViT during downstream fine-tuning. 2) It introduces joint masked appearance and motion modeling to encourage the model to capture both static appearance and dynamic motion information.
To verify the effectiveness of MAE-DFER, we conduct comprehensive experiments on six benchmarks, including three large in-the-wild datasets and three small lab-controlled datasets.
The experimental results show that MAE-DFER consistently and significantly outperforms the previous best supervised methods, which demonstrates that our method is capable of learning powerful dynamic facial representations for DFER.
Furthermore, compared with VideoMAE, MAE-DFER largely reduces the computational cost (about 38% FLOPs), while achieving comparable or even better performance.
We believe MAE-DFER will serve as a strong baseline and foster relevant research in DFER.
In the future, we plan to explore the limits of MAE-DFER. Besides, it would also be interesting to apply it to other related tasks (e.g., dynamic micro-expression recognition and facial action unit detection).
We would like to thank anonymous reviewers for their valuable suggestions.
In the Appendix, we provide more information about the six DFER datasets used in this paper, more ablation studies, detailed results with the fine-grained performance of each class and confusion matrices, and additional visualization examples.
§ DATASETS
To comprehensively evaluate the effectiveness of the proposed methods, we conduct experiments on six DFER datasets, including three large in-the-wild datasets (DFEW <cit.>, FERV39k <cit.>, and MAFW <cit.>) and three small lab-controlled dataset (CREMA-D <cit.>, RAVDESS <cit.>, and eNTERFACE05 <cit.>). Their detailed introductions are given as follows.
Besides, we also show the label distribution on three large in-the-wild datasets (since they are more imbalanced than lab-controlled datasets) in Fig. <ref>.
DFEW comprises 16,372 video clips extracted from over 1,500 high-definition movies, presenting several challenging characteristics, such as extreme illumination and occlusion. Each of these video clips is annotated by ten well-trained annotators with seven basic emotions (i.e., happy, sad, neutral, anger, surprise, disgust, and fear). Most of them range from 2 to 5 seconds in duration. To be consistent with previous work <cit.>, we evaluate our proposed method using the default 5-fold cross-validation protocol on 11,697 single-labeled clips.
FERV39k is currently the largest real-world dynamic facial expression recognition dataset, comprising 38,935 video clips that belong to 22 representative scenes in four different scenarios (i.e., daily life, weak-interactive shows, strong-interactive activities, and anomaly issues). The videos have an average length of 1.5 seconds and are annotated by 30 professional annotators with seven basic emotions. The entire dataset has been officially divided into 80% for training and the remaining 20% for testing.
MAFW is a multimodal compound in-the-wild affective dataset, consisting of 10,045 video clips annotated with 11 compound emotions (including contempt, anxiety, helplessness, disappointment, and seven basic emotions). Each video clip is also accompanied by a textual description of the subject's affective behavior. Similar to DFEW, the video length mostly falls into the interval of 2 to 5 seconds. In this paper, we only consider the video modality and conduct experiments on 9,172 single-labeled video clips. For model evaluation, we follow the original paper <cit.> and adopt the default 5-fold cross-validation protocol.
CREMA-D is a high-quality audio-visual dataset designed for studying the multimodal expression and perception of acted emotions. It comprises 7,442 video clips featuring 91 actors of various ethnicities, each labeled with 6 emotions, including happy, sad, anger, fear, disgust, and neutral. Since there is no official split, we employ a 5-fold subject-independent cross-validation protocol.
RAVDESS is an audio-visual dataset that includes emotional speech and song. It comprises 2,880 video clips featuring 24 professional actors, each labeled with 8 emotions (i.e., 7 basic emotions and calm). In this paper, we only use the speech part consisting of 1,440 video clips. Since this dataset does not have an official split, we adopt the same 6-fold subject-independent cross-validation protocol as in <cit.>.
eNTERFACE05 is another audio-visual emotion recognition dataset that contains approximately 1,200 video clips featuring more than 40 subjects, each simulating 6 emotions, including anger, disgust, fear, happy, sad, and surprise. To ensure a fair comparison with previous work <cit.>, we employ a 5-fold subject-independent cross-validation protocol.
§ MORE ABLATION STUDIES
Model Size.
We investigate the effect of different sizes of LGI-Former on downstream performance. In addition to the default base version (512-dim), we also design two smaller versions, i.e., small (384-dim) and tiny (256-dim). The small version has roughly half the parameters and FLOPs of the base version, and similarly for the tiny version relative to the small one. As shown in Table <ref>, we find that the performance only degrades moderately when the model size becomes smaller, especially for FERV39k. It is worth noting that even the tiny version still largely outperforms the state-of-the-art supervised methods (such as DPCNet <cit.> and IAL <cit.> in Table <ref> and Table <ref>), despite having similar numbers of parameters and similar computational cost, which further demonstrates the superiority of our proposed method.
VideoMAE with Joint Masked Appearance and Motion Modeling.
Besides our MAE-DFER, we further introduce explicit temporal facial motion modeling to VideoMAE. The results are presented in Table <ref>. Similar to our MAE-DFER, we observe that joint masked appearance and motion modeling can further boost the performance of VideoMAE, although standalone motion modeling performs slightly worse than standalone appearance modeling in the original VideoMAE. This result again verifies the effectiveness of joint masked appearance and motion modeling in self-supervised dynamic facial representation learning.
§ DETAILED RESULTS
In this section, we first present more fine-grained results (i.e., accuracy of each class) on DFEW, FERV39k, and MAFW in Table <ref>, Table <ref>, and Table <ref>, respectively.
From three tables, we observe that MAE-DFER significantly outperforms the state-of-the-art supervised methods on most facial expressions, especially on some rare facial expressions (such as disgust, contempt, and disappointment).
For instance, on DFEW, our MAE-DFER surpasses the previous best supervised results by about 5% on happy, 9% on sad, 13% on disgust, and 8% on fear.
On MAFW, it improves the best-performing supervised methods by over 5% on anger, 7% on disgust, 8% on contempt, 8% on anxiety, 6% on helplessness, and 7% on disappointment.
Moreover, compared with VideoMAE pre-trained under the same setting, MAE-DFER has comparable or even better fine-grained performance while largely reducing the computational cost during fine-tuning.
We also note that the original VideoMAE pre-trained on Kinetics-400 does not perform well on some rare expressions (e.g., disgust on FERV39k), although it could achieve the best results on some dominant expressions (e.g., neutral on FERV39k).
These results indicate that our MAE-DFER can effectively and efficiently learn more robust and general representations for DFER via large-scale self-supervised training on abundant unlabeled facial videos, thus mitigating the unbalanced learning issue and achieving superior fine-grained performance.
We also show the confusion matrices on three large in-the-wild datasets in Fig. <ref>.
As can be observed, the neutral expression is more often mis-classified as other expressions (such as disgust on DFEW, surprise on FERV39k, and contempt on MAFW), since their boundaries are typically less clear than those between two more distinct expressions (e.g., happy and sad). Besides, we also notice that fear is often confused with surprise on DFEW and MAFW, but with sad on FERV39k.
§ ADDITIONAL VISUALIZATION
We first show more reconstruction examples from the test set of VoxCeleb2 in Fig. <ref> and Fig. <ref>.
For better visual comparison, we do not show the frame difference images in even frames as in the main text.
We use the model pre-trained under the masking ratio of 90% and evaluate it under 75% or 90% masking. For simplicity, we use random tube masking in the visualization (i.e., unlike in pre-training, we do not apply the same masking ratio to each local region, so regions need not contain an equal number of masked tokens). From Fig. <ref> and Fig. <ref>, we observe that MAE-DFER can generate promising reconstruction results (especially for the dynamic facial expressions) by reasoning about high-level spatiotemporal semantics from the limited visible contexts. In addition to VoxCeleb2, we also present the reconstruction results of several Chinese celebrities with different facial expressions randomly selected from FERV39k in Fig. <ref> and Fig. <ref>. Similar observations hold although the pre-training of MAE-DFER is dominated by English speakers, illustrating the strong generalization ability of our method.
|
http://arxiv.org/abs/2307.02823v1
|
20230706074417
|
A generalized Routh-Hurwitz criterion for the stability analysis of polynomials with complex coefficients: application to the PI-control of vibrating structures
|
[
"Anthony Hastir",
"Riccardo Muolo"
] |
math.OC
|
[
"math.OC",
"math.DS"
] |
anthony.hastir@unamur.be
Department of Mathematics & naXys, Namur Institute for Complex Systems, University of Namur, 5000 Namur, Belgium
Department of Mathematics & naXys, Namur Institute for Complex Systems, University of Namur, 5000 Namur, Belgium
Department of Systems and Control Engineering, Tokyo Institute of Technology, Tokyo 152-8552, Japan
The classical Routh-Hurwitz criterion is one of the most popular methods to study the stability of polynomials with real coefficients, given its simplicity and versatility. However, when moving to polynomials with complex coefficients, a generalization exists but it is rather cumbersome and not as easy to apply. In this paper, we make such a generalization clear and understandable for a wider public and apply it. After having explained the method, we demonstrate its use to determine the external stability of a system consisting of the interconnection between a rotating shaft and a PI-regulator. The extended Routh-Hurwitz criterion then gives necessary and sufficient conditions on the gains of the PI-regulator to achieve stabilization of the system together with regulation of the output. This illustrative example makes our formulation of the extended Routh-Hurwitz criterion ready to be used in several other applications.
Keywords Routh-Hurwitz criterion, complex coefficients polynomials, vibrating structures, PI-control
A case study of the generalized Routh-Hurwitz criterion for the stability analysis of polynomials with complex coefficients, with applications to the PI-control of a rotating shaft
Riccardo Muolo
August 1, 2023
====================================================================================================================================================================================
§ INTRODUCTION
Let us consider the following n-th order polynomial
q(s)=s^n+(a_1+i b_1) s^n-1+⋯+(a_n-1+i b_n-1) s+(a_n+i b_n),
where i denotes the imaginary unit throughout this note.
We want to study its stability, i.e., all its roots need to have negative real part. If all the coefficients b_j=0 ∀ j∈{1,...,n}, meaning that we would be dealing with real coefficients, we would rely on the well-known Routh-Hurwitz criterion <cit.>, which provides a simple algorithm to verify the stability conditions. However, studying the stability of the polynomial (<ref>) is not trivial, and the method to obtain the stability conditions is not as straightforward as its real analogue. In the literature there are some available tools, but they are often developed for specific cases and their applicability is not immediately clear to a general public. For instance, one finds the so-called Kharitonov's theorem, first introduced in <cit.> for polynomials with real coefficients and then extended in <cit.> to the complex case. This theorem consists in determining the region where the roots of a polynomial are located based on the same conclusion obtained for several upper- and lower-polynomials, that is, polynomials with coefficients encapsulating the coefficients of the nominal polynomial, sometimes called interval polynomials. One needs 4 polynomials in the real case, while 8 polynomials need to be used when the coefficients are complex. A few years later, these results were revisited in <cit.> and <cit.>, notably by taking an engineering-oriented point of view. The major drawback of such an approach is that the coefficients of the original polynomial are not used directly, making this method not systematic.
We found that the most general method is the one developed in <cit.> and then recalled in <cit.>, which is the one we will discuss pedagogically in the following. To the best of our knowledge, it is the most natural and direct extension of the classical Routh-Hurwitz criterion to the complex case. However, as a simple counter-example on a polynomial of degree 3 may highlight, the main result presented in <cit.> is wrong when the degree of the considered polynomial is odd. This mistake has been considered again in <cit.> on a particular example in network theory. This constitutes one additional reason for this note to describe the method in a constructive and rigorous way. With an eye on applications, such a criterion turns out to be useful for determining, for instance, the stability of a dynamical system whose dynamics exhibit complex coefficients. Such cases arise often in rotordynamics to describe the behavior of rotating shafts, as highlighted in e.g. <cit.> and <cit.>. Complex coefficients appear also in the dynamics of electrical networks, as described in <cit.>. The criterion discussed in this note should then also be useful for analyzing the stability or developing control methods for such systems and beyond.
The paper is organized as follows: the extension of the classical Routh-Hurwitz criterion is highlighted in Section <ref> as an algorithm, in which we make the distinction between n odd and n even. The case of n=4 is developed in Section <ref>. In the obtained necessary and sufficient conditions, the imaginary parts are then set to 0 to show that the conditions imposed by the classical Routh-Hurwitz criterion are recovered. An example built from rotordynamics is then considered in Section <ref>. A Proportional-Integral (PI) action is applied to the system and the stability properties of the closed-loop system are analyzed with the results described earlier in this note. Some conclusions are drawn in Section <ref>.
§ GENERAL DESCRIPTION OF THE METHOD
Let us again consider the n-th order polynomial given in (<ref>).
As a matter of notation, let ℂ^-_ξ (resp. ℂ^+_ξ) denote the open subset {s∈ℂ, ℜ𝔢(s)<ξ} (resp. {s∈ℂ, ℜ𝔢(s)>ξ}), ξ∈ℝ. We use also the notation | A| for the determinant of the matrix A. The general algorithm that determines whether the roots of q(s) in (<ref>) are in ℂ_0^- is presented here below.
§ EXAMPLE FOR A 4-TH DEGREE POLYNOMIAL
Let us now show, as a pedagogical example, the table of coefficients for a 4-th order polynomial of the form
q(s)=s^4+(a_1+i b_1) s^3+(a_2+i b_2) s^2+(a_3+i b_3) s+(a_4+i b_4)
and explicitly find the stability conditions.
The coefficients obtained with the method described in the previous section are
s^4 1 b_1 a_2 b_3 a_4
a_1 b_2 a_3 b_4
s^3 b_1^(1)=a_1b_1-b_2 a_2^(1)=a_1a_2-a_3 b_3^(1)=a_1b_3-b_4 a_4^(1)=a_1a_4
a_2^(2)=a_1a_2^(1)-b_1^(1)b_2 b_3^(2)=a_1b_3^(1)-b_1^(1)a_3 a_4^(2)=a_1a_4^(1)-b_1^(1)b_4
s^2 b_2^(1)=a_2^(2)b_2-a_1b_3^(2) a_3^(1)=a_2^(2)a_3-a_1a_4^(2) b_4^(1)=a_2^(2)b_4
a_3^(2)=a_2^(2)a_3^(1)-b_2^(1)b_3^(2) b_4^(2)=a_2^(2)b_4^(1)-b_2^(1)a_4^(2)
s^1 b_3^(3)=a_3^(2)b_3^(2)-a_2^(2)b_4^(2) a_4^(3)=a_3^(2)a_4^(2)
s^0 a_4^(4)=a_3^(2)a_4^(3)-b_3^(3)b_4^(2)
Table: Coefficients of the generalized Routh-Hurwitz criterion
The necessary and sufficient conditions for the stability of the polynomial (<ref>) are given by
a_1>0
a_2^(2)=a_1 a_2^(1)-b_1^(1)b_2>0
a_3^(2)=a_2^(2) a_3^(1)-b_2^(1)b_3^(2)>0
a_4^(4)=a_3^(2) a_4^(3)-b_3^(3)b_4^(2)>0
A cumbersome but straightforward algebraic computation, which can be found in Appendix <ref>, gives the expression for the generalized Routh-Hurwitz conditions
a_1>0
a_1^2a_2-a_1a_3-a_1b_1b_2+b_2^2:=β>0
β^2a_3-β a_1^3a_4+β(a_1b_4+a_3b_2)(a_1b_1-b_2)-β a_1b_2(a_1b_3-b_4)+a_1[a_1(a_1b_3-b_4)-a_3(a_1b_1-b_2)]^2:=γ>0
γ^2[a_1^2a_4-b_4(a_1b_1-b_2)]-η[γ(a_1(a_1b_3-b_4)-a_3(a_1b_1-b_2))-βη]>0
where
η=β^2b_4-[βb_2-a_1(a_1(a_1b_3-b_4)-a_3(a_1b_1-b_2))][a_1^2a_4-b_4(a_1b_1-b_2)]
From the above conditions (<ref>), one can recover the classical Routh-Hurwitz conditions when the polynomial has real coefficients, i.e., b_j=0 ∀ j, as we show in Appendix <ref>.
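As a numerical sanity check (ours, not part of the original text), the following snippet evaluates the four conditions above for quartic polynomials built from prescribed roots and compares the verdict with direct root inspection.

```python
import numpy as np

def complex_rh4(a, b):
    """Evaluate conditions (<ref>) for q(s) = s^4 + (a1+i b1)s^3 + ... + (a4+i b4)."""
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    beta = a1**2 * a2 - a1 * a3 - a1 * b1 * b2 + b2**2
    W = a1 * (a1 * b3 - b4) - a3 * (a1 * b1 - b2)    # recurring factor
    X = a1**2 * a4 - b4 * (a1 * b1 - b2)             # recurring factor
    gamma = (beta**2 * a3 - beta * a1**3 * a4
             + beta * (a1 * b4 + a3 * b2) * (a1 * b1 - b2)
             - beta * a1 * b2 * (a1 * b3 - b4) + a1 * W**2)
    eta = beta**2 * b4 - (beta * b2 - a1 * W) * X
    cond4 = gamma**2 * X - eta * (gamma * W - beta * eta)
    return (a1 > 0) and (beta > 0) and (gamma > 0) and (cond4 > 0)

for roots in ([-1 + 2j, -2 - 1j, -0.5 + 0.3j, -3 + 0j],    # all roots in C_0^-
              [0.2 + 1j, -2 - 1j, -0.5 + 0.3j, -3 + 0j]):  # one unstable root
    c = np.poly(roots)                    # monic quartic with complex coefficients
    verdict = complex_rh4(c[1:].real, c[1:].imag)
    print(verdict, all(r.real < 0 for r in roots))  # the two booleans should agree
```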
§ APPLICATION TO THE PI-REGULATION OF A ROTATING SHAFT
An application arising from the theory of rotating shafts is considered in this section. We emphasize that this application has been chosen in order to highlight the efficiency of the generalized Routh-Hurwitz criterion, which is why some physical concepts are omitted. The study of the dynamics of rotating shafts started in 1869 with William John Macquorn Rankine, and interest in such dynamical systems has grown over the years. For an overview on the topic, we refer to e.g. <cit.>. Therein, it is shown that such systems may be modeled by the following ordinary differential equation
ẍ(t) + (2kω + i 2Ω )ẋ(t) + (ω^2-Ω^2)x(t) = f(t),
where k, Ω and ω stand for a damping coefficient, an angular velocity and the frequency of undamped oscillations, respectively. The quantity f acts on the system as an external force. As a control objective for that system, let us consider the regulation of the position x(t) to a constant prescribed reference position denoted by x_ref. To achieve such an objective, one will rely on the well-established Proportional Integral (PI) control, see e.g. <cit.> and references therein. In that way, the forcing term will take the following form
f(t) = k_p(x(t)-x_ref) + k_I ℓ(t),
where the proportional and the integral gains denoted by k_p and k_I need to be determined for the closed-loop system to be stable. The quantity ℓ(t) is updated adaptively as
ℓ̇(t) = x(t) - x_ref.
By introducing the auxiliary variables x_1 = x and x_2 = ẋ, the closed-loop system composed of (<ref>), (<ref>) and (<ref>) reads as
(ẋ_1(t))   (      0              1            0 ) (x_1(t))   (    0     )
(ẋ_2(t)) = (k_p+Ω^2-ω^2  -(2kω + i 2Ω)  k_I) (x_2(t)) - (k_p x_ref)
(ℓ̇(t))    (      1              0            0 ) (ℓ(t))     (  x_ref  )
=: A (x_1(t), x_2(t), ℓ(t))^⊤ - (0, k_p x_ref, x_ref)^⊤.
From the above equations and in particular from the third one, if the gains k_p and k_I are chosen such that the matrix A is stable, then the control objective will be satisfied. In particular, the quantity x_1 will reach the equilibrium x_ref. One needs then to determine in which cases the matrix A is a stable matrix. First observe that the characteristic polynomial of that matrix is given by
q(s) = | sI - A| = s^3 + (2kω +i 2Ω )s^2 + (ω^2 - Ω^2 - k_p)s - k_I.
We shall therefore rely on the generalized Routh-Hurwitz criterion detailed in Algorithm <ref>. The consecutive arrays of numbers generated by this algorithm are given by
a_1^(1) = 2kω b_2^(1) = 0 a_3^(1) = -k_I
b_1^(1) = 4kωΩ a_2^(1) = 2kω(ω^2-Ω^2-k_p)+k_I b_3^(1) = 0
a_2^(2) = 2kω(2kω(ω^2-Ω^2-k_p)+k_I) b_3^(2) = k_I(4kωΩ)
b_2^(2) = -8k_I k^2 ω^2Ω a_3^(2) = -k_I a_2^(2)
a_3^(3) = -k_I(a_2^(2))^2 + 32 k_I^2 k^3ω^3Ω^2
According to Algorithm <ref>, the matrix A is stable if and only if the following three conditions are satisfied
2kω > 0
2kω[2kω(ω^2 - Ω^2-k_p) + k_I] >0
8k_I^2kωΩ^2 - k_I[2kω(ω^2 - Ω^2-k_p) + k_I]^2 >0.
From a physical point of view, the constants k and ω have to be positive, which makes condition (<ref>) always satisfied. One may also observe that when k_I is negative, then condition (<ref>) is automatically satisfied. In that case, it remains to choose the coefficient k_p such that 2kω(ω^2-Ω^2-k_p)>-k_I. This in particular means that conditions (<ref>)–(<ref>) are solvable.
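To illustrate conditions (<ref>)–(<ref>) numerically, the following snippet (our own illustration; the physical parameters and gains below are arbitrary choices satisfying the discussion above) checks them against the eigenvalues of A.

```python
import numpy as np

# Hypothetical parameter values: k, omega, Omega are physical constants of
# the shaft; k_p, k_I are PI gains chosen following the discussion above.
k, omega, Omega = 0.1, 5.0, 0.5
k_I = -3.0                              # negative k_I: third condition holds automatically
k_p = omega**2 - Omega**2 - 10.0        # so that 2*k*omega*(omega^2-Omega^2-k_p) = 10 > -k_I

A = np.array([[0, 1, 0],
              [k_p + Omega**2 - omega**2, -(2 * k * omega + 2j * Omega), k_I],
              [1, 0, 0]], dtype=complex)

inner = 2 * k * omega * (omega**2 - Omega**2 - k_p) + k_I
c1 = 2 * k * omega
c2 = 2 * k * omega * inner
c3 = 8 * k_I**2 * k * omega * Omega**2 - k_I * inner**2
stable_by_rh = (c1 > 0) and (c2 > 0) and (c3 > 0)
stable_by_eig = bool(np.all(np.linalg.eigvals(A).real < 0))
print(stable_by_rh, stable_by_eig)      # True True for these gains
```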
§ CONCLUSION
In this note, we clarified and explained in a constructive and pedagogical way an extension of the classical Routh-Hurwitz criterion to polynomials with complex coefficients. The general algorithm to determine whether the roots of such a polynomial are located in ℂ_0^- or not is given in Section <ref>. Then, this method is extensively developed for a 4-th order polynomial in Section <ref>. Finally, an application to the PI regulation of a rotating shaft whose own dynamics exhibit complex coefficients is detailed in Section <ref>, giving rise to the study of the stability of a 3-rd order polynomial. Our presentation of the algorithm and examples makes the method understandable and ready to use also for scholars and students outside the control community, paving the way for further advancements in applications where complex polynomials appear.
§.§.§ Acknowledgements
A.H. is supported by a FNRS Postdoctoral Fellowship, Grant CR 40010909. R.M. acknowledges funding from the Walloon Region and the FNRS, Grant FC 33443.
§ EXPLICIT DERIVATION OF THE GENERALIZED ROUTH-HURWITZ CONDITIONS
Let us now explicitly derive the conditions (<ref>).
Derivation of the second condition:
a_2^(2) =a_1 a_2^(1)-b_1^(1) b_2 =a_1(a_1 a_2-a_3)-b_2(a_1 b_1-b_2)=
=a_1^2 a_2-a_1 a_3-a_1 b_1 b_2+b_2^2 := β
Derivation of the third condition:
a_3^(2) =a_2^(2) a_3^(1)-b_2^(1) b_3^(2) =a_2^(2)(a_2^(2) a_3-a_1 a_4^(2))-(a_2^(2) b_2-a_1 b_3^(2))(a_1b_3^(1)-b_1^(1) a_3)=
=(a_2^(2))^2a_3-a_1 a_2^(2) a_4^(2)-a_1 b_2 a_2^(2) b_3^(1)+a_3 b_2 a_2^(2) b_1^(1)+a_1^2 b_3^(1) b_3^(2)-a_1 a_3 b_1^(1) b_3^(2)=
=β^2 a_3-β a_1 (a_1 a_4^(1)-b_1^(1) b_4)-β a_1 b_2 (a_1 b_3-b_4)+β a_3 b_2 (a_1 b_1-b_2)+
+a_1^2(a_1 b_3-b_4)(a_1 b_3^(1)-a_3 b_1^(1)) -a_1 a_3(a_1 b_1-b_2)(a_1 b_3^(1)-b_1^(1) a_3)=
=β^2 a_3-β a_1[a_1^2 a_4-b_4(a_1 b_1-b_2)]-β a_1 b_2(a_1 b_3-b_4)+β a_3 b_2(a_1 b_1-b_2)+
+a_1^2(a_1 b_3-b_4)[a_1(a_1 b_3-b_4)-a_3(a_1 b_1-b_2)] -a_1 a_3(a_1 b_1-b_2)[a_1(a_1 b_3-b_4)-a_3(a_1 b_1-b_2)]=
=β^2 a_3-β a_1(a_1^2 a_4-b_4(a_1 b_1-b_2))-β a_1 b_2(a_1 b_3-b_4) +β a_3 b_2(a_1 b_1-b_2)
+[a_1^2(a_1 b_3-b_4)-a_1 a_3(a_1 b_1-b_2)] [a_1(a_1 b_3-b_4)-a_3(a_1 b_1-b_2)]=
= β^2 a_3-β a_1^3 a_4+β a_1 b_4(a_1 b_1-b_2)-β a_1 b_2(a_1 b_3-b_4) +β a_3 b_2(a_1 b_1-b_2)+a_1[a_1(a_1 b_3-b_4)-a_3(a_1 b_1-b_2)]^2=
= β^2a_3-β a_1^3a_4+β(a_1b_4+a_3b_2)(a_1b_1-b_2)-β a_1b_2(a_1b_3-b_4)+a_1[a_1(a_1b_3-b_4)-a_3(a_1b_1-b_2)]^2:=γ
Derivation of the fourth condition:
a_4^(4) =γ a_4^(3)-b_3^(3) b_4^(2)
=γ(γ a_4^(2))-(γ b_3^(2)-β b_4^(2)) b_4^(2)
=γ^2(a_1 a_4^(1)-b_1^(1) b_4)-γ(a_1 b_3^(1)-a_3 b_1^(1))(β b_4^(1)-b_2^(1) a_4^(2))+β(b_4^(2))^2=
=γ^2[a_1^2 a_4-b_4(a_1 b_1-b_2)]-γ[a_1(a_1 b_3-b_4)-a_3(a_1 b_1-b_2)] η +β (b_4^(2))^2
Where we have defined
η :=β^2 b_4-(β b_2-a_1 b_3^(2))(a_1 a_4^(1)-b_1^(1) b_4) =
= β^2 b_4-[β b_2-a_1(a_1 b_3^(1)-a_3 b_1^(1))][a_1^2 a_4-b_4(a_1 b_1-b_2)]
Moreover, we can rewrite the coefficient b_4^(2) in terms of η, obtaining
b_4^(2) = β^2 b_4-b_2^(1) a_4^(2)= β^2 b_4-[β b_2-a_1(a_1 b_3^(1)-a_3 b_1^(1))][a_1^2 a_4-b_4(a_1 b_1-b_2)]=η
Hence, the explicit expression of the fourth condition becomes
a_4^(4)=γ^2[a_1^2a_4-b_4(a_1b_1-b_2)]-η[γ(a_1(a_1b_3-b_4)-a_3(a_1b_1-b_2))-βη]
Lastly, let us derive the explicit expression for η:
η :=β^2 b_4-(β b_2-a_1 b_3^(2))(a_1 a_4^(1)-b_1^(1) b_4) =
= β^2 b_4-[β b_2-a_1(a_1 b_3^(1)-a_3 b_1^(1))][a_1^2 a_4-b_4(a_1 b_1-b_2)]=
= β^2b_4-[β b_2-a_1(a_1(a_1b_3-b_4)-a_3(a_1b_1-b_2))][a_1^2a_4-b_4(a_1b_1-b_2)]
§ ATTAINMENT OF THE CLASSICAL ROUTH-HURWITZ CRITERION IN THE CASE OF REAL COEFFICIENTS
Let us now show that the stability conditions are the same as those of the classical Routh-Hurwitz criterion in the case of real coefficients, namely b_j=0 ∀ j∈{1,...,4}. For simplicity, let us again consider a 4-th order polynomial
p(s)=s^4+a_1s^3+a_2s^2+a_3 s+a_4
The table of the coefficients is given by
s^4 1 a_2 a_4 0
s^3 a_1 a_3 0 0
s^2 (a_1a_2-a_3)/a_1 a_4 0 0
s^1 [(a_1a_2-a_3)a_3-a_1^2a_4]/(a_1a_2-a_3) 0 0 0
s^0 a_4 0 0 0
Table: Coefficients of the classical Routh-Hurwitz criterion
and the necessary and sufficient stability conditions are given by
a_1>0
a_1 a_2-a_3>0
a_4>0
(a_1 a_2-a_3) a_3-a_1^2 a_4>0
When b_j=0 ∀ j∈{1,...,4}, Table <ref> becomes
s^4 1 0 a_2 0 a_4
a_1 0 a_3 0
s^3 b_1^(1)=0 a_2^(1)=a_1a_2-a_3 b_3^(1)=0 a_4^(1)=a_1a_4
a_2^(2)=a_1a_2^(1) b_3^(2)=0 a_4^(2)=a_1a_4^(1)
s^2 b_2^(1)=0 a_3^(1)=a_2^(2)a_3-a_1a_4^(2) b_4^(1)=0
a_3^(2)=a_2^(2)a_3^(1) b_4^(2)=0
s^1 b_3^(3)=0 a_4^(3)=a_3^(2)a_4^(2)
s^0 a_4^(4)=a_3^(2)a_4^(3)
Table: Coefficients of the generalized Routh-Hurwitz criterion for b_j=0
and the necessary and sufficient conditions for stability are given by Eq. (<ref>), which for real coefficients become
a_1>0
a_2^(2)=a_1 a_2^(1)>0
a_3^(2)=a_2^(2) a_3^(1)>0
a_4^(4)=a_3^(2) a_4^(3)>0
Let us remember that they all have to hold simultaneously in order for the system to be stable. The equivalence between the first condition of Eq. (<ref>) and the first of (<ref>) is trivial. The second condition of Eq. (<ref>) gives us
a_2^(2)=a_1 a_2^(1)=a_1( a_1 a_2-a_3)>0
Since a_1>0, we have the second condition of Eq. (<ref>). The third condition of Eq. (<ref>) gives us
a_3^(2) =a_1(a_1 a_2-a_3)[a_1(a_1 a_2-a_3) a_3-a_1^3 a_4]>0
Given the first two conditions, namely a_1>0 and a_1 a_2-a_3>0, we obtain (a_1 a_2-a_3) a_3-a_1^2 a_4>0, which is exactly the fourth condition of Eq. (<ref>). Lastly, the fourth condition of Eq. (<ref>) gives us
a_4^(4)=(a_3^(2))^2a_1^2a_4>0
which reduces to a_4>0, i.e., the third condition of Eq. (<ref>).
Hence, from the generalized Routh-Hurwitz conditions in the case of real coefficients, we recovered the classical Routh-Hurwitz conditions, proving the equivalence.
|
http://arxiv.org/abs/2307.03056v1
|
20230706151953
|
Generalizing Backpropagation for Gradient-Based Interpretability
|
[
"Kevin Du",
"Lucas Torroba Hennigen",
"Niklas Stoehr",
"Alexander Warstadt",
"Ryan Cotterell"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"cs.CL"
] |
Generalizing Backpropagation for Gradient-Based Interpretability
================================================================
Many popular feature-attribution methods for interpreting deep neural networks rely on computing the gradients of a model's output with respect to its inputs.
While these methods can indicate which input features may be important for the model's prediction, they reveal little about the inner workings of the model itself.
In this paper, we observe that the gradient computation of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics about the gradient graph of a neural network, such as the highest-weighted path and entropy.
We implement this generalized algorithm, evaluate it on synthetic datasets to better understand the statistics it computes, and apply it to study BERT's behavior on the subject–verb number agreement task (SVA).
With this method, we (a) validate that the amount of gradient flow through a component of a model reflects its importance to a prediction and (b) for SVA,
identify which pathways of the self-attention mechanism are most important.
§ INTRODUCTION[Code and data available at <https://github.com/kdu4108/semiring-backprop-exps>.]
One of the key contributors to the success of deep learning in NLP has been backpropagation <cit.>, a dynamic programming algorithm that efficiently computes the gradients of a scalar function with respect to its inputs <cit.>.
Backpropagation works by constructing a directed acyclic computation graph[With due care, a computation graph can be extended to the cyclic case.] that describes a function as a composition of various primitive operations, e.g., +, ×, and exp(·), whose gradients are known, and subsequently traversing this graph in topological order to incrementally compute the gradients.
Since the runtime of backpropagation is linear in the number of edges of the computation graph, it is possible to quickly perform vast numbers of gradient descent steps in even the most gargantuan of neural networks.
While gradients are arguably most important for training, they can also be used to analyze and interpret neural network behavior. For example, feature attribution methods such as saliency maps <cit.> and integrated gradients <cit.> exploit gradients to identify which features of an input contribute most towards the model's prediction.
However, most of these methods provide little insight into how the gradient propagates through the computation graph, and those that do are computationally inefficient, e.g., <cit.> give an algorithm for computing the highest-weighted gradient path that
runs in exponential time.
In this paper, we explore whether examining various quantities computed from the gradient graph of a network, i.e., the weighted graph whose edge weights correspond to the local gradient between two nodes, can lead to more insightful and granular analyses of network behavior than the gradient itself.
To do so, we note that backpropagation is an instance of a shortest-path problem <cit.> over the (+, ×) semiring.
This insight allows us to generalize backpropagation to other semirings, allowing us to compute statistics about the gradient graph beyond just the gradient, all while retaining backpropagation's linear time complexity.[
This is analogous to how, in the context of probabilistic context-free grammars, the inside algorithm can be modified to obtain the CKY algorithm <cit.>,
and, in the context of graphical models, how the sum-product algorithm for partition functions can be generalized to the max-product algorithm for MAP inference <cit.>.]
In our experiments, the first semiring we consider is the max-product semiring, which allows us to identify paths in the computation graph which carry most of the gradient, akin to the influence paths of lu-etal-inf-patterns-2021.
The second is the entropy semiring <cit.>,[<cit.> refers to this as the expectation semiring.] which summarizes how dispersed the gradient graph is, i.e., whether the gradient flows in a relatively focalized manner through a small proportion of possible paths or in a widely distributed manner across most paths in the network.
With experiments on synthetic data, we validate that the max-product semiring results in higher values for model components we expect to be more critical to the model’s predictions, based on the design of the Transformer <cit.> architecture.
We further apply our framework to analyze the behavior of BERT <cit.> on a subject–verb agreement task <cit.>.
In these experiments, we find that the keys matrix for subject tokens carries most of the gradient through the last layer of the self-attention mechanism.
Our results suggest that semiring-lifted gradient graphs can be a versatile tool in the interpretability researcher's toolbox.
§ GRADIENT-BASED INTERPRETABILITY
Neural networks are often viewed as black boxes because
their inner workings are too complicated for a user to understand why the model produced a particular prediction for a given input.
This shortcoming has spawned an active field of research in developing methods to better understand and explain how neural networks work.
For example, feature attribution methods
aim to measure the sensitivity of a model's predictions to the values of individual input features.
Many of these methods quantify feature attribution as the gradient of the model's output with respect to an input feature <cit.>.
We note that while the general reliability and faithfulness of gradient-based methods has been a contentious area of research
<cit.>, gradient-based methods have nonetheless continued to be widely used <cit.>.
Other works have applied feature attribution methods to not only highlight sensitive input features but also uncover important internal neurons.
<cit.> define influence as the gradient of a quantity of interest with respect to a neuron, averaged across a collection of inputs of interest.
<cit.> further
define and analyze the notion of influence paths, i.e., paths in the computation graph between the neuron of interest and the output that on average carry most of the gradient.
By applying this method to analyze the behavior of the gulordava-etal-2018-colorless LSTM language model on the SVA task, they draw conclusions about which internal components of the LSTM are most sensitive to the concept of number agreement, based on the paths with the greatest amount of influence.
However, the method of lu-etal-2020-influence
exhaustively enumerates all paths in the computation graph and ranks them by the amount of influence along each one. As the number of paths in a computation graph is usually exponential in the depth of a neural network, this quickly becomes intractable for larger networks <cit.>. Therefore, this method is limited to computing influence paths for networks with very small numbers of paths. Indeed, while <cit.> computed the influence along 40000 paths for a 2-layer LSTM, follow-up work that attempted to apply this method to BERT had to use an approximation which might not find the correct paths <cit.>.
The method we propose does not exhibit this issue and scales to any network one can train using backpropagation.
§ GENERALIZING BACKPROPAGATION
In this section, we build toward our generalization of backpropagation as a semiring-weighted dynamic program.
At a high level, we observe that if we replace the addition and multiplication operations in the typical backpropagation algorithm with similar operations that satisfy the necessary properties, then the resulting algorithm will compute other useful statistics about the network's gradient graph in the same runtime as backpropagation.
In the remainder of this section, we make this notion of swapping operations precise by formulating backpropagation as a semiring algorithm, and later in <ref> we describe how different semirings yield different, useful, views of the gradient graph.
§.§ Computation graphs
Many classes of functions, e.g., machine learning models, can be expressed as compositions of differentiable functions.
Such functions can be described by a computation graph <cit.>.
A computation graph is an ordered[We require that nodes be ordered since primitives might not be invariant to permutations of their arguments, e.g., x/y≠y/x in general.] directed acyclic graph (DAG) where every node is associated with the application of a primitive operation, e.g., +, ×, and exp(·), to the parents of that node.
These primitives all share the property that their gradients have a closed form and are assumed to be computable in constant time for the sake of analysis.
Source nodes in the graph are called input nodes, and every computation graph has a designated output node that encapsulates the result of the function.[For simplicity, we only consider scalar-valued functions, but extensions to vector-valued functions are possible and indeed commonplace in the literature.]
An example computation graph is shown in <ref>.
If all input nodes are assigned a value, then one can perform a forward pass, which calculates the value of the function at those inputs by traversing the graph in a topological order,[A topological ordering of a DAG is an ordering of its nodes such that node i precedes node j whenever there is an edge from node i to node j.] evaluating the values of each node until we reach the output node.
This procedure is shown in <ref>.
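As a minimal illustration of these definitions (our own sketch, not the paper's released code), the following builds a small computation graph for f(x_1, x_2) = x_1 x_2 + exp(x_1) and evaluates it with a topological-order forward pass.

```python
import math

# Each node: (primitive, parent indices), listed in topological order.
# Example function: f(x1, x2) = (x1 * x2) + exp(x1)
GRAPH = [
    ("input", []),            # v1 = x1
    ("input", []),            # v2 = x2
    ("mul",   [0, 1]),        # v3 = v1 * v2
    ("exp",   [0]),           # v4 = exp(v1)
    ("add",   [2, 3]),        # v5 = v3 + v4   (output node)
]

PRIMS = {"add": lambda a, b: a + b,
         "mul": lambda a, b: a * b,
         "exp": math.exp}

def forward(graph, inputs):
    """Evaluate node values in topological order; returns all node values."""
    vals, it = [], iter(inputs)
    for op, parents in graph:
        vals.append(next(it) if op == "input"
                    else PRIMS[op](*(vals[p] for p in parents)))
    return vals

vals = forward(GRAPH, [2.0, 3.0])
print(vals[-1])   # 6 + e^2 ≈ 13.389
```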
§.§ Backpropagation
Encoding a function as a computation graph is useful because it enables the efficient computation of its gradients via automatic differentiation <cit.>.
Let G be a computation graph with topologically sorted nodes v_1, …, v_N, where v_N is its output node.
The goal of automatic differentiation is to compute ∂v_N/∂v_i for some node v_i in G.
<cit.> shows that ∂v_N/∂v_i can be expressed as:
∂v_N/∂v_i = ∑_p ∈𝒫(i, N)∏_(j, k) ∈ p ∂v_k/∂v_j
where 𝒫(i, N) denotes the set of Bauer paths—directed paths in the computation graph G from node v_i to node v_N.[A directed path is an ordered set of node pairs, i.e., ⟨ (i_1, i_2), (i_2, i_3), …, (i_p - 1, i_p)⟩ where the second element of each pair matches the first element of the subsequent pair.]
That is, the gradient of the output v_N with respect to a node v_i equals the sum of the gradient computed along every path between v_i and v_N, where the gradient along a path is the product of the gradient assigned to each edge along that path.
The gradient of each edge is easy to compute, as it corresponds to the gradient of a primitive.
To distinguish the original, unweighted computation graph from its gradient-weighted counterpart, we call the latter the gradient graph (·) of a function;
an example is shown in <ref>.
Note that this is a function of the input nodes, since the edge gradients are dependent on the input nodes.
In general, naïvely computing <ref> term by term is intractable since |𝒫(i, N)| can be exponential in the number of nodes in the computation graph.
By leveraging the distributivity of multiplication over addition, backpropagation[Also known as reverse-mode automatic differentiation.] uses dynamic programming and the caching of intermediate values from the forward pass to compute <ref> in O(|E|) time, where |E| is the number of edges in G <cit.>.
Backpropagation can be seen as traversing the computation graph in reverse topological order and computing the gradient of the output node with respect to each intermediate node until v_i is reached.[Another efficient algorithm for computing <ref> is forward-mode automatic differentiation, which is most useful when one has more output nodes than input nodes in the network <cit.>. Since our formulation assumes a single output node, we focus solely on backpropagation.]
§.§ Semiring backpropagation
The crucial observation at the core of this paper is that backpropagation need not limit itself to addition and multiplication: If, instead, we replace those operations with other binary operators that also exhibit distributivity, say ⊕ and ⊗, then this new algorithm would compute:
∂v_N/∂v_i ≜ ⊕_p ∈𝒫(i, N) ⊗_(j, k) ∈ p ∂v_k/∂v_j
Clearly, the interpretation of this resulting quantity depends on how ⊕ and ⊗ are defined.
We discuss different options in <ref>, and in the remainder of this section we focus on how ⊕ and ⊗ have to behave to make them suitable candidates for replacement.
To make this notion more rigorous, we first need to introduce the notion of a semiring.
A semiring (over a set ) is an algebraic structure (, ⊕, ⊗, 0̅, 1̅) such that:
* ⊕: ×→ is a commutative and associative operation with identity element 0̅;
* ⊗: ×→ is an associative operation with identity element 1̅;
* ⊗ distributes over ⊕;
* 0̅ is an annihilator, i.e., for any k ∈, k ⊗0̅ = 0̅ = 0̅⊗ k.
If we replace the operations and identity elements in backpropagation according to the semiring identities and operations, we obtain semiring backpropagation, shown in <ref>.
Regular backpropagation amounts to a special case of the algorithm when run on the sum-product semiring (ℝ, +, ×, 0, 1).
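To make the generalization concrete, here is a sketch (ours, not the repository code) of reverse-topological semiring backpropagation, reusing GRAPH, PRIMS, and forward from the computation-graph sketch above; instantiating `plus`/`times` with + and × recovers the ordinary gradient.

```python
import math

# Local derivatives of each primitive w.r.t. each of its arguments.
GRADS = {"add": lambda a, b: (1.0, 1.0),
         "mul": lambda a, b: (b, a),
         "exp": lambda a: (math.exp(a),)}

def semiring_backprop(graph, vals, plus, times, zero, one, lift=lambda w: w):
    """Traverse the graph in reverse topological order, accumulating the
    (lifted) edge gradients with the semiring operations (plus, times)."""
    acc = [zero] * len(graph)
    acc[-1] = one                      # output node: multiplicative identity
    for i in reversed(range(len(graph))):
        op, parents = graph[i]
        if op == "input":
            continue
        local = GRADS[op](*(vals[p] for p in parents))
        for p, g in zip(parents, local):
            acc[p] = plus(acc[p], times(lift(g), acc[i]))
    return acc

vals = forward(GRAPH, [2.0, 3.0])
# The sum-product semiring recovers ordinary backpropagation:
grad = semiring_backprop(GRAPH, vals, lambda a, b: a + b,
                         lambda a, b: a * b, 0.0, 1.0)
print(grad[0], 3.0 + math.exp(2.0))    # df/dx1 = x2 + e^{x1} ≈ 10.389 (twice)
```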
Aggregated derivative.
<Ref> defines ∂v_N/∂v_i
for a single node v_i.
However, often it is useful to aggregate this quantity across a set of nodes.
For example, when a token is embedded into a d-dimensional vector, each of its dimensions corresponds to a node in the computation graph, say 𝒱 = {v_1, …, v_d}.
Then, ∂v_N/∂v_j for the j^th component of the representation does not capture the semiring-derivative with respect to the entire representation of the token.
Hence, we define the aggregated derivative with respect to a set of nodes 𝒱 as:[This is equivalent to adding a dummy source node v_0 with outgoing edges of weight 1̅ to each node v ∈𝒱 to the gradient graph and computing ∂v_N/∂v_0.]
∂v_N/∂𝒱 ≜ ⊕_v ∈𝒱 ∂v_N/∂v
§ INTERPRETING SEMIRING GRADIENTS
In <ref>, we showed how to generalize backpropagation to the semiring case.
For any semiring of our choosing, this modified algorithm will compute a different statistic associated with a function's gradient.
We begin by motivating the standard (+, ×) semiring which is common in the interpretability literature, before discussing the implementation and interpretation of the max-product and entropy semirings we focus on in this work.
§.§ What is a (+, ×) gradient?
We start by reviewing the gradient interpretation in the (+, ×) semiring, which corresponds to the standard definition of the gradient.
We explain why and how the gradient can be useful for interpretability.
Let f : ^D → be a function differentiable at ∈^D (e.g., a neural network model).
The derivative of f at , f(), can be interpreted as the best linear approximation of the function at <cit.>, viz., for any unit vector ∈^D and scalar ϵ > 0, we have:
f( + ϵ) = f() + f()^⊤ (ϵ) + o(ϵ)
As such, one can view gradients as answering counterfactual questions:
If we moved our input in the direction for some small distance ϵ, what is our best guess (relying only on a local, linear approximation of the function) about how the output of the model would change?[Indeed, this locality is a common source of criticism for gradient-based interpretability metrics as discussed in <ref>.]
Gradient-based methods (as discussed in <Ref>) are useful to interpretability precisely because of this counterfactual interpretation.
In using gradients for interpretability, researchers typically implicitly consider = _i, i.e., the i^th natural basis vector, which approximates the output if we increment the model's i^th input feature by one.
We can then interpret the coordinates of the gradient as follows: If its i^th coordinate is close to zero, then we can be reasonably confident that small changes to that specific coordinate of the input should have little influence on the value of f.
However, if the gradient's i^th coordinate is large in magnitude (whether positive or negative), then we may conclude that small changes in the i^th coordinate of the input should have a large influence on the value of f.
The subsequent two sections address a shortcoming in exclusively inspecting the gradient, which is fundamentally an aggregate quantity that sums over all individual Bauer paths.
This means, however, that any information about the structure of those paths is left out, e.g., whether a few paths' contributions dominate the others.
The semiring gradients that we introduce in the sequel offer different angles of interpretation of such counterfactual statements.
§.§ What is a (max, ×) gradient?
While the (+, ×) gradient has a natural interpretation given by calculus and has been used in many prior works <cit.> to identify input features that are most sensitive to a model's output, it cannot tell us how the gradient flows through a gradient graph, as discussed in <Ref>.
One way to compute a different quantity is to change the semiring.
The max-product semiring (ℝ ∪ {-∞, +∞}, max, ×, -∞, 1) is an enticing candidate:
In contrast to the (+, ×) semiring, computing the gradient with respect to the (max, ×) semiring can help illuminate which components of the network are most sensitive or critical to the model's input.
The (max, ×) gradient specifically computes the gradient along the Bauer path that has the highest value.
We term this path the top gradient path in the sequel.
Formally, the (max, ×) gradient between v_i and v_N
is:
v_Nv_imax_p ∈𝒫(i, N)∏_(j, k) ∈ pv_kv_j
Note that variants of this definition are possible, e.g., we could have considered the absolute values of the gradients |∂v_k/∂v_j| if we did not care about the most positive impact as opposed to the overall impact on the output v_N.
The top gradient path can be used to examine branching points in a model's computation graph.
For example,
in Transformer <cit.> models, the input to an attention layer branches when it passes through both the self-attention mechanism and a skip connection.
The input further branches within the self-attention mechanism between the keys, values, and queries (see <ref> for an illustration).
By examining the top gradient path at this branching point, we can identify not only whether the skip connection or self-attention mechanism is more critical to determining input sensitivity, but also which component within the self-attention mechanism itself (keys, queries, or values) carries the most importance.
Implementation.
By using the max-product semiring in the backpropagation algorithm, we can compute the top gradient path in O(|E|) time, where |E| is the number of edges in the computation graph <cit.>.
See <ref> for more details.
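To make the dynamic program concrete, the following is a minimal Python sketch of max-product backpropagation over a toy gradient graph. The dictionary-based `parents` representation, node names, and edge weights are our own illustrative assumptions, not the Brunoflow/JAX interface, and for simplicity this version assumes non-negative local derivatives (the appendix describes the max/min bookkeeping needed for signed edges).

# Toy reverse gradient graph: parents[j] lists (k, dv_k/dv_j) edges leading
# from node j toward the output node "out". Hypothetical example values.
parents = {
    "x": [("a", 2.0), ("b", 3.0)],
    "a": [("out", 0.5)],
    "b": [("out", 4.0)],
    "out": [],
}

def top_gradient(j, out="out"):
    """(max, x) semiring derivative of `out` w.r.t. node j: the largest
    product of local derivatives over all Bauer paths from j to the output.
    Assumes non-negative edge weights so max distributes over products;
    memoize over a topological order for O(|E|) time on large graphs."""
    if j == out:
        return 1.0  # the semiring one: the empty path
    return max(top_gradient(k, out) * g for k, g in parents[j])

print(top_gradient("x"))  # 12.0, via the path x -> b -> out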
§.§ What is an entropy gradient?
In addition to identifying the single top gradient path, it is also helpful to have a more holistic view of the gradient paths in a graph.
In particular, we may be interested in the path entropy of the gradient graph, i.e., the dispersion of the magnitudes of the path weights.
Formally, for an input x and its corresponding gradient graph G(x) with nodes v_1, …, v_N, the entropy of all paths between v_i and v_N is defined as:
H(∂v_N/∂v_i) ≜ -∑_{p ∈ 𝒫(i, N)} |∇(p)/Z| log |∇(p)/Z|
where ∇(p) ≜ ∏_{(j, k) ∈ p} ∂v_k/∂v_j is the gradient of path p and Z = ∑_{p ∈ 𝒫(i, N)} |∇(p)| is a normalizing factor.
Intuitively, under this view, the gradient graph G(·) encodes an (unnormalized) probability distribution over paths between v_i and v_N where the probability of a given path is proportional to the absolute value of the product of the gradients along each edge.
The entropy then describes the dispersion of the gradient's flow through all the possible paths in the graph from v_i to v_N.
For a given graph, the entropy is greatest when the gradient flows uniformly through all possible paths, and least when it flows through a single path.
Implementation.
<cit.> proposed to efficiently compute the entropy of a graph by lifting the graph's edge weights into the expectation semiring (ℝ × ℝ, ⊕, ⊗, 0̄, 1̄) where 0̄ = ⟨0, 0⟩, 1̄ = ⟨1, 0⟩ and:
* ⊕: ⟨a, b⟩ ⊕ ⟨c, d⟩ = ⟨a + c, b + d⟩
* ⊗: ⟨a, b⟩ ⊗ ⟨c, d⟩ = ⟨ac, ad + bc⟩
To leverage the expectation semiring, we first lift the weight of each edge in the gradient graph from w to ⟨|w|, |w|log|w|⟩
(where w is the local derivative between two connected nodes in the gradient graph).
Then, by computing:
⊕_{p ∈ 𝒫(i, N)} ⊗_{(j, k) ∈ p} ⟨|∂v_k/∂v_j|, |∂v_k/∂v_j| log |∂v_k/∂v_j|⟩
in linear time using <Ref>, we obtain ⟨Z, ∑_{p ∈ 𝒫(i, N)} |∇(p)| log |∇(p)|⟩, i.e., the normalizing factor and the unnormalized (negated) entropy of the graph, respectively.
As shown by <cit.>, we can then recover the entropy as H(∂v_N/∂v_i) = log Z - (1/Z) ∑_{p ∈ 𝒫(i, N)} |∇(p)| log |∇(p)|.
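As a concrete illustration, here is a minimal sketch of the expectation-semiring computation on the same kind of toy graph as before; the recursive traversal and dictionary encoding are our simplifications (a linear-time version would memoize over a topological order).

import math

# Same toy reverse gradient graph encoding as in the earlier sketch.
parents = {
    "x": [("a", 2.0), ("b", -3.0)],
    "a": [("out", 0.5)],
    "b": [("out", 4.0)],
    "out": [],
}

def lift(w):
    """Lift an edge weight w to <|w|, |w| log |w|> in the expectation semiring."""
    w = abs(w)
    return (w, w * math.log(w)) if w > 0 else (0.0, 0.0)

def eplus(a, b):   # semiring addition: elementwise sum
    return (a[0] + b[0], a[1] + b[1])

def etimes(a, b):  # semiring multiplication: <ac, ad + bc>
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def expectation_backprop(j, out="out"):
    """Returns <Z, sum_p |grad(p)| log |grad(p)|> over paths from j to out."""
    if j == out:
        return (1.0, 0.0)  # the semiring one
    total = (0.0, 0.0)     # the semiring zero
    for k, g in parents[j]:
        total = eplus(total, etimes(lift(g), expectation_backprop(k, out)))
    return total

def path_entropy(j):
    Z, s = expectation_backprop(j)
    return math.log(Z) - s / Z  # log Z - (1/Z) sum_p |grad(p)| log |grad(p)|

print(path_entropy("x"))  # entropy of the path distribution from x to out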
§ EXPERIMENTS
To demonstrate the utility of semiring backpropagation, we empirically analyze its behavior on two simple transformer models (1-2 layers) on well-controlled, synthetic tasks.
We also explore semiring backpropagation on a larger model, BERT <cit.>, on the popular analysis task of subject–verb agreement (SVA) to understand how our method can be useful for interpreting language models in more typical settings.
To implement semiring backpropagation, we developed our own Python-based reverse-mode automatic differentiation library, building off of the pedagogical library Brunoflow <cit.> and translating it into JAX <cit.>.[Library available at <https://github.com/kdu4108/brunoflow>.]
§.§ Validation on a synthetic task
Setup.
In this experiment, we test the hypothesis that most of the gradient should flow through the components that we judge a priori to be most critical to the model's predictions.
We are particularly interested in whether the gradient flow through a Transformer matches our expectation of the self-attention mechanism's components.
So, while we compute the top gradient path from the output to the input representations, we only inspect the top path at a Transformer's main branching point, which is when the hidden state is passed into the skip connection and the keys, values, and queries of the self-attention mechanism (<ref>).
If we observe higher levels of gradients flowing through one branch, a natural interpretation is that this component is more critical for the model's prediction.
To test whether this interpretation is justified, we construct a task where we can clearly reason about how a well-trained Transformer model ought to behave and identify how well the top gradient flow aligns with our expectations of a model's critical component.
Model. We use a 1-layer Transformer model with hidden layer size of 16 and 2 attention heads to minimize branching points and increase interpretability.
We train this model to achieve 100% validation accuracy on the task described below.
Task.
We design the task to target the utility of this method for interpreting the self-attention mechanism.
In this task, an input consists of a sequence of numbers, which is labeled positive if the first token appears again at any point in the sequence and negative otherwise.
Furthermore, the inputs are constrained such that the first token will be repeated at most once, to isolate the decision-making of the model to the presence (or lack thereof) of a single token.
We randomly generate a dataset of 10000 points with sequence length 10 and vocab size 20.
The correct decision-making process for this task entails comparing the first token to all others in the sequence and returning the positive label if there is a match.
This is, in fact, analogous to how queries and keys function within the self-attention mechanism: A query q_t is compared to the key k_t' of each token t' in the sequence and the greater the match, the greater attention paid to token t' by query token t.
We would therefore expect that the self-attention mechanism relies heavily on the query representation of the first token and key representations of the remaining tokens and, in particular, the key representation of the repeated token, if present.
In turn, we hypothesize the max-product gradient value will primarily originate from the queries branch for the first token and keys for the remaining tokens, and be especially high for the repeat token.
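For concreteness, a data generator for this task might look as follows. This is a hypothetical sketch: the paper does not specify the exact sampling scheme, so the 50/50 label balance and the rejection-free construction below are our assumptions.

import random

def make_example(seq_len=10, vocab_size=20):
    """Sample one sequence for the first-token-repetition task. The first
    token is repeated at most once, per the task constraints."""
    first = random.randrange(vocab_size)
    rest = [random.choice([t for t in range(vocab_size) if t != first])
            for _ in range(seq_len - 1)]
    label = random.random() < 0.5
    if label:  # positive: plant exactly one repetition of the first token
        rest[random.randrange(seq_len - 1)] = first
    return [first] + rest, int(label)

dataset = [make_example() for _ in range(10000)]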
Results.
The results, summarized in <Ref>, provide strong evidence for our hypothesis that the behavior of the (max, ×) gradient reflects the importance of the different model components.
We observe all expected gradient behaviors described in the previous paragraph, and especially that the highest gradient flow (for any token) is through the keys of the repeat token.
§.§ Top gradient path of BERT for subject–verb agreement
Setup.
We now apply this method to understand the self-attention mechanism of a larger model (BERT) for the more complex NLP task of SVA.
We subsample 1000 examples from the dataset from <cit.> and use spaCy <cit.> to identify the subject and attractors within each sentence. We then filter down to 670 sentences after removing sentences where BERT tokenizes the subject or attractors as multiple tokens.
Using the max-product semiring, we then compute the top gradient path through the different branches (skip connection, keys, values, and queries) for (a) the subject of a sentence, (b) the attractors of a sentence, and (c) all tokens of a sentence.
Model.
BERT <cit.> is a popular encoder-only Transformer model for many NLP tasks.
BERT's architecture consists of multiple Transformer encoder layers stacked atop each other, along with a task-specific head.
We use the pretrained model from Huggingface <cit.>, which has 6 attention layers, hidden size of 512, and 8 attention heads.
Task.
We consider the subject–verb number agreement task in our experiments.
Variants of this task in English have become popular case studies in neural network probing.
Notably, this phenomenon has been used to evaluate the ability for models to learn hierarchical syntactic phenomena <cit.>. It has also served as a testing ground for interpretability studies which have found evidence of individual hidden units that track number and nested dependencies <cit.>, and that removing individual hidden units or subspaces from the models' representation space have a targeted impact on model predictions <cit.>.
Our formulation of the task uses BERT's native masked language modeling capability by recasting it as a cloze task: We mask a verb in the sentence and compare the probabilities with which BERT predicts the verb forms with correct and incorrect number marking. For example, given the input "all the other albums produced by this band [MASK] their own article," we compare the probabilities of "have" (correct) and "has" (incorrect).
We compute the gradient with respect to the difference between the log probability of the two inflections.
The data for this experiment is from <cit.>. All the examples in their dataset also include one or more attractors. These are nouns such as “band” in the example above, which (a) are not the subject, (b) precede the verb, and (c) disagree with the subject in number. Furthermore, all masked verbs are third person and present tense, to ensure that number agreement is non-trivial.
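The scoring function that we differentiate can be reproduced with standard tools. Below is a sketch using Huggingface Transformers and PyTorch that computes the ordinary (+, ×) gradient of the log-probability difference with respect to the input embeddings; the semiring gradients themselves require our custom library, and the checkpoint name here is illustrative (any 6-layer, 512-hidden, 8-head BERT would match the setup).

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "google/bert_uncased_L-6_H-512_A-8"  # hypothetical checkpoint choice
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

sent = "all the other albums produced by this band [MASK] their own article"
inputs = tok(sent, return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]

# Differentiate w.r.t. the input embeddings rather than the integer ids.
embeds = model.get_input_embeddings()(inputs.input_ids).detach().requires_grad_(True)
logits = model(inputs_embeds=embeds, attention_mask=inputs.attention_mask).logits
logp = logits[0, mask_pos].log_softmax(-1)

correct = tok.convert_tokens_to_ids("have")
incorrect = tok.convert_tokens_to_ids("has")
(logp[correct] - logp[incorrect]).backward()  # standard (+, x) gradient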
Results.
From <ref>, we highlight key differences between the (max, ×) gradient behavior for subject tokens and all tokens in general.
Most saliently, for subject tokens only, the max-product gradient flows entirely through the self-attention mechanism in the last layer and mostly through the skip connection in earlier layers, which is consistent with findings from <cit.>.
Moreover, within the self-attention mechanism, most (76%) of the gradient in the last layer for the subject flows through the keys matrix.
In contrast, across all tokens, the top gradient path mostly passes through the skip connection in all layers, and is otherwise more evenly distributed between keys and values.
We also note similarities and differences between the gradient flows of the subject and preceding attractors.
Both exhibit a similar trend in which the gradient flows primarily through the keys (and entirely through the self-attention mechanism) in the last layer.
However, the top gradient has a greater magnitude for the subject than the attractors (especially in the keys).
Since self-attention uses a token's keys to compute the relative importance of that token to the querying token, we speculate that the max-product gradient concentrating primarily on the keys (and more so for the subject than the attractors) reflects that a successful attention mechanism relies on properly weighting the importance of the subject and attractors.
§.§ Gradient graph entropy vs. task difficulty
Setup. This experiment tests the hypothesis that the entropy of a model's gradient graph is positively correlated with the difficulty of the task that the model was trained to solve.
We construct a variety of synthetic tasks and compare the average gradient entropy of a 2-layer transformer on examples in each of these tasks.
We measure the difficulty of a task with the minimum description length
<cit.>.[The MDL of a dataset under a model measures the number of bits required to communicate the labels of the dataset, assuming the sender and receiver share both the unlabeled data and a model, which can be used to reduce the information the sender must transmit.
Alternatively, MDL can be thought of as the area under the loss curve as a function of dataset size.]
Following the approach used by <cit.> and <cit.>, we measure MDL by repeatedly training the model on the task with increasing quantities of data and summing the loss from each segment.
The higher the MDL, the more difficulty the model had in extracting the labels from the dataset, and therefore the more challenging the task.
We hypothesize that a model will have higher entropy for more difficult tasks because it will require using more paths in its computation graph.
During our analysis, we drop runs where the model was unable to achieve a validation accuracy of >90%, to avoid confounding results with models unable to learn the task.
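For reference, the online-coding procedure we use to estimate MDL can be sketched as follows; `train_fn` and `eval_nll_fn` are placeholders for the actual training and evaluation loops, the geometric block schedule is one common choice rather than the exact one used here, and the cost of the first block under a uniform code is omitted.

import math

def online_code_mdl(train_fn, eval_nll_fn, dataset, n_blocks=8):
    """Prequential (online coding) MDL estimate: train on growing prefixes
    and sum the cost of encoding each next block under the current model.
    `train_fn(data) -> model`; `eval_nll_fn(model, data) -> total NLL in nats`.
    """
    n = len(dataset)
    # geometrically growing prefix boundaries, e.g., n/2^8, n/2^7, ..., n
    cuts = [max(1, int(n * 2 ** -(n_blocks - i))) for i in range(n_blocks)]
    cuts.append(n)
    total_nats = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        model = train_fn(dataset[:lo])          # retrain on the prefix
        total_nats += eval_nll_fn(model, dataset[lo:hi])
    return total_nats / math.log(2)             # convert nats to bits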
Model. For all tasks, we use the same 2-layer transformer architecture with a hidden layer size of 64 and 4 attention heads; it always predicts a distribution over 36 classes (with some possibly unused), which ensures our results are comparable across tasks with different numbers of classes.
We train the models for 50 epochs on each of the synthetic datasets.
Task. We design a variety of synthetic tasks in order to control for difficulty more directly.
In the first family of tasks, an input is a sequence of S numbers and is labeled positive or negative based on whether the input contains all tokens in a pre-specified token set.
Different tasks within this family are defined by the pre-specified token set.
The BinCountOnes family of tasks is parameterized by a number of classes C. In this task, an input x is a sequence of S numbers.
The label y is determined by the number of 1s in the sequence according to the following function:
y(x) = ⌈ Count1(x) / (S/C) ⌉ - 1,
where Count1(x) is the number of 1s in x; i.e., in the 2-class instance of BinCountOnes, an input is labeled 0 if it contains ≤ S/2 1s and 1 if it contains > S/2 1s.
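In code, the labeling rule reads as follows (matching the formula above; the all-zeros input is a degenerate edge case we leave unhandled):

import math

def bincountones_label(x, C):
    """Label for the BinCountOnes task with C classes and sequences of
    length S = len(x): ceil(Count1(x) / (S / C)) - 1."""
    S = len(x)
    return math.ceil(x.count(1) / (S / C)) - 1

assert bincountones_label([1, 1, 1, 0, 0, 0], C=2) == 0   # <= S/2 ones
assert bincountones_label([1, 1, 1, 1, 0, 0], C=2) == 1   # >  S/2 ones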
Finally, we also evaluate on four synthetic datasets from <cit.>.
For more details, see <ref>.
Results.
The results show clear evidence against our initial hypothesis that gradient entropy increases as a function of task difficulty, as measured by MDL.
While there appears to be some patterns evident between entropy and MDL in <Ref>, their interpretation is unclear.
From observing the lightest-hued points, there appears to be a negative linear relationship between entropy and MDL for the binary tasks.
However, confusingly, the points seem to suggest a quadratic-like relationship between entropy and MDL for the BinCountOnes tasks.
We speculate that this could be explained by a phase-change phenomenon in the model's learning dynamics.
That is, for sufficiently easy tasks, the model need not focalize much in order to solve the task.
Incrementally more difficult tasks may require the model to focalize more, thus resulting in the decreasing entropy for tasks below a certain MDL threshold.
Then, once a task is sufficiently difficult, the model is required to use more of the network to solve the task.
Therefore, we see this increase in entropy as the MDL increases past a certain threshold for the task.
The presence of these clear (although somewhat mystifying) patterns indicates that there exists some relationship between entropy and MDL.
More experimentation is needed to understand the relationship between entropy and MDL for task difficulty.
§ CONCLUSION
We presented a semiring generalization of the backpropagation algorithm, which allows us to obtain an alternative view into the inner workings of a neural network.
We then introduced two semirings, the max-product and entropy semirings, which provide information about the branching points of a neural network and the dispersion of the gradient graph.
We find that gradient flow reflects the importance of model components, that gradients flowing through the self-attention mechanism for the subject token pass primarily through the keys matrix, and that the entropy bears some relationship to the difficulty of learning a task.
Future work will consider semirings outside the scope of this work,
e.g., the top-k semiring <cit.> to track the top-k gradient paths, as well as computing semirings online for control during training.
§ LIMITATIONS
While our approach inherits the linear runtime complexity of the backpropagation algorithm, runtime concerns should not be fully neglected.
Firstly, the linear runtime is only an analytical result, not an empirical measure. This means that the actual runtime of the backpropagation and thus our algorithm depend heavily on their implementation.
For instance, some deep learning frameworks do a better job at reusing and parallelizing computations than others <cit.>.
Indeed, our code is optimized for good readability and extensibility at the expense of speed, which hints at another limitation of our approach:
Our approach requires deep integration with the framework as it needs access to all model weights and the computation graph.
For this reason, our approach cannot be easily packaged and wrapped around any existing model or framework and we instead developed our own JAX-based reverse-mode autodifferentiation library, based on the numpy-based Brunoflow library <cit.>.
While we release our library to enable other researchers to analyze models through their gradient graphs, it faces some computational and memory constraints.
In our experiments, running the three semirings together on a single sentence can take several minutes (depending on sentence length) using the 6-layer pretrained BERT from Huggingface <cit.>, totaling our experimentation time on our datasets at about 10 CPU-hours.
For improved adoption of this method, we encourage the direct integration of semiring implementations into the most popular deep learning frameworks.
Our final point pertains not only to our study but to most interpretability approaches: One has to be careful when drawing conclusions from gradient paths.
Cognitive biases, wrong expectations, and omitted confounds may lead to misinterpretation of results.
§ ETHICS STATEMENT
We foresee no ethical concerns with this work. Our work aims to make the inner workings of neural network models more interpretable. On this account, we hope to contribute to reducing biases inherent in model architectures, pre-trained model weights, and tasks by increasing overall transparency.
§ ACKNOWLEDGEMENTS
Kevin Du acknowledges funding from the Fulbright/Swiss Government Excellence Scholarship.
Lucas Torroba Hennigen acknowledges support from the Michael Athans fellowship fund.
Niklas Stoehr acknowledges funding through the Swiss Data Science Center (SDSC) Fellowship. Alex Warstadt acknowledges support through the ETH Postdoctoral Fellowship program.
§ IMPLEMENTATION OF TOP GRADIENT PATH
In practice, we implement the top gradient path by storing 4 additional fields on each node in the graph: the most positive gradient of the node (call this field maxgrad), a pointer to the child node which contributed this most positive gradient, the most negative gradient of the node (mingrad), and a pointer to the child node which contributed this most negative gradient. In this way, each node tracks the paths containing the most positive gradient (maxgrad) and most negative gradient (mingrad) from itself to the output node. To dynamically extend the path from v_k to v_j (j < k):

v_j.maxgrad = { v_k.maxgrad · ∂v_k/∂v_j   if ∂v_k/∂v_j ≥ 0
              { v_k.mingrad · ∂v_k/∂v_j   otherwise

v_j.mingrad = { v_k.mingrad · ∂v_k/∂v_j   if ∂v_k/∂v_j ≥ 0
              { v_k.maxgrad · ∂v_k/∂v_j   otherwise
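A minimal sketch of this bookkeeping in Python follows; the class and field names are ours, chosen to mirror the four fields described above rather than the actual library internals.

class Node:
    """Gradient-graph node tracking the extreme path gradients to the output."""
    def __init__(self):
        self.max_grad = float("-inf")  # most positive path gradient
        self.max_child = None          # child contributing max_grad
        self.min_grad = float("inf")   # most negative path gradient
        self.min_child = None          # child contributing min_grad

def extend(v_j, v_k, local_grad):
    """Extend v_k's best/worst paths across the edge (j, k) with weight
    dv_k/dv_j = local_grad; a negative edge swaps the roles of max and min."""
    hi = (v_k.max_grad if local_grad >= 0 else v_k.min_grad) * local_grad
    lo = (v_k.min_grad if local_grad >= 0 else v_k.max_grad) * local_grad
    if hi > v_j.max_grad:
        v_j.max_grad, v_j.max_child = hi, v_k
    if lo < v_j.min_grad:
        v_j.min_grad, v_j.min_child = lo, v_k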
§ ADDITIONAL ENTROPY SANITY CHECKS AND EXPERIMENTS
§.§ Sanity Checks with Synthetic Data
To build intuition about the entropy of a model's computation graph, we run two sanity check experiments. First, we evaluate the entropy of a pretrained BERT model as the sentence length increases.
Since larger sentence lengths result in more paths in the computation graph, we expect the entropy of the model to increase with sentence length.
Our findings confirm this (<Ref>).
Second, we expect that the entropy of a trained model ought to increase with the model complexity, as measured by hidden size.
In this experiment, we create a 4-featured artificial dataset with randomly generated values in the range [0, 1], labeled by whether the first feature is greater than 0.5.
We train multilayer perceptrons with varying hidden sizes on this dataset and find that the entropy of the input features increases with model complexity as expected (see <ref>).
§.§ Entropy vs Example Difficulty in Subject–Verb Agreement
Setup. We investigate the relationship between the entropy of the gradient graph of BERT and input sentences in the task of subject–verb number agreement.
In this task, we measure example difficulty by the number of attractors in a sentence (more attractors corresponds to greater difficulty). We sub-sample the dataset from <cit.> to 1000 sentences, balanced evenly by the number of attractors per sentence (ranging from 1 to 4 attractors).
Then, using the entropy semiring, we compute the entropy of BERT's gradient graph for each sentence.
Results.
Since sentences with more tokens will naturally have a higher entropy due to a larger computation graph (see <ref>), we control by sentence length.
We bin sentences of similar length (10–20, 20–30, 30–40, and 40–50 tokens) before analyzing the effect that the number of attractors has on entropy.
We present the results in <ref> and additionally run a Spearman correlation test between the entropy of the input representations (averaged across all tokens in the sentence) and the number of attractors.
For each group of sentence lengths, we find minimal correlation between number of attractors and entropy.
Therefore, there is little evidence to support a relationship between entropy and example difficulty as measured by number of attractors.
However, the number of attractors is not necessarily a strong indicator of example difficulty, and we recommend a more rigorous comparison of entropy against a stronger metric of example difficulty in future work.
§ SYNTHETIC DATASETS
§.§ Binary Datasets
We list in <Ref> descriptions and examples of all binary tasks constructed for our experiments.
§.§ BinCountOnes Datasets
We construct one family of multiclass classification datasets, BinCountOnes.
Parameterization. A task is parameterized by the number of classes C, between 2 and S, such that C divides S. For example, when S=6, C could be 3.
Description. Each example X is labeled between [0, C-1] by the following formula: label(X) = ⌈ Count1(X) / (S/C) ⌉ - 1, where Count1(X) is the number of 1s that appear in X.
Examples. See <Ref>.
arXiv:2307.02684v1 [eess.SP] (5 Jul 2023)

Massive Spatial Multiplexing: Vision, Foundations, and Challenges

Parisa Ramezani, Alva Kosasih, Amna Irshad, and Emil Björnson
===================================================================
In this article, we present our vision for how extremely large aperture arrays (ELAAs), equipped with hundreds or thousands of antennas, can play a major role in future 6G networks by enabling a remarkable increase in data rates through massive spatial multiplexing to both a single user and many simultaneous users. Specifically, with the quantum leap in the array aperture size, the users will be in the so-called radiative near-field region of the array, where previously negligible physical phenomena dominate the propagation conditions and give the channel matrices more favorable properties. This article presents the foundational properties of communication in the radiative near-field region and then exemplifies how these properties enable two unprecedented spatial multiplexing schemes: depth-domain multiplexing of multiple users and angular multiplexing of data streams to a single user. We also highlight research challenges and open problems that require further investigation.
§ INTRODUCTION
The data traffic is growing at an exponential pace in wireless communication systems. To cater to this development, we must constantly increase the total bit rate (in bits per second) that the systems provide. The upper rate limit is the Shannon capacity
C = B log_2 ( 1 + Pβ/(B N_0) ),
which is determined by the transmit power P, channel gain β ∈ [0,1], and noise power spectral density N_0; the ratio Pβ/(B N_0) is the signal-to-noise ratio (SNR).
From inspecting (<ref>), it appears that increasing the bandwidth B is the preferred way to enhance capacity.
The signal-to-noise ratio (SNR) can also be improved through beamforming, but the impact is smaller due to the logarithm.
Hence, the bandwidth has grown with every cellular network generation, with 100 MHz being typical in the first phase of 5G deployments.
Since spectrum is a scarce global resource, adding new bands is generally associated with moving to higher frequencies. For example, the second phase of 5G will use mmWave bands in the range 24-53 GHz <cit.>, while 6G is expected to also make use of the sub-THz band spanning from 100 to 300 GHz <cit.>.
These new bands feature gradually worse channel gain conditions, and hardware limitations prevent the transmit power from increasing proportionally to the increased bandwidth.
We can partially compensate for this by using more antennas for extremely narrow beamforming <cit.>.
However, even if we design advanced beamforming systems to reach the same received power levels in these new bands as in the current ones, the extra bandwidth gives diminishing returns since we must divide the signal power over it.
Fig. <ref> shows how the bit rate grows almost linearly with the bandwidth up to 1 GHz and then approaches an upper bound. We will reach this saturation level in the 5G era. At very short distances, we might reach saturation in 6G using around 10 GHz of spectrum in sub-THz bands. The main point is that the spectrum resource will eventually be depleted, and we need to look for new design dimensions to keep raising the capacity.
§.§ A Vision for Massive Spatial Multiplexing
How can we continue increasing the bit rate when the bandwidth resource is depleted? The only known alternative way is to transmit multiple parallel data streams as spatial layers <cit.>.
This approach requires antenna arrays at both the transmitter and receiver sides, and is known as multiple-input multiple-output (MIMO). It comes in two configurations.
In single-user MIMO (SU-MIMO), both the base station (BS) and user equipment (UE) are equipped with antenna arrays so that parallel spatial layers can be transmitted between them, using different angular dimensions. In multi-user MIMO (MU-MIMO), the BS communicates simultaneously with multiple UEs, each having one or a few antennas.
Both features are used in 5G <cit.>, but the number of spatial layers is limited to 8 (per polarization). Unfortunately, it is hard to reach even that many layers in practice. The point-to-point channels in SU-MIMO mode seldom provide more than 1-2 strong propagation paths that can carry different data layers. Moreover, the UEs typically have so similar channels in MU-MIMO mode that one must over-provision the BS with antennas to limit the interference (e.g., using 64 antennas to serve 8 UEs). This is called the massive MIMO regime <cit.> and 5G is built around it.
Does this mean that the spatial layering resource is also about to be depleted?
No, we envision that we still have the massive spatial multiplexing era ahead of us. By using physically larger arrays (at the BSs) and smaller wavelengths, future communication systems can operate in the radiative near-field where previously negligible physical phenomena become dominant.
In this article, we will show how these phenomena enable enhanced spatial multiplexing where each spatial layer has an unprecedentedly small focus area. The motivation goes back to the fundamental MIMO capacity expression <cit.>
C = max_{Q: tr(Q) ≤ 1} B log_2 det( I_K + (P/(B N_0)) · H Q H^H ),
where H∈ℂ^K × K is the channel matrix between K transmit antennas and K receive antennas, and Q is the covariance matrix of the transmitted signal. For a given channel matrix, it is well known that the maximum in (<ref>) is achieved by transmitting along the right singular vectors of H and dividing the power using waterfilling <cit.>.
However, the practical challenge is to design future systems so that the channel matrix has favorable properties. The MIMO capacity in (<ref>) is a function of H and is maximized, under the Frobenius norm constraint ‖H‖_F^2 = K^2, when all K eigenvalues of H^H H are equal to K (equivalently, all singular values of H equal √K). In this case, Q = (1/K) I_K is the optimal signal covariance matrix.
It follows that (<ref>) is upper bounded as
C ≤ B K log_2 ( 1 + SNR ),
where the bound is an increasing function of K, which we refer to as the multiplexing gain. Hence, if we can achieve channel matrices of the aforementioned kind, we can manage the anticipated traffic growth by increasing the number of spatial layers (and the number of transmit/receive antennas) instead of using more spectrum.
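As a quick numerical check of this bound, the following sketch (with illustrative values) evaluates the rate of the isotropic input on an ideal channel whose Gram matrix is H^H H = K I:

import numpy as np

def rate_isotropic(H, snr, B=1.0):
    """Achievable rate of y = Hx + n with the isotropic input Q = I/K,
    i.e., B * log2 det(I_K + (snr/K) H H^H)."""
    K = H.shape[0]
    sign, logdet = np.linalg.slogdet(np.eye(K) + (snr / K) * (H @ H.conj().T))
    return B * logdet / np.log(2)

K, snr = 4, 10.0
# Ideal channel: H = sqrt(K) * U with U unitary, so H^H H = K I and
# ||H||_F^2 = K^2, meeting the Frobenius norm constraint above.
U, _ = np.linalg.qr(np.random.randn(K, K) + 1j * np.random.randn(K, K))
H_ideal = np.sqrt(K) * U
print(np.isclose(rate_isotropic(H_ideal, snr), K * np.log2(1 + snr)))  # True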
In the remainder of this article, we will first outline the previously overlooked near-field propagation phenomena and then showcase how these enable us to approach (<ref>) when K is large, in both SU-MIMO and MU-MIMO scenarios. For brevity in the presentation, we will focus on line-of-sight (LOS) channels, for which the results are easiest to interpret.
We will refer to the envisioned BS technology as an extremely large aperture array (ELAA) <cit.>, which is a term highlighting that the array is large compared to the wavelength. An ELAA can be physically large or small, depending on the wavelength of operation.
Fig. <ref> gives a first schematic description of how an ELAA can both transmit multiple spatial layers per UE and multiplex UEs in the depth domain. The technical details will follow.
§ NEAR-FIELD PROPERTIES
A transmit antenna consists of point sources that emit spherical waves. A point source at the location s∈ℝ^3 generates an electric field at an observation point located at r∈ℝ^3, given by the tensor Green’s function as <cit.>
G(d) = -(jη e^{-j2πd/λ}/(2λd)) [ (I - d̂d̂^⊤) + (jλ/(2πd)) (I - 3d̂d̂^⊤) - (λ^2/(2πd)^2) (I - 3d̂d̂^⊤) ],
where d = ‖r - s‖ is the distance between the source and the observation point, η is the free space impedance, j is the imaginary number, λ is the wavelength, and d̂ = (r - s)/d is a unit-length vector denoting the direction of propagation. The power of the last two terms in (<ref>) decays rapidly with d; thus, they are only considered when characterizing the electric field in the reactive near-field region, very close to the antenna <cit.>. The reactive near-field starts from the surface of the antenna and continues to d = d_N. For electrically small antennas, it is common to assume d_N = λ/(2π), although d_N = λ has been experimentally verified to be a better choice <cit.>. The reactive near-field ends approximately at d_N = 0.62√(D^3/λ) for electrically large antennas, where D is the antenna's largest dimension <cit.>.
The radiative near-field begins after the reactive near-field and is traditionally said to cover distances between d_N and d_F from the antenna (i.e., d_N < d <d_F). The distance at which the radiative near-field ends and the far-field begins is called the Fraunhofer distance <cit.> or the Rayleigh distance <cit.> and is commonly taken as
d_F = 2D^2/λ.
This is the distance beyond which the radiated spherical waves can be viewed as approximately planar, in the sense that the phase difference between the waves from the antenna's center and edge is smaller than
π/8 at the observation point.
The Fraunhofer distance is typically very short, even for relatively large antenna apertures. As an example, for an antenna with the maximum length of D=2λ operating at f=3 GHz, the Fraunhofer distance is computed as d_F = 8λ = 8c/f = 0.8 m, where c = 3 · 10^8 m/s is the speed of light.
Hence, the radiative near-field region has traditionally been overlooked by the communication community, where the “near-field” term has been reserved for short-range systems that utilize inductive coupling in the reactive near-field. This perception will likely change in the 6G era.
§.§ Fraunhofer Array Distance
If current antenna arrays evolve into ELAAs, the radiative near-field gets a new meaning. A large array contains many antennas that each has a small aperture D, but the maximum dimension W of the array is much larger. This creates the new situation illustrated in Fig. <ref>. We have exchanged the roles of the transmitter and receiver for a simplified presentation but stress that the channel reciprocity implies that the channel properties are the same in both directions.
The point source emits a spherical wave, and when it reaches the antenna array, each antenna will observe a locally plane wave because the propagation distance d satisfies d > d_F. The spherical curvature is, however, noticeable when comparing the received signals at different antennas in the array. This property must be taken into account when modeling the channel vector.
The term Fraunhofer array distance has recently sprung up to quantify at what distances this phenomenon appears <cit.>. If we want the phase variations to be negligible (i.e., smaller than π/8) across an array with the maximum dimension of W, the Fraunhofer array distance becomes
d_FA = 2W^2/λ.
This is nothing but the Fraunhofer distance in (<ref>) evaluated using the array aperture W instead of the antenna aperture D. However, the consequences are very different because an array can compensate for phase variations and exploit them for new types of beamforming and multiplexing, as we will shortly discuss.
The Fraunhofer array distance for an ELAA operating at f=3 GHz with the aperture length W = 10 m is d_FA = 2 km. If the same ELAA operates at f = 30 GHz, the Fraunhofer array distance increases to d_FA = 20 km.
Hence, if we take the Fraunhofer array distance as the border between radiative near-field and far-field regions when communicating using an ELAA, almost all prospective users served by the BS would fall in its near-field region where the spherical wavefront of the emitted waves must be properly modeled.
§.§ What Array Gain can be Achieved?
Suppose the ELAA in Fig. <ref> has N antennas. The received signal y∈ℂ^N can then be expressed as
y = h x + n,
where h∈ℂ^N is the channel vector, x is the transmitted signal, and n∈ℂ^N is the receiver noise.
This looks like a conventional single-input multiple-output (SIMO) channel, but the channel vector is modeled differently.
In a conventional far-field LOS scenario, h = √(β) [1, …, 1]^⊤, where β is the channel gain.
In the radiative near-field, there can be both phase and amplitude variations in h; an exact model will be provided later.
In any case, the amplitude/phase variations can be compensated for using matched filtering:
(h^H/‖h‖) y = ‖h‖ x + (h^H/‖h‖) n.
This leads to an SNR proportional to ‖h‖^2, which coherently combines the total received signal power over all the receive antennas.
This capability is not possessed by a single large antenna with aperture W, which will basically perform the fixed analog combining [1, …, 1] h / √(N), so that the phase-shifts caused by the spherical wavefront make the received power average out between the antennas. In other words, spherical waves are detrimental to large antennas but not to antenna arrays.
We now consider that a single-antenna isotropic transmitter located at the location (0,0,z) sends a signal to an ELAA deployed in the xy plane centered at the origin.
In contrast to the one-dimensional ELAA illustrated in Fig. <ref>, we now consider a planar array that has N antennas in each row and M antennas in each column.
The channel vector h is now MN-dimensional. If the channel gain is β to an arbitrary antenna in the center of the ELAA, we would expect that ‖h‖^2 = β MN when communicating in the far-field. The factor MN is called the array gain. The situation is different in the radiative near-field, where we instead get
‖h‖^2 = β MN G_array,
where the normalized array gain is computed as <cit.>
G_array = ∑_{m=1}^{M} ∑_{n=1}^{N} | ∫_{𝒮_m,n} E(x,y) dx dy |^2 / ( MN A ∫_{𝒮} |E(x,y)|^2 dx dy ),
where A is the physical area of each antenna, 𝒮_m,n is the set of points in the xy-plane spanned by the antenna in the mth row and nth column of the ELAA, 𝒮 is the area spanned by the reference antenna located in the origin, and
E(x,y) = (E_0/√(4π)) · √(z(x^2 + z^2))/(x^2+y^2+z^2)^{5/4} · e^{-j(2π/λ)√(x^2+y^2+z^2)},
is the electric field measured at an arbitrary point (x,y,0), with E_0 being the electric density. The expression in (<ref>) is the total power received by the MN antennas divided by the total received power of MN reference antennas of the kind in the origin.
The same method can be used to compute the exact complex-valued channel coefficient h_m,n in h for the antenna in the mth row and nth column:
h_m,n = (1/(E_0 √A)) ∫_{𝒮_m,n} E(x,y) dx dy.
The expression in (<ref>) becomes G_array = 1 in the far-field (i.e., z ≫ 0), where the electric field in (<ref>) can be approximated as E_0/(√(4π) z).
However, it can be smaller in the radiative near-field, even if the receiving array is capable of compensating for the phase-shifts. When that happens, we cannot make efficient use of the entire array area.
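The gain in (<ref>) can also be evaluated numerically. Below is a sketch with illustrative values (the array is kept small so the double integrals stay fast); at distances z ≫ d_B it should return values close to 1.

import numpy as np
from scipy.integrate import dblquad

lam = 0.1                 # wavelength [m] at 3 GHz (illustrative values)
a = lam / 4               # antenna side, i.e., lam/4 x lam/4 elements
M, N = 6, 8               # small array so the integration stays fast

def E(x, y, z):
    """Electric field of the expression above, dropping the constant factor
    E_0/sqrt(4*pi), which cancels in the normalized gain."""
    return (np.sqrt(z * (x**2 + z**2)) / (x**2 + y**2 + z**2) ** 1.25
            * np.exp(-1j * 2 * np.pi / lam * np.sqrt(x**2 + y**2 + z**2)))

def cell_integral(xc, yc, z):
    """Integral of E over one lam/4 x lam/4 element centered at (xc, yc)."""
    re, _ = dblquad(lambda y, x: E(x, y, z).real, xc - a/2, xc + a/2,
                    yc - a/2, yc + a/2)
    im, _ = dblquad(lambda y, x: E(x, y, z).imag, xc - a/2, xc + a/2,
                    yc - a/2, yc + a/2)
    return re + 1j * im

def normalized_array_gain(z):
    """Numerical evaluation of the normalized gain for an M x N array
    centered at the origin of the xy-plane, transmitter at (0, 0, z)."""
    xs = (np.arange(N) - (N - 1) / 2) * a
    ys = (np.arange(M) - (M - 1) / 2) * a
    num = sum(abs(cell_integral(x, y, z)) ** 2 for x in xs for y in ys)
    ref, _ = dblquad(lambda y, x: abs(E(x, y, z)) ** 2, -a/2, a/2, -a/2, a/2)
    return num / (M * N * a**2 * ref)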
As indicated above, the Fraunhofer array distance d_FA specifies the distance beyond which a spherical wave is seen as planar by the ELAA. Does d_FA also determine the distance where the maximum normalized array gain is achievable?
No, it has been recently reported in <cit.> that the array gain in (<ref>) is very close to 1 in the majority of the radiative near-field. To shed light on this behavior, we will examine the planar ELAA in detail. The maximum dimension (i.e., aperture length) of the ELAA is obtained as
W = D√((M^2+N^2)/2),
where D denotes the diagonal of a single antenna. The Fraunhofer array distance for the ELAA is thus given by
d_FA = (W/D)^2 d_F = ((M^2+N^2)/2) d_F.
Fig. <ref> depicts the normalized array gain in (<ref>) for two ELAAs with different numbers of antennas. In particular, the solid blue curve shows the array gain for an ELAA with M = 30 rows and N=40 columns of antennas, while the dotted red curve represents the array gain of another ELAA with M= 300 and N = 400. Each antenna has the size λ/4 ×λ/4. From (<ref>), their Fraunhofer array distances are respectively computed as 1.25 · 10^3 d_F and 1.25 · 10^5 d_F, indicated by arrows on the graphs. We note that a logarithmic scale is used on the horizontal axis. It can be observed that the normalized antenna array gains converge to 1 much earlier than d_FA, which suggests that the Fraunhofer array distance is not a good indicator of when the maximum array gain is achievable. This is an encouraging result because it demonstrates that we can achieve the maximum array gain in the vast majority of the radiative near-field, if we just utilize a matched filter receiver that compensates for the phase variations caused by the spherical waves.
For a planar square ELAA, it was first shown in <cit.> that almost 96% of the maximum array gain is achieved for d ≥ d_B, where d_B is twice the largest dimension of the ELAA. This is called the Björnson distance and characterizes when the propagation distances from the transmitter to the different parts of the receiver are so large that they affect the array gain. For the setup considered above, with an unequal number of rows and columns, this distance becomes
d_B = 2W = 2D√((M^2+N^2)/2).
For the ELAAs investigated in Fig. <ref>, the Björnson distances are respectively obtained as 10^2 d_F and 10^3 d_F, shown by star markers on the curves. For both graphs, we have G_array≈ 0.96 at d = d_B, just as in the case with square ELAAs.
§.§ Beam Width and Beam Depth
When an ELAA transmitter uses matched filtering to focus the transmitted signal at a point (0,0,F), the maximum array gain is obtained at that point and smaller numbers at other points (x_r,y_r,z_r).
The novel aspect is that the focus area has both a narrow beam width (BW) and beam depth (BD), where the latter is a new concept for the near-field. These properties are sketched in Fig. <ref>.
The 3dB BW and BD can be derived by utilizing the so-called Fresnel approximation of the electric field in (<ref>),
E(x,y) ≈ (E_0/(√(4π) z)) e^{-j(2π/λ)(z + x^2/(2z) + y^2/(2z))},
which is tight in the part of the radiative near-field where the array gain is maximum (i.e., d_B < z < d_FA).
If the elements in the ELAA are small, matched filtering is equivalent to multiplying E(x,y) by e^{+j(2π/λ)(x^2/(2F) + y^2/(2F))}, which compensates for the phase-shift at the focal point.
To compute the 3dB BW, we consider a receiver located at (x_r, y_r, F), for which the normalized array gain can be shown to become <cit.>
G_array(x_r, y_r, F) ≈ sinc^2( N D x_r/(√2 λ F) ) sinc^2( M D y_r/(√2 λ F) ).
Since sinc^2(x) has its maximum value at x = 0 and sinc^2(±0.443) ≈ 0.5, the array gain in (<ref>) is half of its maximum value at
x_{r,3dB} ≈ ±0.443 √2 λF/(ND),
and the 3dB BW along the x-axis can be computed as
BW_3dB ≈ 0.886 √2 λF/(ND).
This is the same expression as in far-field communications; in fact, the angular 3dB BW is approximately BW_3 dB/F radians and is independent of the distance to the focal point. However, the BW in meters becomes much smaller when the array is large; thus, the beams are narrower in the radiative near-field.
Fig. <ref> illustrates the normalized array gain of an ELAA with M=300 and N=400, consisting of antennas of the size λ/4 ×λ/4. The array gain is shown as a function of the x coordinate of a receiver located at (x_r,0,F). We consider three focusing scenarios: F = d_B = 10^3 d_F, F = d_FA/25 = 5· 10^3 d_F, and F = d_FA/10 =1.25 · 10^4 d_F. It can be observed that the normalized array gain is approximately 0.5 when x_r = ± 4.43 d_F for F = d_B which can also be obtained from (<ref>).
The 3dB BW can accordingly be obtained as BW_3 dB = 8.86 d_F. This number increases to BW_3 dB = 44.3 d_F and BW_3 dB = 110.75 d_F for F = d_FA/25 and F = d_FA/10, respectively.
To characterize the BD, we instead compute the normalized array gain for a receiver placed at (0, 0, z_r), in the same direction as the focal point but at a different distance. The normalized array gain can then be shown to become <cit.>
G_array(0, 0, z_r) ≈ (8 z_eff/(MN d_F))^2 · ( C^2(M √(d_F/(8 z_eff))) + S^2(M √(d_F/(8 z_eff))) ) · ( C^2(N √(d_F/(8 z_eff))) + S^2(N √(d_F/(8 z_eff))) ),
where z_eff = z_r F/|z_r - F|, and C(x) = ∫_0^x cos(π t^2/2) dt and S(x) = ∫_0^x sin(π t^2/2) dt are the Fresnel integrals. The array gain expression in (<ref>) takes its maximum value at d_F/(8 z_eff) = 0, or equivalently z_r = F, which is the focal point.
To determine the 3dB BD, we want to identify at what distance the array gain reduces to 0.5. The array gain in (<ref>) is a function of a =d_F/(8z_eff) and we let a_3 dB denote the value for which the array gain becomes 0.5. This value must generally be obtained numerically.
We can then use the expression for z_eff to solve for z_r, which results in the two roots
z_{r,3dB} = d_F F/(d_F ± 8 a_3dB F).
This equation reveals that there is a depth interval
d_F F/(d_F + 8 a_3dB F) ≤ z_r ≤ d_F F/(d_F - 8 a_3dB F)
for which the normalized array gain is between 0.5 and 1. The length of this interval is the 3dB BD and it is obtained as
BD_3dB = { 16 a_3dB d_F F^2/(d_F^2 - 64 a_3dB^2 F^2),   F < d_F/(8 a_3dB),
         { ∞,                                           F ≥ d_F/(8 a_3dB),
where the second case occurs when the upper limit in (<ref>) becomes negative (i.e., non-existing).
According to (<ref>), the 3dB BD is finite when the ELAA focuses on a point closer than d_F/(8 a_3dB) and extends to infinity when focusing on more distant points. For a square array with M = N, the BD expression in (<ref>) simplifies to
BD_3dB = { 20 d_FA F^2/(d_FA^2 - 100 F^2),   F < d_FA/10,
         { ∞,                                F ≥ d_FA/10.
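As a sketch, the square-array formula translates directly into a small helper; the d_FA value below is illustrative (2 km corresponds to, e.g., a 10 m aperture at 3 GHz).

import numpy as np

def beam_depth_3db(F, d_FA):
    """3 dB beam depth of a square ELAA focused at distance F (formula
    above): finite only when focusing closer than the d_FA/10 limit."""
    if F >= d_FA / 10:
        return np.inf
    return 20 * d_FA * F**2 / (d_FA**2 - 100 * F**2)

d_FA = 2000.0  # e.g., a 10 m aperture at 3 GHz
for F in (50.0, 100.0, 150.0, d_FA / 10):
    print(f"F = {F:6.1f} m -> BD_3dB = {beam_depth_3db(F, d_FA):.1f} m")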
To showcase how a beam focused at a near-field point behaves, Fig. <ref> shows the heat map of the normalized gain when the ELAA (with the same configuration and number of antennas as before) focuses at (0,0,d_B). It can be observed that the beam energy radiated by the ELAA is concentrated in an area around the focal point with finite width and depth.
Therefore, the beamforming in the radiative near-field only provides an array gain in a small region surrounding the focal point.
This is an excellent new feature that enables
the massive spatial multiplexing paradigm that will be described next.
§ MASSIVE SPATIAL MULTIPLEXING
By utilizing the precise signal focusing ability that appears in the radiative near-field region, ELAAs can remarkably
improve the sum rate in future systems through massive spatial multiplexing in both the depth and angular domains. This section will describe how this constitutes a paradigm shift in serving many users simultaneously and allows for sending multiple beams to one single user under LOS conditions. We will cover these two cases in separate sections.
§.§ Massive Multiplexing in the Depth Domain
We have demonstrated how beamforming in the radiative near-field generates beams with finite depth. This depth perception in the near-field allows the array to act like a lens that focuses the signal on a specific location instead of in a specific direction, as in the far-field. This enables a new multiplexing technique where multiple users that are positioned in the same angular direction with respect to the ELAA but at different distances can be simultaneously served since the channel vectors will be vastly different. Multiplexing in the depth domain is a game changer for serving massive crowds of users, which is hard for the BS to manage with traditional far-field beamforming.
According to (<ref>), we can have distinct BD intervals when focusing at distances closer than d_F/(8a_3 dB). Specifically, for a square ELAA with M=N, we can utilize (<ref>) to identify F = ∞, F = d_FA/20, F = d_FA/40,F = d_FA/60, and F = d_FA/80 as five focal points with non-overlapping 3 dB BD intervals.
Fig. <ref> shows the normalized array gain when an ELAA with M=N=200 and D=λ/2 focuses on the mentioned five points using matched filtering. It can be clearly seen that the 3dB BD intervals are non-overlapping, which lets the ELAA transmit to the five users at the same time without causing much interference. Thus, the finite BD in the radiative near-field caters for the multiplexing of multiple users placed in the same angular direction but at different propagation distances.
The arrangement of the antennas in the array also plays a role in determining how many users can be spatially multiplexed. By taking a closer look at the BD expression in (<ref>) for F < d_F/(8 a_3dB), we notice that it is an increasing function of a_3dB, which means that the BD expands if we increase a_3dB.
For a fixed number of antennas, a_3dB is maximized when the array has a square shape with the same number of antennas in both dimensions, and it becomes smaller as the difference between the number of horizontal and vertical antennas increases. To illustrate this, we consider four different arrays with the same total number of antennas MN = 1024 but different numbers of columns and rows. Fig. <ref> shows the array gain expression from (<ref>), which we express as a function of x = d_F/(8 z_eff) as
G(x) = ( C^2(M√x) + S^2(M√x) ) ( C^2(N√x) + S^2(N√x) ) / (MN x)^2.
This function is shown for the four considered setups and, for the sake of illustration, it is plotted for both positive and negative values of x. The star markers in the figure indicate x = ± a_3 dB for the corresponding graph. It can be observed that |a_3 dB| reduces as the array shape changes from square to rectangle.
The above analysis indicates that a square array has a larger BD than a rectangular array having the same number of antennas. The smaller BD of a rectangular-shaped array allows for multiplexing more users in the depth domain.
This can be observed in Fig. <ref>, where the red dots show the served users. The transmitter has MN = 4·10^4 antennas. Fig. <ref>(a) shows the scenario with a square array having 200 antennas with D=λ/2 in each dimension, while the array considered in Fig. <ref>(b) is rectangular with M = 80 and N = 500 antennas. The rectangular array provides a smaller BD, which makes it possible to multiplex more users in the depth domain than in the square case (8 vs. 6 users in this example).
Apart from these novel channel properties, the transmitter/receiver processing remains the same as in conventional MU-MIMO systems. For example, if K single-antenna users are served and the channel vector to user k is denoted as h_k ∈ℂ^MN, then the received downlink signal y_k ∈ℂ can be expressed as
y_k = h_k^H ∑_{i=1}^{K} w_i x_i + n_k,
where x_i ∈ℂ is the data signal intended for user i, w_i ∈ℂ^MN is the corresponding precoding vector, and n_k ∈ℂ is the receiver noise at user k.
Stacking the scalars into vectors y = [y_1, …, y_K]^⊤, x = [x_1, …, x_K]^⊤, and n = [n_1, …, n_K]^⊤, and collecting the channels and precoders into H = [h_1, …, h_K] and W = [w_1, …, w_K], the combined received signals of all users can be expressed as
y = H^H W x + n.
The transmitter can then cancel the interference by using the zero-forcing precoding matrix
W = α H (H^H H)^{-1},
where α is a scaling factor that can be selected to satisfy a total power constraint. This precoding matrix is a scaled pseudo-inverse of the channel matrix H^H, but it only exists if the channel matrix has rank K.
In a far-field scenario where the K users are located in the same angular direction, all the rows in H^H will be identical (apart from scaling factors); thus, the zero-forcing precoder does not exist because the users cannot be resolved by the transmitter.
This is not the case in the radiative near-field, where the channel matrix is generated differently.
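The following sketch illustrates depth-domain zero-forcing with spherical-wave LOS channels; the geometry, wavelength, and array size are illustrative assumptions, and the channel model keeps only the exact per-antenna distances (free-space gain and phase).

import numpy as np

def nearfield_los_channel(ant_pos, user_pos, lam):
    """Spherical-wave LOS channel: per-antenna free-space gain and phase
    from the exact distances (a simplified near-field model)."""
    d = np.linalg.norm(ant_pos - user_pos, axis=1)
    return lam / (4 * np.pi * d) * np.exp(-1j * 2 * np.pi * d / lam)

def zero_forcing(H, total_power=1.0):
    """W = alpha * H (H^H H)^{-1}, scaled so that ||W||_F^2 = total_power."""
    W = H @ np.linalg.inv(H.conj().T @ H)
    return np.sqrt(total_power) / np.linalg.norm(W) * W

lam = 0.01                                   # 30 GHz
xs = (np.arange(200) - 99.5) * lam / 2       # 200-element ULA, lam/2 spacing
ant_pos = np.stack([xs, np.zeros_like(xs), np.zeros_like(xs)], axis=1)

# Two users in the same (broadside) direction but at different depths
H = np.stack([nearfield_los_channel(ant_pos, np.array([0.0, 0.0, z]), lam)
              for z in (2.0, 4.0)], axis=1)  # shape (200, 2)
W = zero_forcing(H)
print(np.round(np.abs(H.conj().T @ W), 6))   # diagonal: no inter-user leakage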
The improved spatial resolution that occurs in the radiative near-field is also useful in multipath propagation scenarios, where scattering clusters located at different distances can be resolved to make the channel vectors of different users more distinguishable than in the far-field.
§.§ Massive Multiplexing in the Angular Domain
We will now shift focus to SU-MIMO, for which the channel matrix H in free-space LOS scenarios is conventionally viewed to have rank 1 <cit.>. This is because the communication is assumed to occur in the far-field region, where the wavefronts are planar and the channel response is governed by a single dominant path. Fig. <ref> exemplifies conventional far-field MIMO-LOS communication. In this case, H^H has a single dominant eigenvalue containing the vast majority of the channel gain.
The corresponding dominant beamforming mode, shown in Fig. <ref>(a), corresponds to beamforming straight towards the receiver and placing it in the middle of a wide beam. However, there are always additional weak beamforming modes, such as the one shown in Fig. <ref>(b), that contains 1.5% of the channel gain.
In the radiative near-field, we can make use of the spherical wavefront characteristics to achieve ideal full-rank MIMO channels with equal eigenvalues also in LOS scenarios. Fig. <ref> sketches the signal transmission between two ULAs with K antennas. We can see that the receiver can distinguish between the signals transmitted from the different antennas based on their spherical curvatures. This enables the receiver to distinguish multiple signals that are transmitted in LOS, particularly if the antenna spacing Δ is properly selected.
We will now delve deeper into the free-space LOS communication between two ULAs. The arrays are in the broadside direction and separated by a distance d. They are identically arranged with K antennas.
The antenna spacing is Δ, as illustrated in Fig. <ref> <cit.>.
The distance between the kth transmit and mth receive antenna, depending on the antenna spacing, is
d_m,k=√(d^2+( m-k)^2 Δ^2),
where m,k ∈{1,…, K}. Therefore, we can model the MIMO channel for a LOS scenario as <cit.>
H = [ √(β_{1,1}) e^{-j(2π/λ)(d_{1,1}-d)}   ⋯   √(β_{1,K}) e^{-j(2π/λ)(d_{1,K}-d)} ;
      ⋮                                    ⋱   ⋮ ;
      √(β_{K,1}) e^{-j(2π/λ)(d_{K,1}-d)}   ⋯   √(β_{K,K}) e^{-j(2π/λ)(d_{K,K}-d)} ],
where the phase-shifts are determined by the distance d_m,k between different transmit and receive antenna pairs and the reference distance d. The channel gain between the transmit and receive antennas is
β_m,k = G_t G_r ( λ/4 π d_m,k)^2,
where G_t and G_r are the transmit and receive antenna gains, respectively. In the case of isotropic antennas, they are both equal to one.
Each array has an aperture length of D = (K-1)Δ.
In typical propagation scenarios, beyond the Björnson distance, the channel gain is nearly the same between all antenna locations:
β_m,k≈β = (λ/4 π d)^2.
Furthermore, we can approximate the distance between the transmit and receive antennas using the first-order Taylor approximation as d_{m,k} ≈ d + δ_{m,k}/(2d), where δ_{m,k} = (m-k)^2 Δ^2. Based on this Fresnel approximation, the MIMO channel matrix in (<ref>) can be rewritten as
H̃ ≈ √(β) [ e^{-jπδ_{1,1}/(dλ)}   ⋯   e^{-jπδ_{1,K}/(dλ)} ;
            ⋮                     ⋱   ⋮ ;
            e^{-jπδ_{K,1}/(dλ)}   ⋯   e^{-jπδ_{K,K}/(dλ)} ].
It follows that ‖H̃‖_F^2 = βK^2 is the sum of the eigenvalues of H̃^H H̃. As pointed out at the beginning of this article, it is preferable if all the eigenvalues (i.e., the squared singular values of H̃) are equal. We can achieve this by tuning the antenna spacing Δ <cit.>.
An arbitrary off-diagonal (k,ℓ)th element of H̃^H H̃, where ℓ ∈ {1,…,K} and ℓ ≠ k, has the magnitude
β | ∑_{m=1}^{K} e^{j(2π/(dλ)) m (ℓ-k) Δ^2} | = β | (1 - e^{j(2π/(λd)) K (ℓ-k) Δ^2}) / (1 - e^{j(2π/(λd)) (ℓ-k) Δ^2}) |,
where the last equality is obtained by using the classical sum of a geometric series <cit.>.
If the antenna spacing is chosen such that KΔ^2/(λd) = 1,
then the magnitude of all off-diagonal entries is 0. In this case, all K eigenvalues are equal to βK since H̃^H H̃ = βK I, and the capacity is maximized. Therefore, we obtain the capacity-maximizing antenna spacing as <cit.>
Δ = √(λd/K).
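A short numerical verification (with illustrative link parameters) that this spacing orthogonalizes the Fresnel-approximated channel:

import numpy as np

K, d, lam = 8, 50.0, 0.01                    # e.g., a 30 GHz link at 50 m
delta = np.sqrt(lam * d / K)                 # capacity-maximizing spacing
beta = (lam / (4 * np.pi * d)) ** 2          # common free-space channel gain

m, k = np.meshgrid(np.arange(K), np.arange(K), indexing="ij")
H = np.sqrt(beta) * np.exp(-1j * np.pi * (m - k) ** 2 * delta**2 / (d * lam))

eig = np.linalg.eigvalsh(H.conj().T @ H)     # eigenvalues of H^H H
print(np.allclose(eig, beta * K, rtol=1e-9, atol=0))  # True: all equal beta*K

With these K equal eigenvalues, the parallel streams each see the full gain βK, recovering the capacity expression below.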
The capacity with the optimal spacing achieves the upper bound in (<ref>), i.e.,
C = B K log_2 ( 1 + Pβ/(B N_0) ).
Notice that the beamforming gain inside the SNR term is canceled since we distribute the power equally across the K streams.
The capacity can be enhanced by increasing the number of antennas K, which simultaneously increases the multiplexing gain and beamforming gain.
The area of the array is A_array = ( √(λd/K)(K-1) + W̃ )^2, where W̃ ≥ 0 is the width of an antenna. For a fixed array area A_array and a given distance d between the transmit and receive arrays, the multiplexing gain K is a function of the wavelength λ, and it can be tightly approximated by A_array^2/(λd)^2 when λ is small. More specifically, the multiplexing gain grows as the wavelength shrinks, implying a preference for using higher carrier frequencies to achieve a higher multiplexing gain.
We evaluate the capacity expression in (<ref>) with respect to the carrier frequency in Fig. <ref>. We select a fixed array area A_array = 0.16 m^2, distance d = 50 m, W̃ = λ/2, P/N_0 = 218.02 dB, and B = 0.03 f_c, where f_c denotes the carrier frequency. First, we consider isotropic antennas at both the transmitter and receiver, i.e., G_t = G_r = 1. In that case, β is proportional to λ^2. The capacity with respect to the carrier frequency is illustrated by the red curve in Fig. <ref>. We observe that the capacity increases for carrier frequencies below 5·10^2 GHz. This can be attributed to the multiplexing gain outweighing the SNR reduction. However, as we keep increasing the carrier frequency, the capacity reaches a plateau (as in Fig. <ref>). In contrast, if we consider directive antennas (i.e., by setting G_t = G_r = 1/λ), we can keep the SNR constant as the frequency changes. The black curve in Fig. <ref> depicts the corresponding capacity, which improves significantly compared to the case of isotropic antennas. For instance, at a carrier frequency of 300 GHz, the capacity improves by approximately one order of magnitude.
The analysis above reveals several intriguing features that are at the heart of massive spatial multiplexing:
* Increasing the carrier frequency for the purpose of solely utilizing greater bandwidth does not guarantee capacity enhancement, as the SNR diminishes with shorter wavelengths. This finding is consistent with the observation in Fig. <ref>, where the capacity (measured in bit/s) reaches its upper limit as the bandwidth increases.
* The beamforming gain does not enhance the capacity as it gets nullified by the equal distribution of power. Therefore, equation (<ref>) disregards the presence of the beamforming gain term.
* The multiplexing gain consistently enhances the capacity, which serves as a driving force behind our proposition of a new paradigm. This paradigm centers on the multiplexing gain achieved through the utilization of additional antennas. The progressive trend towards higher carrier frequencies in future wireless communications supports it, since it enables the incorporation of more antennas within the same array area.
§ RESEARCH CHALLENGES AND FUTURE DIRECTIONS
In this section, we explore future research challenges and directions related to massive spatial multiplexing. We categorize them into four categories as follows.
§.§ Signal Processing Tailored to the Near-Field
The spherical wavefronts in near-field communications introduce the depth domain as a novel degree of freedom that enables massive spatial multiplexing. The channel matrix H must be estimated to achieve these gains, which is associated with a high complexity if traditional non-parametric methods (e.g., least squares) are used. A potential solution is to tailor the signal processing algorithms to the near-field scenario by utilizing channel parametrizations that capture the depth. There are major challenges related to this as well; for example, the channel estimation and beamforming codebook design must account for the additional depth grid <cit.>, otherwise the ability to distinguish signals arriving from similar angular directions is lost.
There is also a risk that limited phase-synchronization, manufacturing errors, and the presence of non-linear phase characteristics will further limit the signal processing performance. Therefore, it is required to study the redesign of the signal processing algorithms in detail.
Deep learning (DL) techniques might pave the way to overcome these challenges. DL techniques can be utilized to design a near-field codebook that accounts for non-linear phase shifts by leveraging the inherent non-linearity of neural networks <cit.>. To overcome the channel estimation challenge, a deep learning-based solution is reported in <cit.>. The paper formulates the estimation as a compressed sensing problem and then uses a model-driven learning approach to solve it. In <cit.>, a DL approach is proposed to perform frequency-aware beamforming that prevents the beamforming gain degradation due to the wideband effect. Despite these recent advances, and unlike the well-studied DL applications in far-field communications, further investigations are necessary to fully understand and optimize the application of DL techniques in near-field communications.
When effective learning-based solutions have been developed, it is also desirable to extract the key properties for improved modeling.
§.§ Practical Hardware Models
For theoretical studies to be applicable to practical scenarios, accurate and realistic hardware models must be used, which take into account the intrinsic properties of ELAAs. For instance, the mutual coupling effects that have traditionally been neglected by the communication community (e.g., motivated by placing antennas half a wavelength apart) must be included in the system models to obtain physically accurate results.
Coupling-aware communication models are needed to accurately characterize the coupling between adjacent antennas, particularly, in arrays with densely-packed antenna elements. Though mutual coupling has been known to degrade the capacity in MIMO systems, it has been recently shown that the effective utilization of this property can also lead to high array gains in particular directions <cit.>.
§.§ Approaching the Spatial Degrees of Freedom
The Nyquist-Shannon sampling theorem determines how many orthogonal channels an array of a given size can distinguish between.
This number is called the spatial degrees of freedom <cit.>. For a planar array with a given area A_array, the degrees of freedom is <cit.>
Degrees of freedom = πA_array/λ^2.
The principle is that π channels can be distinguished per area component of size λ^2. The number in (<ref>) is much larger than the number of signals that were spatially multiplexed in the SU-MIMO case, because the receiver only “fills” a small fraction of the transmitter's field of view.
In other words, we have only scratched on the surfaces of massive spatial multiplexing; we can combine the SU-MIMO and MU-MIMO cases to achieve even more effective multiplexing.
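As a rough numerical illustration of the scale involved (our own, borrowing the array area A_array = 0.16 m^2 from the earlier capacity example): at f_c = 300 GHz, i.e., λ = 1 mm, equation (<ref>) gives

Degrees of freedom = π· 0.16/(10^-3)^2 ≈ 5 · 10^5,

which is orders of magnitude beyond the multiplexing gains realized in the SU-MIMO example.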
One open research question is which array geometry and antenna distribution within the aperture can make the most effective use of the spatial degrees of freedom. In addition to uniform planar arrays, as considered in this article, another classical configuration is uniform circular arrays (UCAs) <cit.>. A high-rank LOS channel matrix can also be achieved when two UCAs are placed in the radiative near-field of each other. An interesting side-effect is that the “beams” take the shape of orbital angular momentum (OAM) modes, which emphasizes how different array geometries lead to different channel parametrizations and MIMO matrices.
The choice of antenna spacing is another open question: are there any tangible gains from using shorter spacings than λ/2?
§.§ Massive Spatial Multiplexing with RIS
The near-field characteristics also influence communication systems that involve a reconfigurable intelligent surface (RIS). These surfaces were conventionally viewed as reflectors that can create a virtual rank-one LOS path between a transmitter-receiver pair <cit.>. Such a scenario is illustrated in Fig. <ref>, where the direct path between the transmitter and receiver is blocked but signals can be reflected off the RIS. If both the transmitter and receiver are in the radiative near-field, the RIS can create a high-rank channel between the transmitter and receiver <cit.>.
The theory described earlier in this paper can be utilized to compute the 3dB BW and BD of the beam produced by the RIS <cit.>. Consequently, for an extremely large aperture RIS with the physical characteristics as an ELAA (i.e., the same number of elements and element spacing in the RIS as the number of antennas and antenna spacing in the ELAA), the 3 dB BW and BD are equal to those of the ELAA. Hence, all the previously mentioned research challenges can also be explored in the context of RIS-aided communications. Moreover, phase-dependent amplitude variations of the elements should be considered when designing the RIS configuration.
§ CONCLUSIONS
The relentless pursuit for elevated data rates in forthcoming wireless technologies remains insatiable. However, simply increasing the bandwidth is inadequate, as data rates eventually reach a plateau. The focus should shift towards transmitting multiple parallel data streams as spatial layers in both SU-MIMO and MU-MIMO scenarios. In the former case, large arrays are employed on both the transmitter and receiver sides enabling angular-domain multiplexing of data streams to a single user, whereas in the latter scenario, an ELAA is utilized at the BS allowing for depth-domain multiplexing of multiple users.
In particular, in the depth domain, simultaneous multiple user transmissions in the radiative near-field region are enabled using the signal focusing property. In the angular domain, spherical wavefront properties are leveraged to achieve an ideal full-rank MIMO channel with equal eigenvalues in LOS scenarios. Our numerical examples have showcased a consistent improvement in data rate by utilizing the benefits of the multiplexing gain. This allows us to assert that we are now entering a new era marked by the advent of massive spatial multiplexing. Finally, we have posed some open issues and future directions, such as the redesign of the signal processing aspects possibly using deep learning, the need for practical hardware considerations, the exploration of different array shapes to approach the spatial degrees of freedom limit, and the investigation of massive spatial multiplexing in RIS-aided communication.
|
http://arxiv.org/abs/2307.03230v1
|
20230706180004
|
Rotation measure variations in Galactic Centre pulsars
|
[
"F. Abbate",
"A. Noutsos",
"G. Desvignes",
"R. S. Wharton",
"P. Torne",
"M. Kramer",
"R. P. Eatough",
"R. Karuppusamy",
"K. Liu",
"L. Shao",
"J. Wongphechauxsorn"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.GA"
] |
Rotation measure variations in Galactic Centre pulsars

F. Abbate, A. Noutsos, G. Desvignes, R. S. Wharton, P. Torne, M. Kramer, R. P. Eatough, R. Karuppusamy, K. Liu, L. Shao, J. Wongphechauxsorn

August 1, 2023
=======================================================
We report the results of an observational campaign using the Effelsberg 100-m telescope of the pulsars J1746-2849, J1746-2850, J1746-2856 and J1745-2912 located in the Central Molecular Zone (CMZ) close to the Galactic centre in order to study rotation measure (RM) variations. We report for the first time the RM value of PSR J1746-2850 to be -12234 ± 181 rad m^-2. This pulsar shows significant variations of RM of 300-400 rad m^-2 over the course of months to years that suggest a strongly magnetized environment. The structure function analysis of the RM of PSR J1746-2850 revealed a steep power-law index of 1.87_-0.3^+0.4 comparable to the value expected for isotropic turbulence.
This pulsar also showed large dispersion measure (DM) variation of ∼ 50 pc cm^-3 in an event lasting a few months where the RM increased by ∼ 200 rad m^-2. The large difference in RM between PSR J1746-2849 and PSR J1746-2850 despite the small angular separation reveals the presence of a magnetic field of at least 70 μG in the CMZ and can explain the lack of polarization in the radio images of the region. These results contribute to our understanding of the magnetic field in the CMZ and show similarities between the RM behaviours of these pulsars and some fast radio bursts (FRBs).
Galaxy: centre – magnetic fields – pulsars: general
§ INTRODUCTION
The interstellar medium (ISM) surrounding the supermassive black hole Sagittarius A^* (Sgr A^*) at the centre of the Milky Way is quite extreme when compared to the Galactic disk. The region surrounding Sgr A^* with a radius of ∼ 150 pc, called the central molecular zone (CMZ), has densities <cit.> and cosmic-ray energy density <cit.> 2-3 orders of magnitude larger than the rest of the Galaxy. The situation is similar for the magnetic fields. While direct measurements of the magnetic fields in the region do not agree on a single value <cit.>, arguments regarding the non-thermal filaments (NTFs), cosmic-ray density and turbulent energy suggest that interstellar magnetic fields range between 100 μG and 1 mG <cit.>. This is about two orders of magnitude higher than the magnetic field in the Galactic disk <cit.>.
One technique that can be used to probe the magnetic field is through the rotation measure (RM) and dispersion measure (DM) of the pulsars located within this region. There are 6 known pulsars located within the CMZ, PSR J1745-2912, PSR J1746-2856, PSR J1746-2849, PSR J1746-2850, PSR J1745-2910 <cit.> and the Galactic centre magnetar PSR J1745-2900 <cit.>. These pulsars have among the highest DMs and RMs of any known pulsar <cit.>, which reinforces the idea that high densities and strong magnetic fields permeate the CMZ region.
The RM of PSR J1745-2900, located just 3 arcseconds away from Sgr A^*, has been closely monitored throughout the years since its discovery and shows very strong variations, with an increase of ∼ 3500 rad m^-2, from -66,960 to -63,402 rad m^-2, in the span of 3 years <cit.>. This difference has been attributed to the rapidly changing magnetic environment close to Sgr A^*. Such strong variability is rare for pulsars and is only seen in another class of objects, the Fast Radio Bursts (FRBs, e.g. <cit.>).
In this case, the RM variability over a few months or years ranges from ∼ 50 rad m^-2 for FRB 20180916B <cit.> to a few tens of thousands of rad m^-2 for FRB 121102 <cit.> and FRB 20190520B <cit.>. The similar variations over comparable timescales suggest that the surrounding environments might have similar levels of density and magnetic fields.
In this paper we show the results of a 3 year long observational campaign on PSR J1746-2849, PSR J1746-2850, PSR J1746-2856 and PSR J1745-2912 in order to determine the variability of RM over time. Studying the extent of this variability could help probe the properties of the magnetic field in the CMZ compared to the surrounding of Sgr A^*. Additionally, the RM variability of these pulsars will provide important clues for the interpretation of variability seen in FRBs.
§ OBSERVATIONS
The observations were carried out with the Effelsberg 100-m radio telescope of the Max Planck Institute for Radio Astronomy using the S45 broadband receiver. This receiver has two 2 GHz bands (between 4 and 8 GHz) that are fed into the PSRIX2 backend, consisting of two CASPER[<https://casper.berkeley.edu/>] ROACH2 boards. The signal is digitized creating a total of 4096 frequency channels sampled every 131 μs and recorded in full-Stokes.
PSR J1745-2910 was not detected in our first observation probably due to its high variability and/or dimming over time.
It is important to note that this pulsar has only been observed at the Green Bank Telescope at a frequency of 2 GHz. The uncertainty of the position, given by the beam of ∼ 5 arcmin, is larger than the beam at the Effelsberg 100-m telescope at 6 GHz (∼ 2 arcmin) meaning that the pulsar might be outside of the Effelsberg 100-m beam.
For this reason we focused on PSR J1746-2849, PSR J1746-2850, PSR J1746-2856 and PSR J1745-2912. The positions of the four observed pulsars are shown in Fig. <ref>. The observations were carried out from March 2019 to August 2022. The spacing of the observations is not uniform, and the minimum temporal difference between the observations is 10 d.
PSR J1746-2850 was the subject of a recent re-brightening following a few years of non-detections <cit.>. For this reason this pulsar was the target of a larger number of observations compared to the other pulsars.
Phase-binned imaging observations with the VLA have determined that the position of
PSR J1746-2850 is at R.A. (J2000) 17^h46^m06^s.959 ± 0.002 and DEC (J2000) -28^∘51'04”.54 ± 0.08
(Wharton et al., in prep), which is about 25 arcseconds away from the timing position
presented in <cit.>. Throughout the paper we will use the imaging position.
§ DATA ANALYSIS
The data were analyzed using standard [<http://psrchive.sourceforge.net>] <cit.> packages and calibrated in flux and polarization. The DMs were measured using [<https://bitbucket.org/psrsoft/tempo2/>] <cit.> by dividing each observation in 16 frequency channels and extracting time of arrivals (ToAs) for each channel. Temporal variations of the scattering properties of the pulsars could mimic variations of DM. We tried looking for evidence of variations of the scattering using the software <cit.> but no significant variation was detected.
The RMs were measured using a two-step approach similar to the one used by <cit.> and <cit.>. First we looked for the value of RM that maximises the linear polarization of the pulsed signal. We searched in a range of RMs between -50,000 and 50,000 rad m^-2 with a step size of 10 rad m^-2. After a preliminary value is found this way, we corrected the data for this value of RM, divided the data in 16 frequency channels and performed a fit of the position angle (PA) across the frequency band according to the formula:
Ψ(λ) = RM λ^2 + Ψ_0
where λ is the wavelength and Ψ_0 is the value of the PA as it was emitted by the pulsar. The measure of the value and error of the PA for each frequency channel follows the prescriptions described in <cit.>. Following the prescriptions described in <cit.>, we discard the detection if the significance of the peak in linear polarization is smaller than 4, and we multiply the error by two if the significance is between 4 and 8.
Examples, details of the fitting procedure and comparison with a method based on RM synthesis <cit.> for each of the pulsars are shown in appendix <ref>.
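A minimal sketch of this two-step procedure (our own simplification, assuming per-channel Stokes Q, U of the pulsed signal and channel wavelengths lam as numpy arrays; the 180-degree PA ambiguity, per-channel S/N cuts and error scaling of the full pipeline are omitted):

import numpy as np

def grid_search_rm(Q, U, lam, rms=np.arange(-50_000.0, 50_000.0, 10.0)):
    # Step 1: coarse RM that maximises the band-integrated linear polarization
    P = Q + 1j * U                                   # complex linear polarization
    L = [np.abs(np.sum(P * np.exp(-2j * rm * lam**2))) for rm in rms]
    return rms[np.argmax(L)]

def refine_rm(Q, U, lam, rm0):
    # Step 2: de-rotate by rm0, then least-squares fit of PA = RM*lam^2 + Psi_0
    P = (Q + 1j * U) * np.exp(-2j * rm0 * lam**2)    # only residual rotation remains
    psi = 0.5 * np.angle(P)                          # position angle per channel
    slope, psi0 = np.polyfit(lam**2, psi, 1)         # slope = residual RM
    return rm0 + slope, psi0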
§ RESULTS
The RM-corrected polarization profiles of PSR J1746-2849, PSR J1746-2850, PSR J1746-2856 and PSR J1745-2912 are shown in Fig. <ref>.
The values of the DMs and RMs at each observation are shown in Fig. <ref> and in Table <ref>.
For comparison purposes we show also the values of RM reported in <cit.> for PSR J1746-2849, PSR J1746-2856 and PSR J1745-2912. Those values reported in Table <ref> and shown as a dashed red line with an orange 1σ region in Fig. <ref> are compatible with the variability reported in this manuscript. This implies that, over 7 years, the average RM values for these pulsars have remained roughly constant and that the variability occurs on shorter timescales.
For PSR J1746-2850 this is the first published measurement of RM. The median value of RM is ∼ -12234 rad m^-2, with an uncertainty of 181 rad m^-2 estimated using the median absolute deviation. This makes it the pulsar with the third highest absolute value of RM after PSR J1745-2900 and PSR J1746-2856. Because of its implied high magnetic field, flat spectrum and transient behaviour, PSR J1746-2850 has been compared to radio-loud magnetars <cit.>. Thanks to the polarization observations we can calculate the linear polarization percentage and compare it with the very high (∼ 80-100 percent) linear polarization of magnetars <cit.>. The linear polarization of this pulsar is found to be 43 percent. This challenges the classification of PSR J1746-2850 as a magnetar but reinforces the idea that it might be a transitional object between a rotation-powered pulsar and a magnetar.
As seen in Fig. <ref>, PSR J1746-2850 is located close to the Arc NTF in a region dominated by the Sickle HII region G0.18-0.04 <cit.>. The young age of the pulsar derived from the timing properties (T≃ 13 kyr, ) suggests that it might have originated from the nearby Quintuplet or Arches clusters <cit.> and might be still located in the vicinity of the gas rich HII regions visible in Fig. <ref>. The high absolute value of RM is compatible with this position given the very high electron densities of these clouds, ∼ 300-400 <cit.>, and the large expected magnetic field in the region, between 100 μG and 1 mG <cit.>.
The behaviours of DM and RM are expected to be different for sources close to the Galactic centre. While the RM is primarily affected by local screens in regions with high magnetic fields, most of the DM arises from the collective effect of the electron along the entire line of sight <cit.>.
We test if there are significant variations in the RMs and DMs of the pulsars or if they are compatible with a single value. We perform a χ^2 test with the null-hypothesis that the RMs and DMs of the pulsars are compatible with a single value using a threshold p-value of 0.001. For J1746-2849 we obtain a p-value of 0.7 for the RMs and 0.5 for the DMs, for J1746-2850 we find <10^-5 for the RMs and <10^-5 for the DMs, for J1746-2856 we find 0.5 for the RMs and 0.07 for the DMs, while for J1745-2912 we find 0.3 for the RMs and 0.4 for the DMs. Therefore, only pulsar J1746-2850 shows significant variations in RM and DM.
The variations in RM can occur even on timescales of ∼ 10 days, the smallest timescales at which we observe. This suggests that the variability could occur even at smaller timescales. The variations in RM are smaller than what has been observed for PSR J1745-2900 over similar timescales <cit.>. This implies that the magnetic field is significantly stronger in the vicinity of Sgr A^* where PSR J1745-2900 is located compared to the location of the pulsars observed in this work.
An interesting event, highlighted in light blue in Fig. <ref>, occurred around MJD 59400 and lasted ∼ 100 days for PSR J1746-2850, during which the RM was measured to be monotonically increasing by ∼ 200 rad m^-2 while the DM decreased by ∼ 50 pc cm^-3.
Similar secular variations of RM over a period of a few months have already been observed in the Galactic centre magnetar <cit.> and in FRB 121102 <cit.> and FRB 20180916B <cit.>. However, in the case of PSR J1746-2850, we also observe a simultaneous decrease, on average, of the DM. The large and simultaneous RM and DM variations suggest that the event arises from a local source possibly located within the CMZ. Under the assumption that the contribution of the gas along the entire line of sight to the pulsar remains constant, we can estimate the average component of the magnetic field parallel to the line of sight within the local source with the equation:
B_∥ ∼ 1.23 (RM_end - RM_start)/(DM_end - DM_start) ∼ -5 μG,
where RM_ start and DM_ start are the RM and DM at the beginning of the event and RM_ end and DM_ end are the RM and DM at the end.
This value of magnetic field is significantly lower than the value of 0.1-1 mG expected for the CMZ <cit.>. The low value of the projected magnetic field together with the strong variability of electron density, as suggested by the DM, suggests that either the observed variability is happening in a gas-rich region with low magnetic field or that the magnetic field is mostly perpendicular to the line of sight.
§.§ RM and DM structure functions for PSR J1746-2850
In the case of PSR J1746-2850 we can check if the variations are caused by a turbulent medium by looking at the second-order structure function <cit.>. The RM SF is defined as:
D_ RM (τ)= ⟨ [ RM(t+τ)- RM(t)]^2⟩,
where t is the time of each observation, τ is the lag between two different observations and the angle brackets mean the average between all pairs with the same temporal difference.
The time difference between the observations can be considered as a proxy of the angular separation between the position of the pulsar at different observations.
The pulsar could have proper motion of several hundreds of km s^-1 <cit.> while
the velocity of the gas in the Sickle HII region is measured to be ∼ 30-80 km s^-1 <cit.>. This means that the line of sight to the pulsar will traverse different parts of gas with different density and magnetic fields at different times.
In the case of a turbulent medium, we expect the SF to follow either a single power-law or a broken power-law <cit.>.
To measure the RM SF we first estimate the square difference of all RM pairs from different observations. We then sort the pairs as a function of the time difference between the observations and group the 231 pairs into 11 bins, each containing 21 pairs.
The value and the error of the SF for each bin is measured by taking the median and the standard error of the median with a Monte Carlo method.
We report the RM SF in top panel of Fig. <ref>.
We further verify the statistical significance of the SF by simulating the effects of the white noise. We repeat the same Monte Carlo extraction as described before but we use the same value of RM for each pulsar. The resulting SF represents the noise level given by the errors on the RM and is subtracted from the true SF before doing the fit.
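The estimator just described can be sketched as follows (an illustration under our own simplifications: medians and their Monte Carlo errors per lag bin, plus a noise floor obtained by replacing the RMs with a constant value):

import numpy as np

def rm_structure_function(t, rm, rm_err, n_bins=11, n_mc=1000, seed=0):
    rng = np.random.default_rng(seed)
    i, j = np.triu_indices(len(t), k=1)            # all observation pairs
    lag = np.abs(t[j] - t[i])
    order = np.argsort(lag)
    i, j, lag = i[order], j[order], lag[order]     # pairs sorted by time lag

    lag_mid, sf, sf_err, noise = [], [], [], []
    for b in np.array_split(np.arange(len(lag)), n_bins):
        rm_mc = rng.normal(rm, rm_err, size=(n_mc, len(rm)))      # perturb by errors
        med = np.median((rm_mc[:, j[b]] - rm_mc[:, i[b]])**2, axis=1)
        sf.append(np.median(med))
        sf_err.append(np.std(med))                 # Monte Carlo error of the median
        nz = rng.normal(0.0, rm_err, size=(n_mc, len(rm)))        # constant-RM case
        noise.append(np.median(np.median((nz[:, j[b]] - nz[:, i[b]])**2, axis=1)))
        lag_mid.append(np.median(lag[b]))
    return [np.asarray(x) for x in (lag_mid, sf, sf_err, noise)]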
We perform a fit to the data with a single power-law.
The fits are based on the Markov Chain Monte Carlo code <cit.>. The free parameters are the power-law index and the value of the SF at a separation of 1 day. For the power-law indices we used a flat prior between -2 and 4.
The 68 percent confidence interval of the power-law index is 1.87_-0.3^+0.4 and for the value at 1 day it is 0.23_-0.01^+1.44 rad^2 m^-4.
The best-fitting power-law is shown in the top panel of Fig. <ref> as the dashed green line. In the bins corresponding to the smallest temporal differences, the best fitting model goes below the noise limit meaning that these points are dominated by the noise.
The power-law index is compatible with what is expected for isotropic Kolmogorov turbulence (which predicts a value of 5/3, e.g. <cit.>). The strong variability, not seen in other pulsars along the Galactic disk, suggests that the turbulence is located in the CMZ close to the pulsar.
The power-law index is larger than, but still compatible with given the large errors, the value of 1.23 ± 0.13 measured for PSR J1745-2900 <cit.>.
We repeat the same process for the DM and show the results in the bottom panel of Fig. <ref>. If we attempt to perform a fit to the observed values we get a 68 percent confidence interval for the power-law index of -0.01± 0.34 and for the value at 1 day it is 540_-500^+1600 pc^2 cm^-6. We notice that the observed values of the SF are close to the sensitivity limit for a large number of bins. This suggests the result is highly impacted by the errors on the measurements. The real underlying SF could be smaller and the apparent flat behaviour could be a consequence of the large errors. However, a similar flat behaviour has also been observed in PSR J1745-2900 <cit.>.
§.§ Implications for magnetic fields in the Radio Arc
Figure <ref> shows the position of the pulsars located within the Radio Arc with respect to the main radio continuum features. We show the position of the Sickle HII region G0.18-0.04 <cit.> and the Radio Arc Bubble <cit.> that is suggested to be driven by the outflow of the nearby Quintuplet cluster.
PSR J1746-2849 and PSR J1746-2850 are located close to each other with an angular separation of 1.9 arcmin, corresponding to a projected physical separation of 4.5 pc at the distance of the Galactic Centre. Both pulsars are located close to the Sickle HII region near the border of the Radio Arc Bubble. Despite the small angular difference, the DM difference between the two pulsars is ∼ 400 pc cm^-3 while the RM difference is ∼ 22000 rad m^-2. The higher value of DM of PSR J1746-2849 indicates either a position in a denser environment or that it is located at a larger distance from us. The electron density in the Sickle was determined to be ∼ 220 cm^-3 from radio continuum observations <cit.> and ∼ 300-400 cm^-3 from spectral lines observations <cit.>. The electron density is related to the DM by the equation <cit.>:
DM= ∫_0^d n_e dl [pc cm^-3],
where d is the distance from the observer to the pulsar expressed in pc and n_e is the electron density expressed in cm^-3. This implies that, within the densest regions, a difference in DM of ∼ 400 pc cm^-3 can occur over only 1 or 2 pc, since (400 pc cm^-3)/(220-400 cm^-3) ≈ 1-1.8 pc.
Given the close proximity in projection of these pulsars it is possible that there is a common Faraday screen in front of them. Unfortunately, we do not detect significant variability in RM for J1746-2849. Even if RM variations of the same magnitude as for J1746-2850 are present, they would not be detectable due to the large uncertainties.
As an order of magnitude estimate, we can probe the component of the magnetic field parallel to the line of sight by comparing the DM and RM of these two pulsars. In the assumption that the foreground contribution to DM and RM is the same, the differences would be caused entirely by the extra ionized gas between the position of PSR J1746-2850 and PSR J1746-2849. The parallel component of the magnetic field in this region needed to generate this difference in RM is:
B_∥ ∼ 1.23 (RM_J1746-2849 - RM_J1746-2850)/(DM_J1746-2849 - DM_J1746-2850) ∼ 70 μG.
Given that this is only the average value of the parallel component of the magnetic field averaged over the line of sight, the magnetic field will likely be stronger. This would support the idea that the magnetic field within the Arc NTF is of the order of 0.1 - 1 mG <cit.>.
We can compare the values of RM of the pulsars and previous studies of the Radio Arc. Polarization observations of the entire region have revealed that the polarization is concentrated in a region marked by the dotted box shown in Fig. <ref>. In this box the RMs vary from ∼ -500 to ∼ -5500 rad m^-2 <cit.>. The lack of detectable polarization in the region close to the pulsars could be explained by variations of magnetic field strength and orientation within the observing telescope beam, different polarization angles averaged over the beam <cit.> or by thermal emission from large electron densities in HII regions <cit.>. These depolarization effects do not affect the pulsar observations as much. The reason is that, when observing a pulsar, we fold the time series of the observation at the specific rotational period of the pulsar. By doing so we add coherently only the signal from the pulsar, while any other source in the field of view is added incoherently and contributes to a uniform background. By subtracting the background we are able to isolate the signal and polarization of the pulsar and remove the other sources that could lead to depolarization. This effect could be exploited when looking for new pulsars in such dense environments as they are likely to be the only polarized sources in the field of view.
The values of the RMs observed in pulsars are significantly larger than the ones observed in the imaging observations of the Radio Arc. In particular, the very large RM difference between PSR J1746-2849 and PSR J1746-2850 and the RM variability of more than 300 rad m^-2 over a few months suggest that the magnetic environment surrounding the pulsars is more complicated and variable than the region where polarization is visible in radio imaging.
§.§ Comparison with FRBs
RM variations as large as the ones in our sample are rare within the known pulsars, with a few exceptions like PSR J1745-2900 that shows variations of ∼ 3000 rad m^-2 <cit.> and some binary systems around gas-shedding stars <cit.>. The only other group of objects that show similar RM variations are FRBs, e.g. FRB 20201124A with variations of ∼ 500 rad m^-2 <cit.>, FRB 121102 with variations of ∼ 30000 rad m^-2 <cit.> and FRB 20190520B with variations of ∼ 40000 rad m^-2 <cit.>.
One interesting case is FRB 20180916B <cit.> that shows a secular increase in RM of ∼ 50 rad m^-2 over a period of 8 months. We see a similar secular variation for PSR J1746-2850 in the event occurring around MJD 59400.
However, the RM SF of FRB 20180916B has a flat power-law index of ∼ 0.3 <cit.> compared to the steep value of 1.87_-0.3^+0.4 for PSR J1746-2850. Furthermore, FRB 20180916B is located in a region of the host galaxy with low star formation rate and low Hα luminosity <cit.> which is quite different from the bright and active CMZ.
§ CONCLUSIONS
We observed four of the six pulsars closest to the Galactic centre, PSR J1746-2849, PSR J1746-2850, PSR J1746-2856 and PSR J1745-2912, with the Effelsberg 100-m radio telescope in order to study the DM and RM variations over time. This complements high cadence observations of PSR J1745-2900 that were presented in <cit.>. We report for the first time the value of RM of PSR J1746-2850 of -12234 ± 181 rad m^-2, the third highest absolute value of RM of any known pulsar. Over the time of the observations, this pulsar shows large variations of RM of around 300-400 rad m^-2. These are among the strongest variations in RM known for pulsars but are still smaller than those of PSR J1745-2900 <cit.> and show similarities with the variations observed in some FRBs <cit.>.
The DM variations are smaller and in most cases compatible with a constant value except for PSR J1746-2850. For this pulsar, we observe an event occurring around MJD 59400 where the DM decreases systematically by about 50 pc cm^-3 while the RM increases simultaneously by about 200 rad m^-2. While DM variations of this magnitude are rare in pulsars, they have been observed in FRB 20190520B <cit.> and, to a lower extent, also in FRB 121102 <cit.>.
PSR J1746-2850 is the only pulsar for which we have enough observations to analyse the DM and RM SFs.
The RM SF shows a growing trend with longer time separations with a power-law index of 1.87_-0.3^+0.4, a value that is compatible with the expected value of 5/3 in the case of three-dimensional isotropic turbulence. If real, this turbulence is likely to be related to the Sickle HII region or to the Radio Arc Bubble. The DM SF, on the other hand, shows a flat behaviour but the errors are too large for this to be conclusive.
The very large difference of RM between PSR J1746-2849 and PSR J1746-2850 of ∼ 22000 rad m^-2, despite being located only 1.9 arcmin apart, suggests the presence of a very large magnetic field in the region that could be stronger than ∼ 70 μG. This, combined with the large RM variations of over ∼ 300 rad m^-2, could give an explanation to the depolarization in the imaging observations of the Sickle HII region.
Future observations of the pulsars close to the Galactic centre would allow us to determine the RM and DM SFs and to study in more detail the behaviour of PSR J1746-2850. Observations with the MeerKAT telescope at the proposed S-band (1.75-3.55 GHz) would allow a better determination of the DM and RM variations thanks to the higher elevation in the sky and the higher sensitivity. With the advent of this new facility we expect a potential discovery of more pulsars in the region that could provide precious information of the magneto-ionic properties close to the Galactic centre even in areas where continuum observations do not show polarization.
§ ACKNOWLEDGEMENTS
This work was based on observations with the 100-m telescope of the Max-Planck-Institut für Radioastronomie at Effelsberg. RPE is funded by the Chinese Academy of Sciences President's International Fellowship Initiative, Grant No. 2021FSM0004. FA, AN, GD, RW, PT, MK, RPE, RK, KL, and LS acknowledge the financial support by the European Research Council for the ERC Synergy grant BlackHoleCam under contract no. 610058. This work is supported by the Max-Planck Society as part of the "LEGACY" collaboration on low-frequency gravitational wave astronomy.
R.S.W. was supported by an appointment to the NASA Postdoctoral Program at the Jet Propulsion Laboratory, administered by Oak Ridge Associated Universities under contract with NASA.
Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
§ DATA AVAILABILITY
The data underlying this article will be shared upon reasonable request to the corresponding author.
§ DETAILS OF RM FITS
In Section <ref> we described the procedure to determine the RM for each observation. Here we will provide details of the procedure and compare the results with those obtained from an RM synthesis code.
For each observation we perform flux and polarization calibration and we excise the frequency channels and time integrations that are affected by radio frequency interference (RFI) using packages from . After that we sum all of the integrations together, reduce the number of channels by a factor of 4, and create profiles of the pulsars with 1024 frequency channels maintaining all of the Stokes parameters. The number of channels was chosen to be 1024 in order to maximise the signal-to-noise ratio (S/N) while keeping the intra-channel PA swing to a minimum. The maximum value of RM we can search using this number of channels is determined by the value that causes the PA between neighbouring channels (measured at the lowest frequency) to differ by more than π radians and is ∼ 300,000 rad m^-2. This value is similar to the maximum RM value that can be searched in the data using RM synthesis <cit.>.
The next step is to select the phase window where the pulsar signal is present. We also select a phase window where the signal is not present to estimate the properties of the baseline and remove it from the pulsar signal. This way we can make sure that only the signal that originates from the pulsar is analysed. For this reason, and because the pulsar magnetosphere is not expected to contribute to Faraday rotation (e.g. <cit.>), we expect the pulsars to be Faraday-thin sources that present a single peak in the RM spectrum.
The first method we used to measure the RM is to look for a peak in linear polarization in the RM spectrum of the pulsar. Given the range of RMs for the pulsars measured by <cit.>, we decided to search for RMs in the range -50,000 to 50,000 rad m^-2 using steps of 10 rad m^-2. Examples of these spectra for each of the pulsars are shown in the left side of Figure <ref>.
In order to check if the peak is significant enough to claim a detection, we perform a fit of the cumulative distribution function of the linear polarization percentage with a Gaussian function. We estimate the significance as the distance of the polarization peak from the mean value of the noise in number of standard deviations.
We looked for an appropriate value of the threshold level by applying the same technique to noise taken from the off-pulse as input. In every case that we tested, the tallest peak in the RM spectrum for the noise has S/N < 3.5. Therefore we decided to use the value of S/N > 4 for the threshold[ Given that the noise levels are slightly different for each observation and pulsar it is not straightforward to convert between this and the threshold levels suggested by <cit.>.]. The resulting S/N are reported in the Table <ref> for the cases where the S/N is above the threshold.
A similar limit was used by <cit.> in their analysis of RM of pulsars. For observations with 4 < S/N < 8 we multiplied the error of the RM detection by a factor of 2 in order to consider the effects of lower S/N, as suggested by <cit.>.
Once an initial value of RM for the observation has been found, we apply the correction to the profile, reduce the number of channels to 16 in order to increase the S/N in each channel, and proceed to perform a PA fit using formula <ref>. There are two reasons for correcting the observation for the RM value that maximises the linear polarization before attempting the fit: we increase the chance that the linear polarization in each channel is high enough to allow an accurate determination of the PA, and we avoid the risk of multiple phase jumps occurring. A phase jump occurs when the difference in PA between the first and the last frequency channels is larger than 180 degrees. The value of RM necessary for a phase jump to occur along the entire frequency band is 750 rad m^-2. This means that if the difference between the value of RM used to correct the profile and the real value is less than 750 rad m^-2, we don't expect any phase jump to occur.
For the PA fit we followed the prescription described in <cit.>. If the S/N ratio of the linear polarization in a channel is less than 2, the channel is ignored and the resulting PA is excluded from the fit.
Since the PA is periodic every 180 degrees, we allow for possible jumps of 180 degrees in the fit and show two realizations of the PA when necessary. The results are shown in the third column of Table <ref> with the errors multiplied by two if the S/N of the polarization peak is between 4 and 8.
While similar methods have been used multiple times in the past <cit.>, this one has some limitations. The cropping of channels with low S/N leads to a loss of part of the pulsar signal. Furthermore, the detection threshold and the error determination can be affected, among other things, by the spectral index of the pulsars <cit.>. To test whether these limitations affect the reliability of the results, we repeated the analysis using the technique of RM synthesis <cit.>. We used the code [https://gitlab.mpifr-bonn.mpg.de/nporayko/RMcalc] <cit.> that implements RM synthesis. The results together with the S/N of the detection are shown in the fourth and fifth columns of Table <ref>. Similarly to above, we chose a threshold level of 4 to confirm a detection. There is one case for pulsar J1746-2849 where RM synthesis was able to detect the RM while our method was not, and there is one case of the opposite happening for pulsar J1745-2912. For the cases where both methods returned a significant detection, the two values are compatible with one another at the 1σ level for most of them and at the 2σ level for the rest. The comparison is shown in Figure <ref>.
The successful comparison suggests that, for the observations presented in the paper, the method used is as accurate as RM synthesis.
|
http://arxiv.org/abs/2307.01814v1
|
20230704164107
|
Market Making of Options via Reinforcement Learning
|
[
"Zhou Fang",
"Haiqing Xu"
] |
q-fin.TR
|
[
"q-fin.TR"
] |
Market Making of Options via Reinforcement Learning

Zhou Fang^1 and Haiqing Xu^2

^1Department of Mathematics, The University of Texas at Austin
^2Department of Economics, The University of Texas at Austin

August 1, 2023
===================================================
Market making of options with different maturities and strikes is a challenging problem due to its highly dimensional nature. In this paper, we propose a novel approach that combines a stochastic policy and reinforcement learning-inspired techniques to determine the optimal policy for posting bid-ask spreads for an options market maker who trades options with different maturities and strikes. When the arrival intensity of market orders decreases linearly in the quoted spreads, the optimal policy is normally distributed.
§ INTRODUCTION
The option market maker plays a crucial role in the financial market, serving as a dealer that buys and sells options. Their role is to provide liquidity to the market by offering prices for options and by taking positions to manage the risk associated with their trades. However, the task of setting optimal prices for options with different strikes and maturities is highly non-trivial. In this paper, we address the challenging problem of market-making for options with multiple strikes and maturities.
Studies of market making date back to <cit.> and <cit.> in the 1980s. The idea in <cit.> was revived in <cit.>, which inspired a large body of subsequent literature on market making. Notably, Cartea and Jaimungal <cit.> and <cit.> have made influential contributions in this area. Other notable works include Baldacci et al. <cit.>, Bergault and Breton <cit.>, and Stoikov and Williams <cit.>.
Reinforcement learning has been applied to market making in several works. For instance, Spooner and Lamba <cit.>, Sadighian and Jaimungal <cit.>, Beysolow et al. <cit.>, and Ganesh and Borkar <cit.> have explored the use of reinforcement learning techniques in this context. However, these works often focus on engineering-oriented approaches and make simplifying assumptions in their models.
The use of stochastic policy is inspired by the reinforcement learning literature, and its first application in financial mathematics literature is found in Wang and Zhou <cit.> and <cit.> for portfolio management problems. Stochastic policies offer improved robustness and strike a balance between exploration and exploitation. Building upon these previous works, Jia et al. <cit.> and <cit.> propose a unified policy evaluation and policy gradient framework that extends the prior research.
In this paper, we propose a reinforcement learning framework for market making in call options. We assume market order arrivals follow a Poisson process, with intensity inversely related to the bid-ask spreads. We demonstrate that the optimal policy under this setting is Gaussian and showcase numerical results, including bid-ask spreads and return distributions.
This paper is organized as follows. Section 2 introduces our model settings, while Section 3 presents the derivation of the Hamilton-Jacobi-Bellman equation and the optimal policy. In Section 4, we prove a policy improvement theorem that serves as a cornerstone for our policy iteration algorithm. In Section 5, we structure two algorithms: the policy iteration algorithm and the actor-critic algorithm. The first algorithm is based on the policy improvement theorem and involves modeling the value function using a neural network. The second algorithm models both the policy and value function using separate neural networks, providing better control over the bid-ask spreads. In Section 6, we conduct numerical analysis of these algorithms, shedding light on the policy for posting bid-ask spreads, and identifying areas for improvement in our current model, as well as suggesting future research directions.
§ MODEL SETTINGS
The market maker plays a crucial role in the options market by providing bid-ask quotes for options spanning different strike prices and maturity dates. In this context, we use the following notations: ϵ_t^a(i,j) and ϵ_t^b(i,j) denote the spreads for the ask and bid quotes, respectively, posted on an option with a strike price of K_i and a maturity date of T_j at time t, where i = 1, 2, ..., m, and j = 1, 2, ..., n. The mid-price of this option is represented by 𝒪^i,j(t, S, σ(i,j)), where S represents the mid-price of the underlying asset, and σ(i,j) corresponds to the implied volatility associated with the option. To simplify our model, we assume that the implied volatility remains constant throughout the entire trading period, which is typically a short time frame.
In this model, we describe the arrival of market orders (MOs) for option 𝒪^i,j as Poisson processes with intensities λ_t^a(i,j) and λ_t^b(i,j). These intensities are determined by the spreads ϵ_t^a(i,j) and ϵ_t^b(i,j). Additionally, we use N_t^+(i,j) and N_t^-(i,j) to represent the counting processes for buy and sell MOs of option 𝒪^i,j, respectively.
Therefore the inventory for option 𝒪^i,j is
dq_t^i,j = dN_t^+(i,j) - dN_t^-(i, j)
To capture the relationship between the intensity and the spreads of options, we make use of the following stylized functions in this paper:
λ_t^+ (i,j) = A_i,j - B_i,jϵ_t^b(i,j)
λ_t^-(i,j) = A_i,j - B_i,jϵ_t^a(i,j)
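Over a small interval dt, the Poisson counts driven by these intensities can be simulated with a Bernoulli approximation; a minimal sketch for a single option, with placeholder values for A, B and the posted spreads:

import numpy as np

rng = np.random.default_rng(1)
A, B, dt = 50.0, 4.0, 0.01                   # placeholder intensity parameters
eps_b, eps_a = 5.0, 6.0                      # posted bid/ask spreads (placeholders)

lam_plus = max(A - B * eps_b, 0.0)           # intensity of executions at the bid (q -> q+1)
lam_minus = max(A - B * eps_a, 0.0)          # intensity of executions at the ask (q -> q-1)
dN_plus = rng.random() < lam_plus * dt       # one step of the counting process N^+
dN_minus = rng.random() < lam_minus * dt     # one step of the counting process N^-
dq = int(dN_plus) - int(dN_minus)            # inventory change dq_t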
Assume that the mid-price of the underlying asset follows the arithmetic dynamics
dS_t = μ dt + σ dW_t.
To hedge an existing position in options, we need to hold a certain quantity of shares of the underlying asset. This quantity represents the number of shares required for effective hedging:
Δ_t = ∑_i ∑_j ∂_S 𝒪^i,j(t, S, σ(i,j)) q_t^i,j
Consequently, the cash process is as follows:
dC_t = ∑_i ∑_j [ ϵ_t^b(i,j) dN_t^+(i,j) + ϵ_t^a(i,j) dN_t^-(i,j) - 𝒪^i,j dq_t^i,j]
+ S_t d(Δ_t) + d⟨Δ, S ⟩_t
Then the wealth has the following dynamics:
dX_t = dC_t - d( Δ_t S_t) + ∑_i ∑_j d(𝒪^i,j q_t^i,j)
= ∑_i ∑_j [ ϵ_t^b(i,j) dN_t^+(i,j) + ϵ_t^a(i,j)dN_t^-(i,j) - 𝒪^i,jdq_t^i,j] - Δ_t dS_t + ∑_i ∑_j 𝒪^i,jdq_t^i,j + q_t^i,j d𝒪^i,j
=∑_i ∑_j [ ϵ_t^b(i,j) dN_t^+(i,j) + ϵ_t^a(i,j)dN_t^-(i,j) + ( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_t^i,j dt ]
Due to the high dimensionality of the bid-ask spreads for all options, finding optimal bid-ask spreads mathematically is extremely challenging. As a result, we employ reinforcement learning as an alternative approach. Let ϵ_t = (ϵ_t^a(i,j), ϵ_t^b(i,j)) represent the vector of bid-ask spreads, q_t = (q_t^i,j) denote the inventory, and π(ϵ_t | t, q) represent the probability density of selecting bid-ask spreads ϵ_t at time t given the current inventory q.
§ MARKET MAKING FRAMEWORK FOR OPTIONS
§.§ Hamilton-Jacobi-Bellman Equation
Consider a policy π, and let q_t^π denote the inventory process under this policy. The initial condition at time t is given by q_t^π = q. We define the value function under policy π as follows
V^π(t, q) = 𝔼[ ∫_t^T ∫_ϵ_uπ(ϵ_u | u, q_u^π) ∑_i ∑_j [ϵ_u^b(i,j) dN_u^+(i,j) + ϵ_u^a(i,j)dN_u^-(i,j)] d ϵ_u
+ ∫_t^T ∫_ϵ_uπ(ϵ_u | u, q_u^π) ( ∑_i∑_j( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_u^i,j - γlogπ(ϵ_u | u, q_u^π) ) dϵ_u du | q_t^π = q
]
then the value function under the optimal policy is
V(t, q) = πmax𝔼[ ∫_t^T ∫_ϵ_uπ(ϵ_u | u, q_u^π) ∑_i ∑_j [ϵ_u^b(i,j) dN_u^+(i,j) + ϵ_u^a(i,j)dN_u^-(i,j)] d ϵ_u
+ ∫_t^T ∫_ϵ_uπ(ϵ_u | u, q_u^π) ( ∑_i∑_j( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_u^i,j - γlogπ(ϵ_u | u, q_u^π) ) dϵ_u du | q_t^π = q
]
For the function V(t, q), let's consider the following derivation as Δ t approaches zero
V(t, q) = πmax𝔼[ ∫_t^t + Δ t∫_ϵ_uπ(ϵ_u | u, q_u^π) ∑_i ∑_j [ϵ_u^b(i,j) dN_u^+(i,j) + ϵ_u^a(i,j)dN_u^-(i,j)] d ϵ_u
+ ∫_t^t + Δ t∫_ϵ_uπ(ϵ_u | u, q_u^π) ( ∑_i∑_j( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_u^i,j - γlogπ(ϵ_u | u, q_u^π) ) dϵ_u du
+ V(t + Δ t, q_t^π + Δ q_t^π) | q_t^π = q ]
= πmax𝔼[ ∫_ϵ_tπ(ϵ_t | t, q_t^π) ∑_i ∑_j [ϵ_t^b(i,j) dN_t^+(i,j) + ϵ_t^a(i,j)dN_t^-(i,j)] d ϵ_t Δ t
+ ∫_ϵ_tπ(ϵ_t | t, q_t^π) ( ∑_i∑_j( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_t^i,j - γlogπ(ϵ_t | t, q_t^π) ) dϵ_t Δ t
+ V(t + Δ t, q_t^π + Δ q_t^π) | q_t^π = q
]
= πmax{∫_ϵ_tπ(ϵ_t | t, q_t^π) ∑_i ∑_j [ϵ_t^b(i,j) λ_t^+(i,j) + ϵ_t^a(i,j)λ_t^-(i,j)] d ϵ_t Δ t
+ ∑_i∑_j( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_t^i,j - γ∫_ϵ_tπ(ϵ_t | t, q_t^π)logπ(ϵ_t | t, q_t^π) dϵ_t Δ t
+ 𝔼[ V(t + Δ t, q_t^π + Δ q_t^π) | q_t^π = q
] }
the general Ito formula for V(t + Δ t, q_t + Δq_t) = V(t + Δ t, q_t^1,1 + Δ q_t^1, 1, ..., q_t^m,n + Δ q_t^m,n) can be expressed as follows
V(t + Δ t, q_t^1,1 + Δ q_t^1, 1, ..., q_t^m,n + Δ q_t^m,n)
= V(t+Δ t, q_t^1,1, ..., q_t^m,n) ∏_(i,j)(1 - dN_t^+(i,j))(1 - dN_t^-(i,j))
+ ∑_i ∑_j [V(t+Δ t, q_t^1,1, ..., q_t^i,j + 1, ..., q_t^m,n) dN_t^+(i,j) + V(t+Δ t, q_t^1,1, ..., q_t^i,j - 1, ..., q_t^m,n) dN_t^-(i,j) ]
= [V(t, q_t^1,1, ..., q_t^m,n) + ∂_t V(t, q_t^1,1, ..., q_t^m,n) Δ t ] ∏_(i,j)(1 - dN_t^+(i,j))(1 - dN_t^-(i,j))
+ ∑_i ∑_j [ V(t, q_t^1,1, ..., q_t^i,j + 1, ..., q_t^m,n) dN_t^+(i,j) + V(t, q_t^1,1, ..., q_t^i,j - 1, ..., q_t^m,n) dN_t^-(i,j) ]
Please note that the aforementioned Ito formula is applicable under the assumption that q_t is known, thereby restricting its validity to cases where ϵ_t has already been determined. In order to compute the conditional expectation 𝔼[V(t + Δ t, q_t^π + Δ q_t^π) | q_t^π = q], it is necessary to consider the average across all possible outcomes. Consequently, we can proceed with the subsequent derivation
𝔼[V(t + Δ t, q_t^π + Δ q_t^π) | q_t^π = q ]
= V(t, q) -∫_ϵ_tπ(ϵ_t | t, q )∑_i ∑_j ( λ_t^+(i,j) + λ_t^-(i,j))V(t, q) dϵ_t Δ t + ∂_t V(t, q) Δ t
+ ∫_ϵ_tπ(ϵ_t | t, q) ∑_i ∑_j [ V(t, q + Δ_i,j) λ_t^+(i,j) + V(t, q - Δ_i,j) λ_t^-(i,j) ] d ϵ_t Δ t
Thus, the Hamilton-Jacobi-Bellman equation can be expressed as follows
πmax{∫_ϵ_tπ(ϵ_t | t, q)∑_i ∑_j λ_t^+(i,j) [ϵ_t^b - V(t, q) + V(t, q + Δ_i,j)] + λ_t^-(i,j) [ϵ_t^a - V(t, q) + V(t, q - Δ_i,j) ] dϵ_t
- γ∫_ϵ_tπ(ϵ_t | t, q) logπ(ϵ_t | t, q) } + ∑_i ∑_j (∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q^i,j + ∂_t V(t, q) = 0
§.§ Optimal Policy Derivation
To obtain the maximizer π^*, we employ the calculus of variations. For the maximizer π^*, the following condition holds
0 = ∫_ϵ_tδπ∑_i ∑_j [ λ_t^+(i,j) [ V(t, q + Δ_i,j) - V(t, q) + ϵ_t^b(i,j) ]
+ λ_t^-(i,j) [ V(t, q - Δ_i,j) - V(t, q) + ϵ_t^a(i,j) ] ] dϵ_t - γ∫_ϵ_tπ^* δπ/π^*dϵ_t -γ∫_ϵ_tδπlogπ^* d ϵ_t
Since π represents a probability density, we can infer the following
∫_ϵ_tδπ dϵ_t = 0
then the above equation becomes
0 = ∫_ϵ_tδπ( ∑_i ∑_j [ λ_t^+(i,j) [ V(t, q + Δ_i,j) - V(t, q) + ϵ_t^b(i,j) ]
+ λ_t^-(i,j) [ V(t, q - Δ_i,j) - V(t, q) + ϵ_t^a(i,j) ] ] - γlogπ^*(ϵ_t | t, q) ) dϵ_t
hence, the optimal policy must make the quantity within the aforementioned bracket constant in ϵ_t. As a result, it must satisfy the following equation for some constant C
C = ∑_i ∑_j [ λ_t^+(i,j) [ V(t, q + Δ_i,j) - V(t, q) + ϵ_t^b(i,j) ] + λ_t^-(i,j) [ V(t, q - Δ_i,j) - V(t, q) + ϵ_t^a(i,j) ] ]
- γlogπ^* (ϵ_t | t, q)
This yields the following derivation of the optimal policy π^*:
π^* (ϵ_t | t, q) ∝exp{1/γ∑_i ∑_j λ_t^±(i,j) [ V(t, q ±Δ_i,j) - V(t, q) + ϵ_t^a,b(i,j) ] }
= ∏_i,jexp{1/γ(A_i,j - B_i,jϵ_t^a,b(i,j)) [ V(t, q ±Δ_i,j) - V(t, q) + ϵ_t^a,b(i,j) ] }
∝∏_i,jexp{ -B_i,j/γϵ_t^a,b(i,j)^2 + 1/γ[ A_i,j + B_i,j(V(t, q) - V(t, q ±Δ_i,j) ) ] ϵ_t^a,b(i,j) }
∝∏_i,jexp{ -B_i,j/γ[ ϵ_t^a,b(i,j) - A_i,j/2B_i,j - 1/2( V(t, q) - V(t, q ±Δ_i,j) ) ]^2 }
∝∏_i,j𝒩( ϵ_t^a,b(i,j) | A_i,j/2B_i,j + 1/2( V(t, q) - V(t, q ±Δ_i,j) ) , γ/ 2 B_i,j)
Therefore, it can be observed that the optimal policy follows a multi-dimensional Gaussian distribution. For the sake of notation simplification, let's denote
μ(t, q, π) = (A_1,1/2B_1,1 + 1/2( V^π(t, q) - V^π(t, q ±Δ_1,1) ), ..., A_m,n/2B_m,n + 1/2( V^π(t, q) - V^π(t, q ±Δ_m,n) ))
Σ = diag( γ/(2B_1,1), γ/(2B_1,1), …, γ/(2B_m,n), γ/(2B_m,n) )
The optimal policy is
π^* ∼𝒩(·| μ(t, q, π^*), Σ)
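Given any estimate of the value function, sampling from this Gaussian policy is straightforward; a minimal sketch (V, A, B, γ and the inventory array are placeholders of our own; V is a callable on (t, q)):

import numpy as np

def sample_spreads(t, q, V, A, B, gamma, rng):
    # A, B: (m, n) arrays of intensity parameters; q: (m, n) integer inventory.
    m, n = A.shape
    eps_b, eps_a = np.empty((m, n)), np.empty((m, n))
    for i in range(m):
        for j in range(n):
            for eps, sign in ((eps_b, +1), (eps_a, -1)):   # bid: q+1, ask: q-1
                dq = np.zeros_like(q)
                dq[i, j] = sign
                mean = A[i, j] / (2 * B[i, j]) + 0.5 * (V(t, q) - V(t, q + dq))
                eps[i, j] = rng.normal(mean, np.sqrt(gamma / (2 * B[i, j])))
    return eps_b, eps_a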
§ POLICY IMPROVEMENT THEOREM
Given any π, let the new policy π_new be
π_new∼𝒩 ( ·| μ(t, q, π), Σ )
then the value function
V^π(t, q) ≤ V^π_new(t, q)
Let q_t^π_new represent the inventory process under policy π_new. Considering the initial condition q_t^π_new = q, we can apply the Ito formula to obtain the following expression
V^π(t, q )
= 𝔼[ V^π(s, q_s^π_new) + ∫_t^s ∫_ϵ_u V^π(u, q_u^π_new) π_new(ϵ_u | u, q_u^π_new)∑_i ∑_j [dN_u^+(i,j) + dN_u^-(i,j)] d ϵ_u
- ∫_t^s ∫_ϵ_uπ_new(ϵ_u | u, q_u^π_new) ∑_i ∑_j [ V^π(u, q_u^π_new + Δ_i,j) dN_u^+(i,j) + V^π(u, q_u^π_new - Δ_i,j) dN_u^-(i,j) ] dϵ_u
- ∫_t^s ∂_t V(u, q_u^π_new) du | q_t^π_new = q ]
which becomes
V^π(t, q)
= 𝔼[ V^π(s, q_s^π_new) | q_t^π_new = q ] + ∫_t^s ∫_ϵ_uπ_new(ϵ_u | q_u^π_new)∑_i ∑_j [λ_u^+(i,j) + λ_u^-(i,j)] V^π(u, q_u^π_new) d ϵ_u du
- ∫_t^s ∫_ϵ_uπ_new(ϵ_u | q_u^π_new) ∑_i ∑_j [ V^π(u, q_u^π_new + Δ_i,j) λ_u^+(i,j) + V^π(u, q_u^π_new - Δ_i,j) λ^-(i,j) ] dϵ_u du
- ∫_t^s ∂_t V(u, q_u^π_new) du
For a given policy π, the following equation holds:
∫_ϵ_tπ(ϵ_t | t, q)∑_i ∑_j λ_t^+(i,j) [ϵ_t^b - V^π(t, q) + V^π(t, q + Δ_i,j)] + λ_t^-(i,j) [ϵ_t^a - V^π(t, q) + V^π(t, q - Δ_i,j) ] dϵ_t
- γ∫_ϵ_tπ(ϵ_t | t, q) logπ(ϵ_t | t, q) + ∑_i ∑_j (∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q^i,j + ∂_t V^π(t, q) = 0
Based on the construction of π_new, and by the same calculus-of-variations argument, π_new is the maximizer of the following quantity
πmax{∫_ϵ_tπ(ϵ_t | t, q) ( ∑_i ∑_j λ_t^+(i,j) [ϵ_t^b - V^π(t, q) + V^π(t, q + Δ_i,j)]
+ λ_t^-(i,j) [ϵ_t^a - V^π(t, q) + V^π(t, q - Δ_i,j) ] ) dϵ_t - γ∫_ϵ_tπ(ϵ_t | t, q) logπ(ϵ_t | t, q) }
then there is
∫_ϵ_tπ_new(ϵ_t | t, q) ( ∑_i ∑_j λ_t^+(i,j) [ϵ_t^b - V^π(t, q) + V^π(t, q + Δ_i,j)]
+ λ_t^-(i,j) [ϵ_t^a - V^π(t, q) + V^π(t, q - Δ_i,j) ] ) dϵ_t - γ∫_ϵ_tπ_new(ϵ_t | t, q) logπ_new(ϵ_t | t, q)
+ ∑_i ∑_j (∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q^i,j + ∂_t V^π(t, q) ≥ 0
which leads to
V^π(t, q)
≤𝔼[ V^π(s, q_s^π_new) + ∫_t^s ∑_i ∑_j (∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_u^i,j, π_new
+ ∫_t^s ∫_ϵ_uπ_new(ϵ_u | u, q_u^π_new) ∑_i ∑_j [λ_u^+(i,j) ϵ_u^b(i,j) + λ_u^-(i,j) ϵ_u^a(i,j) ] dϵ_u du
- γ∫_t^s ∫_ϵ_uπ_new(ϵ_u | u, q_u^π_new) logπ_new(ϵ_u | u, q_u^π_new) dϵ_u du | q_t^π_new = q ]
Setting s = T, we have V^π(T, q_T^π_new) = V^π_new(T, q_T^π_new). Substituting this into the inequality above, one can obtain the following expression
V^π(t, q)
≤𝔼[ V^π_new(T, q_T^π_new) + ∫_t^T ∑_i ∑_j (∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_u^i,j, π_new
+ ∫_t^T ∫_ϵ_uπ_new(ϵ_u | u, q_u^π_new) ∑_i ∑_j [λ_u^+(i,j) ϵ_u^b(i,j) + λ_u^-(i,j) ϵ_u^a(i,j) ] dϵ_u du
- γ∫_t^T ∫_ϵ_uπ_new(ϵ_u | u, q_u^π_new) logπ_new(ϵ_u | u, q_u^π_new) dϵ_u du | q_t^π_new = q ]
=𝔼[ V^π_new(T, q_T^π_new) + ∫_t^T ∑_i ∑_j (∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_u^i,j, π_new
+ ∫_t^T ∫_ϵ_uπ_new(ϵ_u | u, q_u^π_new) ∑_i ∑_j [dN_u^+(i,j) ϵ_u^b(i,j) + dN_u^-(i,j) ϵ_u^a(i,j) ] dϵ_u du
- γ∫_t^T ∫_ϵ_uπ_new(ϵ_u | u, q_u^π_new) logπ_new(ϵ_u | u, q_u^π_new) dϵ_u du | q_t^π_new = q ]
= V^π_new(t, q)
§ REINFORCEMENT LEARNING ALGORITHM
In this section, we present two reinforcement learning algorithms for solving the market-making problem. The first algorithm is based on the policy improvement theorem and only requires a neural network to model the value function. The second algorithm is the actor-critic algorithm, which employs neural networks to model both the policy and the value function. The actor-critic algorithm offers the advantage of significantly reducing the training time and providing more control over the range of bid-ask spreads compared to the first algorithm.
§.§ Policy Iteration Algorithm
Consider the following derivation
V^π(t, q) = 𝔼[ ∫_t^s ∫_ϵ_uπ(ϵ_u | u, q_u^π) ∑_i ∑_j [ϵ_u^b(i,j) dN_u^+(i,j) + ϵ_u^a(i,j)dN_u^-(i,j) ]d ϵ_u
+ ∫_t^s ∑_i∑_j( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_u^i,j, π du - γ∫_t^s ∫_ϵ_uπ(ϵ_u | u, q_u^π) logπ(ϵ_u | u, q_u^π) dϵ_u du
+ V^π(s, q_s^π) | q_t^π = q
]
which leads to
0 = 𝔼[ 1/s - t∫_t^s ∫_ϵ_uπ(ϵ_u | u, q_u^π) ∑_i ∑_j [ϵ_u^b(i,j) dN_u^+(i,j) + ϵ_u^a(i,j)dN_u^-(i,j) ]d ϵ_u
+ 1/s - t∫_t^s ∑_i∑_j( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_u^i,j, π du - 1/s-tγ∫_t^s ∫_ϵ_uπ(ϵ_u | u, q_u^π) logπ(ϵ_u | u, q_u^π) dϵ_u du
+ V^π(s, q_s^π) - V^π(t, q_t^π)/s - t | q_t^π = q_t
]
As s approaches t, and assuming the value function under policy π is parameterized as V^π_θ by a neural network, we can define the temporal difference in continuous time as follows
δ_t^θ = 𝔼[V_θ^π(s, q_s^π) - V_θ^π(t, q_t^π)/s - t | q_t^π = q_t
] + ∫_ϵ_tπ(ϵ_t | t, q_t^π) ∑_i ∑_j [ϵ_t^b(i,j) dN_t^+(i,j) + ϵ_t^a(i,j)dN_t^-(i,j) ]d ϵ_t
+ ∑_i∑_j( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_t^i,j, π - γ∫_ϵ_tπ(ϵ_t | t, q_t^π) logπ(ϵ_t | t, q_t^π) dϵ_t
The loss function to be minimized is
ML(θ) = 1/2𝔼[ ∫_0^T |δ_t^θ | ^ 2 dt ]
Given the policy π, we generate a set of sample paths denoted as 𝒟 = { (t_k, S_t_k, q_k, Δ N_t_k^+, Δ N_t_k^- )_k = 0^K }_d = 1^D. Then the discrete version of the loss function is as follows
ML(θ)
= 1/2∑_𝒟∑_k = 0^K - 1( V^π_θ(t_k + 1, q_k+1^d) - V^π_θ(t_k, q_k^d)/Δ t + ∫_ϵ_t_kπ(ϵ_t_k | t_k, q_t_k^d) ∑_i ∑_j [ϵ_t_k^b(i,j) Δ N_t_k^+(i,j) + ϵ_t_k^a(i,j)Δ N_t_k^-(i,j) ]d ϵ_t_k
+ ∑_i∑_j( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_t_k^i,j, d - γ∫_ϵ_tπ(ϵ_t_k | t_k, q_t_k^d) logπ(ϵ_t_k | t_k, q_t_k^d) dϵ_t )^2 Δ t
= 1/2∑_𝒟∑_k = 0^K - 1( V^π_θ(t_k + 1, q_k+1^d) - V^π_θ(t_k, q_k^d)/Δ t + ∑_i ∑_j [ ( A_i,j/2B_i,j + V_θ^π(t_k, q_t_k^d)/2) ( Δ N_t_k^+(i,j) + Δ N_t_k^-(i,j) )
- ( V_θ^π(t_k, q_t_k^d + Δ_i,j) Δ N^+_t_k(i,j) + V_θ^π(t_k, q_t_k^d - Δ_i,j) Δ N^-_t_k(i,j) ) + ( ∂_t 𝒪^i,j + 1/2σ^2 ∂_SS𝒪^i,j) q_t_k^i,j, d]
- γ( m n (1 + log 2π) + ∑_i ∑_j logγ/2B_i,j) )^2 Δ t
The policy iteration algorithm alternates between generating sample paths under the current policy, fitting V^π_θ by minimizing this loss, and recomputing the policy from the updated value function; a sketch of the value update follows.
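In this minimal PyTorch sketch, the class and function names, layer sizes, and the handling of the per-step reward terms are our own illustrative assumptions, not the exact implementation used for the experiments; the bracketed fill, Greek, and entropy terms of the discrete loss are abstracted into a single tensor `reward` (in the full algorithm they also involve V_θ evaluated at the shifted inventories q ± Δ_i,j).

import torch
import torch.nn as nn

class ValueNet(nn.Module):
    # V_theta(t, q): maps time and the flattened inventory to a scalar value
    def __init__(self, n_options):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + n_options, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, t, q):
        # t: (batch, 1), q: (batch, n_options) -> (batch,)
        return self.net(torch.cat([t, q], dim=-1)).squeeze(-1)

def martingale_loss(value_net, t, q, t_next, q_next, reward, dt):
    # ML(theta) = 0.5 * sum_k ((V(t_{k+1}, q_{k+1}) - V(t_k, q_k)) / dt
    #             + per-step reward terms)^2 * dt, summed over sampled paths
    td = (value_net(t_next, q_next) - value_net(t, q)) / dt + reward
    return 0.5 * (td.pow(2) * dt).sum()

Each policy-iteration sweep then consists of sampling paths under the current Gaussian policy, running several epochs of a gradient-based optimiser on martingale_loss, and recomputing the policy from the updated value function.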
§.§ Actor-Critic Algorithm
To overcome the limitations of modeling only the value function and relying on the policy improvement algorithm, we can leverage the knowledge that the optimal policy follows a Gaussian distribution with a fixed covariance matrix. In this approach, we utilize a neural network to model the mean of the Gaussian policy, taking (t, q) as inputs and producing the bid-ask spreads for each option as outputs. We denote the policy by π^ϕ = 𝒩(μ^ϕ(t, q), Σ), where μ^ϕ(t, q) is the mean determined by the neural network and Σ is the fixed covariance matrix. Additionally, we employ another neural network, V^θ (t, q), to model the value function.
During the path generation process, we obtain a dataset 𝒟 = (t_k, S_k, q_k, Δ N_t_k^+, Δ N_t_k^-)_k = 0^K, which consists of time steps, underlying asset prices, inventory levels, and the corresponding changes in the buy and sell market orders. The temporal-difference error is
δ^θ_t_k = r_t_k + V^θ(t_k + 1, q_k + 1) - V^θ (t_k, q_k)
= (Δ N_t_k^+(1, 1), Δ N_t_k^-(1, 1), ..., Δ N_t_k^+(m,n), Δ N_t_k^-(m,n) ) μ^ϕ (t_k, q_k) + ∑_i ∑_j ( ∂_t 𝒪^i, j+ 1/2σ^2 ∂_SS𝒪^i, j) q^i,j_t_kΔ t
- γ( m n (1 + log 2π) + ∑_i ∑_j logγ/2B_i,j) Δ t + V^θ(t_k + 1, q_k + 1) - V^θ (t_k, q_k)
The critic loss is
L(θ) = 1/2∑_k (δ_t_k^θ)^2
and the policy gradient is
∇_ϕ J(ϕ) = ∑_k δ_t_k^θ∇_ϕlogπ^ϕ(ϵ_t_k | t_k, q_k )
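As a rough illustration, a single actor-critic update could be organized as in the following sketch, where `critic` plays the role of V^θ, `actor_mean` the role of μ^ϕ, and the per-step reward r_{t_k} is assumed to be precomputed; the interfaces and the diagonal covariance `sigma2` are assumptions on our part.

import torch

def actor_critic_step(critic, actor_mean, critic_opt, actor_opt,
                      t, q, t_next, q_next, reward, spreads, sigma2):
    # TD error: delta_k = r_k + V(t_{k+1}, q_{k+1}) - V(t_k, q_k)
    delta = reward + critic(t_next, q_next) - critic(t, q)

    # critic loss: L(theta) = 0.5 * sum_k delta_k^2
    critic_loss = 0.5 * delta.pow(2).sum()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # policy gradient: sum_k delta_k * grad_phi log pi(eps_k | t_k, q_k);
    # for a Gaussian policy with fixed diagonal covariance, log pi is
    # quadratic in the mean mu_phi(t, q)
    log_prob = -0.5 * ((spreads - actor_mean(t, q)).pow(2) / sigma2).sum(dim=-1)
    actor_loss = -(delta.detach() * log_prob).sum()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()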
§ NUMERICAL RESULTS
During the numerical analysis, we made the following parameter choices: the trading period was set to T = 1, and the trading interval to dt = 0.01. The initial price was S_0 = 100, with a drift coefficient of μ = 0.01 and a volatility coefficient of σ = 0.05. Additionally, we defined a set of multiple strikes, maturities, and an implied volatility surface for the options as follows
Strikes = [90, 95, 100, 105, 110]
Maturities = [2, 3, 4, 5]
Volatility Surface = [ 0.2 0.2 0.18 0.18; 0.14 0.14 0.12 0.12; 0.1 0.1 0.08 0.08; 0.14 0.14 0.12 0.12; 0.2 0.2 0.18 0.18 ]
To determine the bid-ask spreads, we set the parameters A and B as follows, where each row corresponds to a strike and each column corresponds to a maturity
A =
[ 36 34 32 30; 46 44 42 40; 56 54 52 50; 46 44 42 40; 36 34 32 30 ]
B = [ 3 3 3 3; 4 4 4 4; 5 5 5 5; 4 4 4 4; 3 3 3 3 ]
To model the value function in both the policy iteration algorithm and the actor-critic algorithm, we utilize a convolutional neural network with residual blocks. The neural network takes the inputs (t, q) and produces a single value as its output. In the case of the actor-critic algorithm, we employ the same neural network structure to model the mean of the Gaussian policy, but with modified outputs representing the mean of the bid-ask spreads.
The convolutional layers in the neural network share identical configurations. Each 1-dimensional convolutional layer has an input and output channel of 1, a kernel size of 3, and a stride and padding of 1. The residual block consists of two convolutional layers with the rectified linear unit (ReLU) serving as the activation function. The structure of the neural network involves mapping the inputs to a high-dimensional space through a linear layer (in our case, with a dimensionality of 1024), followed by two residual blocks and four linear layers, all using ReLU as the activation function.
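A sketch of this architecture is given below. The 1024-dimensional lift, the two residual blocks, and the convolutional hyperparameters follow the description above; the widths of the four final linear layers are not specified in the text and are chosen arbitrarily here.

import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    # two 1-D convolutions (channels 1 -> 1, kernel 3, stride 1, padding 1)
    # with ReLU activations and a skip connection
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 1, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv1d(1, 1, kernel_size=3, stride=1, padding=1)

    def forward(self, x):                    # x: (batch, 1, width)
        h = torch.relu(self.conv1(x))
        return torch.relu(x + self.conv2(h))

class ValuePolicyNet(nn.Module):
    def __init__(self, in_dim, out_dim, width=1024):
        super().__init__()
        self.lift = nn.Linear(in_dim, width)          # map (t, q) to R^1024
        self.blocks = nn.Sequential(ResBlock1d(), ResBlock1d())
        self.head = nn.Sequential(                    # four linear layers
            nn.Linear(width, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, t, q):        # t: (batch, 1), q: (batch, in_dim - 1)
        x = torch.relu(self.lift(torch.cat([t, q], dim=-1)))
        x = self.blocks(x.unsqueeze(1)).squeeze(1)    # add/remove channel axis
        return self.head(x)

Here out_dim = 1 for the value network, while for the policy mean in the actor-critic algorithm out_dim = 2mn and the final output is passed through a scaled sigmoid, as discussed below.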
Figure 1 presents the initial loss of the policy improvement algorithm. We iterate the policy update process three times, and Figures 2-4 illustrate the return distributions after each iteration. Each figure showcases a histogram representing the returns obtained from 100 simulations, accompanied by a Gaussian density estimation curve depicting the return distribution. Figure 5 offers a comparison between the Gaussian density estimation after the first policy iteration and the third policy iteration. It is evident that the return distribution after the third policy iteration exhibits a significant improvement over the distribution after the first iteration, indicating enhanced overall performance.
While the return distribution appears favorable, the bid-ask spreads shown in Figures 6-11 exhibit unrealistically large values, which explains why the return distribution reaches thousands of dollars. To illustrate this issue, we provide three examples of posted bid-ask spreads after three policy iterations. Each example corresponds to different time points and inventory configurations.
In the first example, t = 0, and the inventory q_1 is given by:
q_1 = [ 0 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0 ]
In the second example, t = 0.5, and the inventory q_2 is given by:
q_2 = [ 6 7 8 9; 3 4 5 6; 0 1 2 3; 3 4 5 6; 6 7 8 9 ]
In the third example, t = 0, and the inventory q_3 is given by:
q_3 = [ 3 4 -5 -6; 1 2 -3 -4; 0 1 -1 1; 1 -2 3 4; 2 -3 4 5 ]
Despite their unrealistic nature, these bid-ask spreads offer insights into the policy for posting them. The first and most apparent observation is that bid-ask spreads tend to be larger for options with longer maturities, while they are smaller for options with strikes closer to the stock price. Comparing example 1 and example 2, we notice that in example 2, the bid spreads are generally higher for most options, while the ask spreads are lower for most options. This behavior indicates that when there is a positive inventory, our agent adjusts bid-ask spreads to reduce the inventory, aligning with realistic scenarios. If we examine the bid-ask spreads in example 3, we find that options with negative inventories have relatively higher ask spreads. Although these spreads are unrealistic, they provide further insights into the behavior of bid-ask spread policies.
Although the policy improvement algorithm is theoretically sound, the numerical analysis reveals two significant issues. Firstly, due to the assumption made in our setting that the arrival intensity of market orders is linearly inversely related to the bid-ask spreads, the resulting bid-ask spreads become unrealistically large. This assumption leads to a distortion in the spread values, which undermines the realism of our model. Secondly, the choice of neural networks for policy iteration introduces approximation limitations. Without controlling the mean of the policy, there is a possibility that it may become negative, resulting in suboptimal outcomes. These issues highlight the need to address the unrealistic bid-ask spreads and refine the policy control mechanisms for more accurate and reliable results.
Next, we present the results obtained from the actor-critic algorithm. In this algorithm, both the value function and the mean of the Gaussian policy are modeled using the same neural network structure, differing only in the output size for the value function network and the policy mean network. Furthermore, to constrain the range of the proposed bid-ask spreads, we apply a sigmoid activation multiplied by 0.1 to the final output when modeling the mean of the Gaussian policy.
Figures 12-13 depict the loss graphs for the policy loss and critic loss, while Figure 14 displays the return distribution, which includes some negative returns but ultimately represents a profitable strategy. These results indicate a more realistic return distribution; however, achieving this comes at the cost of not converging to the optimal strategy due to the imposed limitations on the bid-ask spread range.
§ CONCLUSION
In this paper, we analyze the market-making problem with the assumption that bid-ask spreads are inversely proportional to the market order arrival intensity. However, it is important to note that this model does not accurately represent real-world conditions, resulting in unrealistic bid-ask spreads. While the bid-ask spreads can be controlled within a reasonable range using the actor-critic algorithm, this introduces a trade-off where the convergence to an optimal policy cannot be guaranteed.
These limitations serve as motivation for further studies on market making in a more realistic setting. We believe that by incorporating more realistic assumptions, the outputs of both algorithms will yield bid-ask spreads that align with each other and the actual market conditions. In such a scenario, the need to incorporate activation functions to control the range of bid-ask spreads within the actor-critic algorithm would no longer be necessary.
|
http://arxiv.org/abs/2307.03085v1
|
20230706155258
|
Ergodicity-breaking phase diagram and fractal dimensions in long-range models with generically correlated disorder
|
[
"Shilpi Roy",
"Saurabh Basu",
"Ivan M. Khaymovich"
] |
cond-mat.dis-nn
|
[
"cond-mat.dis-nn"
] | |
http://arxiv.org/abs/2307.02084v1
|
20230705074733
|
Parameter estimation for Einstein-dilaton-Gauss-Bonnet gravity with ringdown signals
|
[
"Cai-Ying Shao",
"Yu Hu",
"Cheng-Gang Shao"
] |
gr-qc
|
[
"gr-qc"
] | |
http://arxiv.org/abs/2307.00365v1
|
20230701152608
|
Understanding recent deep-learning techniques for identifying collective variables of molecular dynamics
|
[
"Wei Zhang",
"Christof Schütte"
] |
cs.LG
|
[
"cs.LG",
"math.OC"
] |
Understanding recent deep-learning techniques for identifying collective variables of molecular dynamics
Wei Zhang ^*
Christof Schütte ^*,
========================================================================================================
^*Zuse Institute Berlin, Takustrasse 7, 14195 Berlin, Germany
^Institute of Mathematics, Freie Universität Berlin, Arnimallee 6, 14195 Berlin, Germany
Email: wei.zhang@fu-berlin.de, christof.schuette@fu-berlin.de
The dynamics of a high-dimensional metastable molecular system can often be characterised by a few features of the system, i.e. collective variables (CVs).
Thanks to the rapid advance in the area of machine learning, various deep learning-based CV identification techniques have been developed in recent years, allowing accurate modelling and efficient simulation of complex
molecular systems. In this paper, we look at two different categories of deep learning-based approaches for finding
CVs, either by computing leading eigenfunctions of infinitesimal generator or transfer operator associated to the underlying dynamics, or by learning an
autoencoder via minimisation of reconstruction error. We present a concise
overview of the mathematics behind these two approaches and conduct a comparative numerical study of these two approaches on illustrative examples.
molecular dynamics, collective variable identification, eigenfunction, autoencoder, variational characterisation, deep learning
§ INTRODUCTION
Molecular dynamics (MD) simulation is a mature computational technique for the
study of biomolecular systems. It has proven valuable in a wide range of
applications, e.g. understanding functional mechanisms of proteins and discovering new drugs <cit.>. However,
the capability of direct (all-atom) MD simulations is often limited, due to
the disparity between the tiny step-sizes that the simulations have to adopt in
order to ensure numerical stability and the large timescales on which the
functionally relevant conformational changes of biomolecules, such as protein folding, typically occur.
One general approach to overcome the aforementioned challenge in MD
simulations is by utilizing the fact that in many cases the dynamics of a
high-dimensional metastable molecular system can be characterised by a few features, i.e. collective variables (CVs) of the system. Indeed, many enhanced sampling methods (see <cit.> for a review) and approaches for building surrogate models <cit.> rely on knowing CVs of the underlying molecular system.
While empirical approaches and physical/chemical intuition are still widely
adopted in choosing CVs (e.g. mass centers, bonds, or angles), it is often
difficult or even impossible to intuit biomolecular systems in real-life
applications due to their high dimensionality, as well as structural and dynamical complexities.
Thanks to the availability of numerous molecular data being generated and the
rapid advance of machine learning techniques, data-driven automatic
identification of CVs has attracted considerable research interests. Numerous machine
learning-based techniques for CV identification have emerged, such as the well-known principal component analysis (PCA) <cit.>, diffusion maps <cit.>,
ISOMAP <cit.>, sketch-map <cit.>, time-lagged independent
component analysis (TICA) <cit.>, as well as the
kernel-PCA <cit.> and kernel-TICA <cit.> using kernel
techniques. See <cit.> for reviews.
The recent developments mostly employ deep learning techniques and largely fall into two categories.
Methods in the first category are based on the operator approach for the study
of stochastic dynamical systems. These include VAMPnets <cit.> and the
variant state-free reversible VAMPnets (SRV) <cit.>, the
deep-TICA approach <cit.>, and ISOKANN <cit.>, which are
capable of learning eigenfunctions of Koopman/transfer operators. The authors
of this paper have also developed a deep learning-based method for learning
eigenfunctions of infinitesimal generator associated to overdamped Langevin dynamics <cit.>.
Methods in the second category combine deep learning with dimension reduction techniques, typically by training autoencoders <cit.>. For instance, several approaches are proposed to
iteratively train autoencoders and improve training data by “on-the-fly” enhanced sampling.
These include the Molecular Enhanced Sampling with Autoencoders (MESA) <cit.>, Free Energy Biasing and
Iterative Learning with Autoencoders (FEBILAE) <cit.>, the method based on the predictive information bottleneck
framework <cit.>, the Spectral Gap Optimisation of Order Parameters (SGOOP) <cit.>, the deep Linear Discriminant Analysis (deep-LDA) <cit.>.
Besides, various generalized autoencoders are proposed, such as the extended autoencoder (EAE) model <cit.>, the time-lagged (variational)
autoencoder <cit.>, Gaussian mixture variational autoencoder <cit.>, and EncoderMap <cit.>.
Motivated by these rapid advances, in this paper we study the two aforementioned categories of deep learning-based approaches for finding CVs, i.e. approaches for computing leading eigenfunctions of infinitesimal generator or transfer operator associated to the underlying dynamics and approaches that learn an autoencoder via minimisation of reconstruction error. We focus on theoretical aspects of these approaches in order to gain better understanding on their capabilities.
The remainder of this article is organized as follows. In Section <ref>, we present an overview of the approaches for CV identification based on computing eigenfunctions.
We give a brief introduction to infinitesimal generator and transfer operator,
then we discuss motivations for the use of eigenfunctions as CVs in studying
molecular kinetics, and finally we present variational characterisations as well as loss functions for learning eigenfunctions.
In Section <ref>, we study autoencoders. We discuss the connection with PCA and present a characterisation of the optimal (time-lagged) autoencoder.
In Section <ref>, we illustrate the numerical approaches for
learning eigenfunctions and autoencoder by applying them to two simple yet illustrative systems.
Appendix <ref> contains the proofs of two lemmas in Section <ref>.
§ EIGENFUNCTIONS AS CVS FOR THE STUDY OF MOLECULAR KINETICS ON LARGE TIMESCALES
In this section, we consider eigenfunctions of infinitesimal generator and transfer operator that are associated to the underlying dynamics.
We begin by introducing the relevant operators, whose eigenfunctions will be
the focus of this section. After that we present two different perspectives,
which motivate the use of eigenfunctions as CVs for kinetics study on large timescales. Finally, we discuss variational
formulations of leading eigenvalues and eigenfunctions, which will be useful in designing loss functions for training artificial neural networks.
§.§ Operator approach
Generator. Molecular dynamics can be modelled by stochastic differential equations (SDEs). For both simplicity and mathematical convenience,
we consider here the following SDE, often called the overdamped Langevin dynamics,
dX_s = -∇ V(X_s) ds + √(2β^-1) dW_s ,
where X_s ∈ℝ^d is the system's state at time s∈ [0,+∞),
V: ℝ^d→ℝ is a smooth potential function, W_s is a d-dimensional Brownian motion that mimics the effect of the noisy environment, and the noise strength
β=(k_BT)^-1 is proportional to the inverse of the system's temperature T.
We assume that dynamics (<ref>) is ergodic with respect to its unique
invariant measure
dμ(x) = π(x)dx, with π(x)= 1/Z e^-β V(x), x∈ℝ^d ,
where Z is a normalising constant.
The infinitesimal generator of (<ref>) is a second-order differential operator, defined by
ℒ f = -∇ V ·∇ f + 1/βΔ f = 1/β e^β V ∇·(e^-β V∇ f) ,
for a test function f: ℝ^d→ℝ. Corresponding to the reversibility of dynamics (<ref>),
the generator ℒ is self-adjoint in L^2(μ) endowed with
the weighted inner product ⟨ f, g⟩_μ:= ∫_ℝ^d fg
dμ. In fact, using (<ref>)–(<ref>) and integration by parts, one can verify that
⟨ (-ℒ) f, g⟩_μ = ⟨ f, (-ℒ) g⟩_μ = 1/β𝐄_μ(∇ f·∇ g) ,
for two C^2-smooth test functions f, g: ℝ^d→ℝ, where 𝐄_μ(·) denotes the mathematical expectation with respect to the measure μ in (<ref>).
We also define the energy
ℰ(f)=1/β𝐄_μ(|∇ f|^2) , f: ℝ^d→ℝ ,
which is considered to be +∞ if the right hand side in
(<ref>) is undefined. Under certain conditions on V, the operator -ℒ has purely discrete spectrum, consisting of a sequence of eigenvalues <cit.>
0 = λ_0 < λ_1 ≤λ_2 ≤⋯ ,
with the corresponding (orthogonal and normalised) eigenfunctions φ_0≡ 1, φ_1, φ_2, ⋯∈ L^2(μ). The leading (smallest) nontrivial eigenvalues in (<ref>) encode the large timescales of the underlying dynamics, whereas the corresponding
eigenfunctions are closely related to its metastable conformations.
Transfer operator.
In contrast to the discussion above based on SDEs, transfer operator approach offers an alternative way to study dynamical systems without specifying the
governing equations <cit.> and is hence
attractive in developing numerical algorithms. In this framework, one assumes that the trajectory data is sampled from an underlying (equilibrium) system whose
state y at time t+τ given its state x at time t can be modelled as
a discrete-time Markovian process with transition density p_τ(y|x), for
all t≥ 0, where τ>0 is called the lag-time and the process is assumed to be ergodic with respect to the
unique invariant distribution μ in (<ref>).
The transfer operator associated to this discrete-time Markovian process is
defined as <cit.>
𝒯 u (x) = 1/π(x)∫_ℝ^d p_τ(x|y) u(y) π(y)
dy , x ∈ℝ^d
for a density (with respect to μ) u: ℝ^d→ℝ^+. We assume that the detailed balance condition is satisfied, i.e. p_τ(y|x)π(x) = p_τ(x|y)π(y) for all
x,y∈ℝ^d. As a result, we have
𝒯 u (x) = 1/π(x)∫_ℝ^d p_τ(x|y) u(y) π(y) dy
= ∫_ℝ^d p_τ(y|x) u(y) dy
= 𝐄(u(X_τ)|X_0 = x) ,
which shows that in the reversible setting the transfer operator coincides
with the element (at time τ) of the semigroup associated to the
underlying process <cit.> [In literature, the
expression in the last line of (<ref>) is also
used to define Koopman operators for stochastic dynamics <cit.>. We stick
to the notion of transfer operator and note that both operators are identical for reversible
processes. We refer to <cit.> and the references therein for extensive study of stochastic dynamics using Koopman operators.].
Similar to the generator, one can show that 𝒯 is self-adjoint in L^2(μ) with respect to ⟨·,·⟩_μ (see (<ref>)).
Also, in analogy to (<ref>), for a function f∈ L^2(μ) we define the energy
ℰ_τ(f) = 1/2∫_ℝ^d∫_ℝ^d(f(y) - f(x))^2 p_τ(y|x) π(x) dx dy .
The following lemma provides an alternative expression of (<ref>) using the transfer operator 𝒯.
Denote by I: L^2(μ)→ L^2(μ) the identity map. For all f ∈ L^2(μ), we have
ℰ_τ(f) = ∫_ℝ^d[(I-𝒯)f(x)]
f(x) dμ(x) = ⟨ (I-𝒯)f, f⟩_μ .
The proof of Lemma <ref> is straightforward and we present it in Appendix <ref>.
Lemma <ref> and (<ref>) imply that all eigenvalues of
𝒯 are no larger than one. We assume that the spectrum of 𝒯 consists of discrete eigenvalues
1=ν_0 > ν_1 ≥⋯
and the largest eigenvalue ν_0=1 (corresponding to the trivial eigenfunction φ_0≡ 1) is non-degenerate.
These eigenvalues and their corresponding eigenfunctions are of great interests in applications, since they encode the information about the timescales and metastable conformations of the underlying dynamics, respectively <cit.>.
For the process defined by SDE (<ref>), in particular, the transfer
operator and the generator satisfy 𝒯=e^τℒ, which implies that their eigenvalues are related by ν_i=e^-τλ_i with the identical eigenfunctions φ_i, for i≥ 0 <cit.>.
§.§ Motivations for using eigenfunctions as CVs
There is a large amount of literature on the study of eigenfunctions of the
infinitesimal generator, transfer operator, or Koopman operator. For the
transfer operator 𝒯, for instance, many of these studies are
motivated by the connection between the (pairwise orthogonal and normalised) eigenfunctions and the consecutive actions of 𝒯 on test functions f
∈ L^2(μ), i.e. in the reversible case,
𝒯^n f(x) =
𝐄(f(X_nτ)|X_0=x) =
𝐄_μ(f) + ∑_i=1^+∞⟨ f, φ_i⟩_μν_i^n φ_i(x) , x∈ℝ^d, n = 1,2, … .
Since ν_1, ν_2, … are all smaller than 1, for large integers n,
the action 𝒯^n f is mainly determined by the leading eigenvalues
of 𝒯 in (<ref>) and the corresponding eigenfunctions.
Therefore, knowing the leading eigenvalues and eigenfunctions of 𝒯 helps study the map 𝒯^n for large n, which in turn
helps understand the behavior of the underlying dynamics at large time T=nτ. For Koopman operator, the leading eigenfunctions define the optimal linear Koopman model for features (functions) <cit.>.
Here, we contribute to this discussion by providing two different perspectives
that directly connect eigenfunctions to the underlying dynamics and to the choices of CVs.
We assume that the dynamics satisfies SDE (<ref>) and we will work with its generator ℒ. Most of the results below can be extended to a more
general setting, e.g. overdamped Langevin dynamics with state-dependent diffusion coefficients.
It is also possible to obtain parallel results for the discrete-time Markovian
process involving the transfer operator 𝒯 [This is an ongoing work that will be published in the future.].
Let ξ = (ξ_1, ξ_2, …, ξ_k)^⊤: ℝ^d→ℝ^k be a smooth CV map, where 1 < k≪ d. Ito's formula gives
dξ(X_s) = ℒξ(X_s) ds + √(2β^-1)∇ξ(X_s) dW_s ,
where ∇ξ(x) ∈ℝ^k× d denotes the Jacobian matrix of ξ at x∈ℝ^d.
Given the projection dimension k, we are interested in finding a good CV map
ξ that is both non-trivial and non-degenerate. In other words, ξ
should be non-constant and the components ξ_1,ξ_2,…, ξ_k are
linearly independent (or, the image of ξ spans a k-dimensional space).
These two requirements can be met by imposing the following conditions without loss of generality
𝐄_μ(ξ_i) = 0 , ⟨ξ_i, ξ_j⟩_μ =
𝐄_μ(ξ_iξ_j)=δ_ij , 1 ≤ i ≤ j ≤ k .
Optimal CVs for the study of slow motions.
For the first perspective, we make an analogy between the dynamics of (<ref>) on large timescales and the slow motions in it.
This suggests that a good CV map ξ that is capable of capturing the behavior of
(<ref>) on large timescales should meet the following criterion:
ξ(X_s) evolves much more slowly compared to the dynamics X_s itself. (C1)
Since ξ(X_s) satisfies SDE (<ref>), to meet criterion (C1) it
is therefore natural to require the magnitude of both terms on the right hand side of
(<ref>) to be small (in the sense of average with respect to the
invariant distribution μ in (<ref>)). The latter can be formulated as an optimisation problem
min_ξ_1, …, ξ_k∑_i=1^k ω_i ∫_ℝ^d(|ℒξ_i|^2(x)
+ |∇ξ_i|^2(x)) dμ(x) ,
where ω_1 ≥ω_2 ≥…≥ω_k >0 are weights assigned
to the k equations in (<ref>). One can choose the weights to be identical,
but using pairwise distinct weights could help eliminate non-uniqueness of the optimiser of (<ref>) due to permutations.
We make the following claim concerning the optimiser of (<ref>).
Assume that -ℒ has purely discrete spectrum consisting of the eigenvalues in (<ref>).
Then, the minimum of (<ref>) is attained by the first k (non-trivial) eigenfunctions of -ℒ, i.e. when ξ_i=φ_i for i=1,…, k.
Using the identities in (<ref>), one can reformulate the optimisation problem (<ref>) as
min_ξ_1, …, ξ_k∑_i=1^k ω_i ⟨ [(-ℒ)^2 + β (-ℒ)] ξ_i, ξ_i⟩_μ .
The conclusion follows once we show that the minimum of (<ref>) is attained when ξ_i=φ_i for i=1,…, k.
This can be done straightforwardly by repeating the proof of Theorem <ref> below (see <cit.>) for the operator (-ℒ)^2 + β (-ℒ)
and using the fact that both (-ℒ)^2 + β (-ℒ) and -ℒ have the same set of eigenfunctions.
It is not difficult to see that the eigenfunctions φ_1, …, φ_k actually minimise both terms in the objective (<ref>) simultaneously (subject to (<ref>)).
The following identity provides an explicit expression for the first term in (<ref>), which involves the operator (-ℒ)^2.
For any smooth function f:ℝ^d→ℝ such that ⟨ (-ℒ)^2 f, f⟩_μ <+∞, we have
∫_ℝ^d |ℒf|^2 dμ = ⟨ (-ℒ)^2 f, f⟩_μ = 1/β∫_ℝ^d[Hess V(∇ f, ∇ f) + 1/β|∇^2 f|^2_F ] dμ ,
where |∇^2 f|_F denotes the Frobenius norm of the matrix ∇^2 f.
The proof of Lemma <ref> is given in Appendix <ref>.
The integrand of the rightmost integral in (<ref>) consists of the Hessian of the potential V and a regularising term.
Loosely speaking, since the eigenfunctions minimise (<ref>)
subject to (<ref>), (<ref>) reveals the connection between the (global) eigenfunctions of (-ℒ) and the (local) eigenvectors of the Hessian of the potential V.
A final remark on (<ref>) is that it does not rely on the
specific form of the SDE. Therefore, in principle it can be used as a criterion
for good CVs in the case where the SDE has a more general form, e.g. a non-reversible SDE or underdamped Langevin dynamics. In these general settings, it will be
interesting to study whether (<ref>) can be solved efficiently using approaches such as physics-informed neural networks (PINN) <cit.>.
Optimal CVs for building effective dynamics.
The second perspective is related to the effective dynamics of (<ref>) using conditional expectations <cit.>.
Specifically, note that the SDE of ξ(X_s) given by (<ref>) is non-closed, in the sense
that the terms on its right hand side still depend on the full state X_s ∈ℝ^d.
The authors in <cit.> proposed an effective dynamics as a Markovian approximation of (<ref>), which is described by the SDE
dz(s) = b(z(s)) ds + √(2β^-1)σ(z(s)) dw(s) ,
where w(s) is a k-dimensional Brownian motion, the coefficients b: ℝ^k →ℝ^k and σ∈ℝ^k× k are defined by
b_l(z) = 𝐄_μ_z(ℒξ_l) , 1 ≤ l ≤ k,
(σσ^⊤)(z) = 𝐄_μ_z(∇ξ∇ξ^⊤) , for z ∈ℝ^k ,
respectively. In the above, for z ∈ℝ^k, 𝐄_μ_z(·) denotes the conditional expectation on the level set Σ_z = {x∈ℝ^d |ξ(x) = z} with respect to the so-called conditional measure μ_z on Σ_z:
dμ_z(x) = 1/Q(z)e^-β V(x)/Z[(∇ξ∇ξ^⊤)(x)]^-1/2 dν_z(x)
= 1/Q(z)e^-β V(x)/Zδ(ξ(x)-z) dx ,
where the first equality follows from the co-area formula, Q(z) is a normalising constant, and ν_z denotes the surface measure on Σ_z. We refer to
<cit.> for detailed discussions about the definition and properties of the effective dynamics (<ref>).
Note that the effective dynamics (<ref>) can be defined with a
general CV map ξ. A natural question is how to choose ξ such that
the resulting effective dynamics is a good approximation of the original
dynamics <cit.>.
One way to quantify the approximation quality of (<ref>) is by
comparing its timescales to the timescales of the original dynamics <cit.>.
For the overdamped Langevin dynamics (<ref>), in particular, the infinitesimal generator of its effective dynamics (<ref>), denoted by
ℒ, is again self-adjoint in an appropriate Hilbert space <cit.>. Assume that -ℒ has purely discrete spectrum, which consists of eigenvalues
0=λ_0 < λ_1 ≤λ_2 ≤⋯,
and let φ_i: ℝ^k→ℝ be the corresponding orthonormal eigenfunctions.
The following result estimates the approximation error of the effective dynamics in terms of eigenvalues.
Recall the energy ℰ defined in (<ref>). For i=1,2,…, we have
λ_i ≤λ_i ≤λ_i + ℰ(φ_i - φ_i ∘ξ) .
In particular, when ξ(x) = (φ_1(x), φ_2(x), ⋯, φ_k(x))^⊤∈ℝ^k, we have λ_i = λ_i, for 1 ≤ i ≤ k.
Proposition <ref> implies that, for a general CV map ξ, the
eigenvalues associated to the effective dynamics are always larger than or equal to the
corresponding true eigenvalues, and the approximation error depends on the closeness between the corresponding
eigenfunctions (measured by the energy ℰ).
Also, choosing eigenfunctions associated to the original dynamics as the CV map ξ yields the optimal effective
dynamics (<ref>), in the sense that it preserves the corresponding eigenvalues (timescales).
§.§ From variational characterisations to loss functions
In the following, we discuss variational characterisations of eigenfunctions for both generator and transfer operator.
These characterisations are useful in developing numerical algorithms <cit.>, in
particular in designing loss functions in recent deep learning-based approaches <cit.>.
For the generator ℒ, note that (<ref>) has already
given a variational characterisation of the leading eigenfunctions φ_1, …, φ_k thanks to Proposition <ref>.
However, as mentioned in Section <ref>, the leading
eigenfunctions actually minimise both terms in (<ref>) simultaneously and a simpler characterisation is preferred for numerical purposes.
In this regard, we record the following characterisation obtained in <cit.>.
Let k∈ℕ and ω_1 ≥…≥ω_k >0. Define ℋ^1 := {f∈ L^2(μ) | 𝐄_μ(f)=0, ⟨ (-ℒ) f, f⟩_μ < +∞}.
We have
∑_i=1^k ω_iλ_i =min_f_1,…, f_k∈ℋ^1∑_i=1^k ω_i ℰ(f_i) ,
where ℰ denotes the energy (<ref>), and the minimisation is over all f_1,f_2,…, f_k∈ℋ^1 such that
⟨ f_i,f_j⟩_μ = δ_ij , ∀ i,j ∈{1,…,k} .
Moreover, the minimum in (<ref>) is achieved when f_i=φ_i for 1 ≤ i ≤ k.
To apply Theorem <ref> in designing learning algorithms, we use the right hand side of (<ref>) as objective and
add penalty term to it in order to incorporate the constraints (<ref>).
In the end, we obtain the loss function that can be used to learn eigenfunctions of the generator by training neural networks:
Loss(f_1, f_2, …, f_k) =
1/β∑_i=1^kω_i 𝐄^data(|∇
f_i|^2)/Var^data (f_i) + α∑_1≤ i_1 ≤ i_2 ≤ k(Cov^data(f_i_1,f_i_2) - δ_i_1i_2)^2 ,
where α is a penalty constant, and 𝐄^data,
Var^data, Cov^data denote
empirical estimators that approximate the mean, variance, and co-variance with respect to the measure μ, respectively.
For brevity, we omit further discussions on the loss (<ref>), and we refer to <cit.> for more details.
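For concreteness, a minimal PyTorch sketch of this loss for k = 1 could take the following form; the network f_net, the batch handling, and the exact shape of the penalty are our own assumptions rather than the implementation of <cit.>.

import torch

def generator_eigen_loss(f_net, x, beta=4.0, alpha=10.0):
    # Rayleigh quotient beta^{-1} E^data(|grad f|^2) / Var^data(f), plus a
    # penalty enforcing the normalisation <f, f>_mu = 1
    x = x.detach().requires_grad_(True)
    f = f_net(x).squeeze(-1)                          # f(x), shape (batch,)
    (grad_f,) = torch.autograd.grad(f.sum(), x, create_graph=True)
    var = f.var(unbiased=False)
    rayleigh = grad_f.pow(2).sum(dim=-1).mean() / (beta * var)
    penalty = (var - 1.0).pow(2)                      # (Cov(f, f) - 1)^2
    return rayleigh + alpha * penalty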
For the transfer operator 𝒯, using the same proof of Theorem <ref> (see <cit.>)
and Lemma <ref> we can prove the following variational characterisation.
Let k∈ℕ and ω_1 ≥…≥ω_k >0.
Assume that 𝒯 has discrete spectrum consisting of the eigenvalues in (<ref>) with the corresponding eigenfunctions φ_i, i≥ 0.
Define L^2_0(μ) := {f∈ L^2(μ) | 𝐄_μ(f)=0}. We have
∑_i=1^k ω_i(1-ν_i)=min_f_1,…, f_k∈ L^2_0(μ)∑_i=1^k ω_i ℰ_τ(f_i) ,
where ℰ_τ is the energy defined in (<ref>)
for 𝒯, and the minimisation is over all f_1,f_2,…, f_k∈
L^2_0(μ) under the constraints (<ref>).
Moreover, the minimum in (<ref>) is achieved when f_i=φ_i for 1 ≤ i ≤ k.
We note that similar variational characterisations for eigenfunctions of transfer operator or Koopman operator have been studied in <cit.>.
As in the case of generator, Theorem <ref>
motivates the following loss function for learning eigenfunctions of the
transfer operator 𝒯 [For overdamped dynamics (<ref>),
we have (1 - ν_i)/τ = (1-e^-τλ_i)/τ≈λ_i when τ is small, where λ_i
is the corresponding eigenvalue of the generator. Based on this relation, we
include the constant 1/τ in the first term of
(<ref>). Also note that in contrast to (<ref>)
time-series data is required in order to use the loss (<ref>) in training.]:
Loss_τ(f_1, f_2, …, f_k) =
1/2τ∑_i=1^kω_i 𝐄^data|f_i(X_· +τ) - f_i(X_·)|^2/Var^data (f_i) + α∑_1≤ i_1 ≤ i_2 ≤ k(Cov^data(f_i_1,f_i_2) - δ_i_1i_2)^2 .
Compared to VAMPnets <cit.>, the loss
(<ref>) imposes orthogonality constraints
(<ref>) explicitly and directly targets the leading eigenfunctions rather than basis of eigenspaces. Also, as opposed to the approach in <cit.>, training with both losses (<ref>) and (<ref>) does not require backpropagation on matrix eigenvalue problems.
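An analogous sketch of this loss for k = 1, evaluated on time-lagged pairs (X_t, X_{t+τ}) from the trajectory, might read as follows; again the names and implementation details are assumptions.

import torch

def transfer_eigen_loss(f_net, x, x_lag, tau=1.0, alpha=10.0):
    # (2 tau)^{-1} E^data|f(X_{t+tau}) - f(X_t)|^2 / Var^data(f), plus the
    # same variance-normalisation penalty as in the generator case
    f = f_net(x).squeeze(-1)
    f_lag = f_net(x_lag).squeeze(-1)
    var = f.var(unbiased=False)
    dirichlet = 0.5 * (f_lag - f).pow(2).mean() / (tau * var)
    penalty = (var - 1.0).pow(2)
    return dirichlet + alpha * penalty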
§ ENCODER AS CVS FOR LOW-DIMENSIONAL REPRESENTATION OF MOLECULAR CONFIGURATIONS
In this section, we briefly discuss autoencoders in the context of CV identification for molecular dynamics.
An autoencoder <cit.> on ℝ^d is a function f that maps
an input x ∈ℝ^d to an output y∈ℝ^d by
passing through an intermediate (latent) space ℝ^k, where 1 ≤ k < d.
It can be written in the form f=f_dec∘ f_enc, where f_enc: ℝ^d→ℝ^k and f_dec: ℝ^k→ℝ^d are called an encoder and a decoder, respectively.
The integer k is called the encoded dimension (or bottleneck dimension). In other words, under the mapping of the autoencoder f, the input x is first
mapped to a state z in the latent space ℝ^k by the encoder f_enc, which is then mapped to y in the original space by the decoder f_dec.
In practice, both the encoder and the decoder are represented by artificial neural networks (see Figure <ref>).
Given a set of data x^(1), x^(2), …, x^(N)∈ℝ^d, they are typically trained by minimising the empirical reconstruction error
Loss^AE(f_enc, f_dec) = 1/N∑_i=1^N |f_dec∘ f_enc(x^(i)) - x^(i)|^2 .
In the context of CV identification for molecular systems, the trained encoder
f_enc is used to define the CV map, i.e. ξ=f_enc. Note that the loss (<ref>) is invariant under permutation of the training data.
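As an illustration, a minimal PyTorch autoencoder with a one-dimensional bottleneck and the reconstruction loss above could be set up as follows; the tanh activations and the hidden-layer counts mirror the networks used in the numerical examples below, and the rest is an assumption.

import torch
import torch.nn as nn

def mlp(sizes):
    # tanh multilayer perceptron with a linear output layer
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.Tanh()]
    return nn.Sequential(*layers[:-1])

class AutoEncoder(nn.Module):
    def __init__(self, d=2, k=1, width=30):
        super().__init__()
        self.encoder = mlp([d] + [width] * 4 + [k])   # 4 hidden layers
        self.decoder = mlp([k] + [width] * 3 + [d])   # 3 hidden layers

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_loss(model, x):
    # (1/N) sum_i |f_dec(f_enc(x_i)) - x_i|^2
    return (model(x) - x).pow(2).sum(dim=-1).mean()

After training, the encoder alone is retained and evaluated as the CV map ξ.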
For trajectory data, instead of (<ref>) it would be beneficial to employ a loss that incorporates temporal information in the data.
In this regard, several variants, such as time-lagged autoencoders <cit.> and the extended autoencoder using
committor function <cit.>, have been proposed in order to
learn low-dimensional representations of the system that can capture its dynamics.
Connection with PCA.
An autoencoder can be viewed as a nonlinear generalisation of PCA, which is a widely used technique for dimensionality reduction.
To elucidate their connection, let us assume without loss of generality that the data satisfies 1/N∑_i=1^N x^(i)=0 and recall that the PCA algorithm actually solves the optimisation problem
min_V_k∑_i=1^N | x^(i) - V_kV_k^⊤ x^(i)|^2
among matrices V_k ∈ℝ^d× k with k orthogonal unit vectors as columns <cit.>.
Comparing (<ref>) and
(<ref>), it is apparent that autoencoder can be considered as a nonlinear generalisation of PCA and it
reduces to PCA when the encoder and decoder are restricted to linear maps
given by f_enc(x) := V_k^⊤ x and f_dec(z) := V_k z for x∈ℝ^d, z∈ℝ^k, respectively.
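To make the reduction concrete, here is a small NumPy sketch of PCA viewed as the optimal linear autoencoder (the function name and interface are ours).

import numpy as np

def pca_autoencoder(X, k):
    # with centred data X of shape (N, d), the optimal rank-k linear pair is
    # f_enc(x) = V_k^T x and f_dec(z) = V_k z, where the columns of V_k are
    # the top-k right singular vectors of X
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Vk = Vt[:k].T                          # (d, k), orthonormal columns
    encode = lambda x: x @ Vk              # project to the latent space
    decode = lambda z: z @ Vk.T            # reconstruct in the original space
    return encode, decode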
Characterisation of time-lagged autoencoders.
We give a characterisation of the optimal encoder and the optimal decoder in the time-lagged autoencoders <cit.>.
Assume that the data x^(0), x^(1), …, x^(i),… comes from the
trajectory of an underlying ergodic process with invariant measure μ (<ref>) at time iΔ t, where Δ t>0 and
i=0,1,…. Also assume that, for some τ>0, the state y of the underlying system after time τ given its current state x can be described as an ergodic Markovian process with the transition density
p_τ(y|x) (see the discussion on transfer operator in Section <ref>).
For simplicity, we assume τ=jΔ t for some integer j>0. The time-lagged autoencoder is an autoencoder trained with the loss
Loss^AE_τ(f_enc, f_dec) = 1/N-j∑_i=0^N-j-1 |f_dec∘ f_enc(x^(i)) - x^(i+j)|^2 ,
which reduces to the standard reconstruction loss (<ref>) when j=0.
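In code, the only change relative to the reconstruction loss above is the regression target; reusing the AutoEncoder sketch from the previous section, a hedged version reads:

def time_lagged_loss(model, x_t, x_t_lag):
    # reconstruct the state after lag-time tau = j * dt instead of x_t itself
    return (model(x_t) - x_t_lag).pow(2).sum(dim=-1).mean()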
Let us consider the limit of (<ref>) when N→ +∞. Given the encoder f_enc, denote by μ^f_enc_z the conditional measure
on the level set Σ^f_enc_z:={x∈ℝ^d|f_enc(x)=z} for z ∈ℝ^k (see (<ref>) for definition) and let Q^f_enc(z) be the corresponding normalising constant in (<ref>).
Using (<ref>) and ergodicity, we have
Loss^AE_τ(f_enc, f_dec) = lim_N→ +∞1/N-j∑_i=0^N-j-1 |f_dec∘ f_enc(x^(i)) - x^(i+j)|^2
= ∫_x∈ℝ^d∫_y∈ℝ^d |f_dec∘ f_enc(x) - y|^2 p_τ(y|x) dμ(x) dy
= ∫_x∈ℝ^d∫_y∈ℝ^d[∫_z∈ℝ^k |f_dec(z)- y|^2δ(f_enc(x)-z)dz] p_τ(y|x) dμ(x) dy
= ∫_z∈ℝ^k[∫_y∈ℝ^d∫_x∈Σ^f_enc_z |f_dec(z)- y|^2 p_τ(y|x) dμ^f_enc_z(x) dy] Q^f_enc(z) dz
= ∫_z∈ℝ^k[𝐄_y ∼μ^f_enc_z,τ|f_dec(z)- y|^2] Q^f_enc(z) dz
= 𝐄_z ∼μ^f_enc[𝐄_y ∼μ^f_enc_z,τ|f_dec(z)- y|^2] ,
where dμ^f_enc= Q^f_enc(z) dz is a probability measure
on ℝ^k, and we have denoted by μ^f_enc_z,τ the probability measure on ℝ^d defined by
dμ^f_enc_z,τ(y)= (∫_x∈Σ^f_enc_z p_τ(y|x) dμ^f_enc_z(x)) dy , y ∈ℝ^d .
Using the simple identity
min_y' ∈ℝ^d𝐄_y ∼μ^f_enc_z,τ|y- y'|^2 =
𝐕𝐚𝐫_y∼μ^f_enc_z,τ (y)
where the minimum is attained at y'= 𝐄_y ∼μ^f_enc_z,τ (y), we can finally write the minimisation of (<ref>) as
min_f_enc,f_decLoss^AE_τ(f_enc, f_dec)
= min_f_encmin_f_dec𝐄_z ∼μ^f_enc[𝐄_y ∼μ^f_enc_z,τ|f_dec(z)- y|^2]
= min_f_enc𝐄_z ∼μ^f_enc[min_y'=f_dec(z)𝐄_y ∼μ^f_enc_z,τ|y'- y|^2]
= min_f_enc𝐄_z ∼μ^f_enc[𝐕𝐚𝐫_y∼μ^f_enc_z,τ (y)] .
Note that (<ref>) is the distribution of y after time τ starting from points x on the levelset Σ^f_enc_z distributed
according to the conditional measure μ^f_enc_z. To summarize, (<ref>) implies that, when N→ +∞,
training the time-lagged autoencoder yields (in theory) the encoder map f_enc
that minimises the average variance of the future states y (after time
τ) of points x on Σ^f_enc_z distributed according to
μ^f_enc_z, and the decoder given by the mean of the future
states y, i.e. f_dec(z) = 𝐄_y∼μ^f_enc_z,τ (y) for z∈ℝ^k.
Similar results hold for the standard autoencoder with the reconstruction loss (<ref>).
In fact, choosing τ=0 in the above derivation leads to the conclusion that the optimal encoder f_enc minimises the average variance of the measures μ^f_enc_z on the levelsets.
To conclude, we note that although the loss (<ref>) in time-lagged autoencoders encodes temporal information of data,
from the characterisation (<ref>) it is not
completely clear that this information will be able to yield encoders that are
suitable to define good CVs (in the sense discussed in
Section <ref>). In the next section, we will further compare
autoencoders and eigenfunctions on concrete numerical examples. We refer
to <cit.> for discussions on the capability and limitations of time-lagged autoencoders.
§ NUMERICAL EXAMPLES
In this section, we show numerical results of eigenfunctions and autoencoders for two simple two-dimensional systems.
For eigenfunctions, we only consider the transfer operator and the loss (<ref>) due to its simplicity.
A numerical study of computing eigenfunctions for the generator using the loss (<ref>) can be found in <cit.>. The code for training is implemented in PyTorch.
§.§ First example
The first system satisfies the SDE (<ref>) with β=4.0 and the potential (taken from <cit.>)
V(x_1, x_2) = (x_1^2-1)^2 + 1/ϵ (x_1^2 + x_2 - 1)^2, (x_1, x_2)^⊤∈ℝ^2 ,
where we choose ϵ=0.5. As shown in Figure <ref>,
there are two metastable regions in the state space, and the system can
transit from one to the other through a curved transition channel. We sample
the trajectory of (<ref>) for 10^5 steps using Euler-Maruyama scheme with time step-size Δ t=0.005.
The sampled states are recorded every 2 steps. This results in a dataset
consisting of 5× 10^4 states, which will be used to train the neural
networks [Note that the empirical distribution of the data (shown in Figure <ref>) slightly differs from the true invariant distribution μ of the dynamics.
However, there are sufficiently many samples in both metastable regions and also in the transition region.
In particular, the discrepancy between the empirical distribution and the true
invariant distribution is not the main factor that determines the quality of the numerical results.].
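A NumPy sketch of this data-generation step for the potential above with β = 4.0 and ε = 0.5 is given below; the initial condition and the reuse of the seed 2046 outside of training are assumptions on our part.

import numpy as np

beta, eps, dt, n_steps = 4.0, 0.5, 0.005, 10**5

def grad_V(x):
    # gradient of V(x1, x2) = (x1^2 - 1)^2 + (x1^2 + x2 - 1)^2 / eps
    x1, x2 = x
    u = x1**2 + x2 - 1.0
    return np.array([4.0 * x1 * (x1**2 - 1.0) + 4.0 * x1 * u / eps,
                     2.0 * u / eps])

rng = np.random.default_rng(2046)
x = np.array([-1.0, 0.0])              # assumed start in a metastable region
states = []
for step in range(n_steps):
    # Euler-Maruyama step for dX = -grad V(X) dt + sqrt(2/beta) dW
    x = x - grad_V(x) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(2)
    if step % 2 == 0:                  # record every 2 steps
        states.append(x.copy())
data = np.asarray(states)              # 5 * 10^4 states for training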
We train neural networks with the loss (<ref>) for standard autoencoders and the loss (<ref>) for time-lagged autoencoders.
In each test, since the total dimension is 2, we choose the bottleneck
dimension k=1. The encoder is represented by a neural network that has an
input layer of size 2, an output layer of size 1, and 4 hidden layers
of size 30 each. The decoder is represented by a neural network that has an
input layer of size 1, an output layer of size 2, and 3 hidden layers
of size 30 each. We take tanh as activation function in all neural
networks. In the training, we use Adam optimiser <cit.> with batch size 2× 10^4 and learning rate 0.005.
The random seed is fixed to be 2046 and the total number of training epochs
is set to 500. Figure <ref> shows the trained autoencoders with different lag-times.
As one can see there, for both the standard autoencoder (τ=0.0) and the time-lagged autoencoder with a small lag-time (τ=0.5), the contour lines of the
trained encoder match well with the stiff direction of the potential. The
curves determined by the image of the decoders are also close to the
transition path. However, the results for time-lagged autoencoders become unsatisfactory when the lag-time is chosen as 1.0 and 2.0.
We also learn the first eigenfunction φ_1 of the transfer operator
using the loss (<ref>), where we choose k=1, the coefficient ω_1=1.0, lag-time τ=1.0, and the penalty constant α=10.0.
The same dataset and the same training parameters as in the training of autoencoders are used, except that for the eigenfunction we employ a neural network that has 3 hidden layers of size 20 each.
The learned eigenfunction is shown in Figure <ref>. We can see
that the eigenfunction is indeed capable of identifying the two metastable regions and its contour lines are well aligned with the stiff directions of the potential in the transition region (but not inside the metastable regions).
§.§ Second example
In the second example, we consider a system that satisfies the SDE
(<ref>) with β=1.5 and the potential
V(x_1, x_2) = e^1.5 x_2^2/1 + e^5(x_1^2-1) - 4 e^-4 (x_1-2)^2-0.4x_2^2 -5 e^-4 (x_1+2)^2-0.4x_2^2 + 0.2 (x_1^4 + x_2^4) + 0.5 e^-2x_1^2 ,
for (x_1,x_2)^⊤∈ℝ^2. As shown in
Figure <ref>, there are again two metastable regions. The
region on the left contains the global minimum point of V, and the region on the right contains a local minimum point of V.
To prepare training data, we sample the trajectory of (<ref>) using Euler-Maruyama scheme with the
same parameters as in the previous example, except that in this example we sample in total 5 × 10^5 steps and by recording states every 2 steps we obtain a dataset of size 2.5× 10^5.
We learn the autoencoder with the standard reconstruction loss (<ref>) and the eigenfunction φ_1 of transfer operator with loss (<ref>), respectively.
For both autoencoder and eigenfunction, we use the same network architectures as in the previous example. We also use the same training parameters, except
that in this example a larger batch-size 10^5 is used and the total number of training epochs is set to 1000.
The lag-time for transfer operator is τ=0.5. Figure <ref> shows the learned autoencoder and the eigenfunction φ_1.
As one can see there, since the autoencoder is trained to minimise the
reconstruction error and most sampled data falls into the two metastable
regions, the contour lines of the learned encoder match the stiff directions of the potential in the metastable regions, but the transition region is poorly characterised.
On the contrary, the learned eigenfunction φ_1, while being close to constant inside the two metastable regions, gives a good parameterisation of the transition region.
We also tried time-lagged autoencoders with lag-time τ=0.5 and τ=1.0
(results are not shown here), but we were not able to obtain results as satisfactory as the learned eigenfunction in Figure <ref>.
§ ACKNOWLEDGEMENT
W. Zhang thanks Tony Lelièvre and Gabriel Stolz for fruitful discussions on autoencoders.
The work of C. Schütte and W. Zhang is supported by the DFG under Germany's Excellence Strategy-MATH+: The Berlin Mathematics Research Centre (EXC-2046/1)-project ID:390685689.
§ PROOFS OF LEMMA <REF> AND LEMMA <REF>
Applying the detailed balance condition and the second identity in (<ref>), we can derive
ℰ_τ(f) = 1/2∫_ℝ^d∫_ℝ^d(f(y) - f(x))^2 p_τ(y|x) π(x) dx dy
= 1/2∫_ℝ^d∫_ℝ^d(f(y)^2 - 2f(x)f(y) + f(x)^2) p_τ(y|x) π(x) dx dy
= ∫_ℝ^d∫_ℝ^d f(x)^2 π(x) dx - ∫_ℝ^d∫_ℝ^d f(x)f(y) p_τ(y|x) π(x) dx dy
= ∫_ℝ^d[(I-𝒯)f(x)] f(x) dμ(x)
= ⟨ (I-𝒯)f, f⟩_μ .
It is straightforward to verify the identity (Bochner's formula)
1/2Δ |∇ f|^2 = ∇(Δ f) ·∇ f + |∇^2 f|_F^2 ,
where ∇^2 f denotes the matrix with entries ∂^2
f/∂ x_i ∂ x_j for 1 ≤ i,j ≤ d and |∇^2 f|_F is its Frobenius norm. Using (<ref>), (<ref>), and (<ref>), we can derive
∫_ℝ^d |ℒf|^2 dμ
= -1/β∫_ℝ^d∇ f·∇ (ℒf) dμ
= -1/β∫_ℝ^d∇ f·∇ (-∇ V ·∇ f + 1/βΔ f) dμ
= 1/β∫_ℝ^d[Hess V(∇ f, ∇ f) + 1/2∇ |∇ f|^2·∇ V - 1/β∇ f ·∇Δ f] dμ
= 1/β∫_ℝ^d[Hess V(∇ f, ∇ f) + 1/2∇ |∇ f|^2·∇ V - 1/β(1/2Δ |∇ f|^2 - |∇^2 f|^2_F )] dμ
= 1/β∫_ℝ^d[Hess V(∇ f, ∇ f) - 1/2ℒ (|∇ f|^2) + 1/β |∇^2 f|^2_F ] dμ
= 1/β∫_ℝ^d[Hess V(∇ f, ∇ f) + 1/β|∇^2 f|_F^2 ] dμ ,
where the last equality follows from the fact that ∫ℒ |∇ f|^2 dμ = 0.
|
http://arxiv.org/abs/2307.02827v1
|
20230706075047
|
Cell-Free XL-MIMO Meets Multi-Agent Reinforcement Learning: Architectures, Challenges, and Future Directions
|
[
"Zhilong Liu",
"Jiayi Zhang",
"Ziheng Liu",
"Hongyang Du",
"Zhe Wang",
"Dusit Niyato",
"Mohsen Guizani",
"Bo Ai"
] |
cs.IT
|
[
"cs.IT",
"eess.SP",
"math.IT"
] |
Cell-Free XL-MIMO Meets Multi-Agent Reinforcement Learning: Architectures, Challenges, and Future Directions
Zhilong Liu, Graduate Student Member, IEEE,
Jiayi Zhang, Senior Member, IEEE, Ziheng Liu, Hongyang Du, Zhe Wang,
Dusit Niyato, Fellow, IEEE, Mohsen Guizani, Fellow, IEEE, and Bo Ai, Fellow, IEEE
Z. Liu, J. Zhang, Z. Liu, Z. Wang and B. Ai are with Beijing Jiaotong University; H. Du and D. Niyato are with Nanyang Technological University; Mohsen Guizani is with Mohamed Bin Zayed University of Artificial Intelligence.
============================================================================================================================================================================================================================================================================================================================================================================================================================================
Cell-free massive multiple-input multiple-output (mMIMO) and extremely large-scale MIMO (XL-MIMO) are regarded as promising innovations for the forthcoming generation of wireless communication systems. Their significant advantages in augmenting the number of degrees of freedom have garnered considerable interest. In this article, we first review the essential opportunities and challenges induced by XL-MIMO systems. We then propose the enhanced paradigm of cell-free XL-MIMO, which incorporates multi-agent reinforcement learning (MARL) to provide a distributed strategy for tackling the problem of high-dimension signal processing and costly energy consumption. Based on the unique near-field characteristics, we propose two categories of the low-complexity design, i.e., antenna selection and power control, to adapt to different cell-free XL-MIMO scenarios and achieve the maximum data rate. For inspiration, several critical future research directions pertaining to green cell-free XL-MIMO systems are presented.
§ INTRODUCTION
The next generation of wireless communication systems, i.e., the sixth-generation (6G), is expected to deliver unprecedented levels of performance, particularly in digital twins, integrated sensing and communication, and extended reality scenarios. The commercialization of massive multiple-input multiple-output (mMIMO) technology has played a significant role in wireless network development. However, conventional MIMO techniques face limitations in meeting the complex requirements of 6G use cases. In light of this challenge, emerging technologies such as cell-free mMIMO and extremely large-scale MIMO (XL-MIMO) are being proposed to overcome the capacity constraints of conventional MIMO. These advanced technologies are critical to support the massive connectivity and all-round multidimensional access to air, earth, and sea that will enable the Internet of Everything.
As a high-profile technology, the novel cell-free mMIMO holds great promise in meeting the growing demand for increasing network throughput and low-latency transmission. By deploying a large number of geographically distributed access points (APs) connected to a central processing unit (CPU), cell-free mMIMO can effectively address the inter-cell interference that exists in the intrinsic implementation of “cell-centric” network <cit.>. Similarly, the promising XL-MIMO technology inherits the prior cellular network with the world-shaking change of base stations (BSs) to adapt the communication variations from far-field to near-field since the massive antennas deployment <cit.>. Moreover, the XL-MIMO can also provide a much stronger beamforming gain as well as harvest abundant degrees of freedom (DoFs) to compensate for the severe path loss in the millimeter-wave and terahertz band communications.
From the perspective of electromagnetic (EM) fields, the addition of antennas in XL-MIMO is a superficial phenomenon, and the really significant changes occur in the analysis methods, where the spherical wavefront-based method replaces the planar wavefront-based one <cit.>. In cell-free mMIMO systems, the data processing tasks can be performed locally using the large-scale fading decoding (LSFD) method <cit.>. This approach is highly effective in relieving the computational load on CPUs. By integrating cell-free mMIMO and XL-MIMO, namely cell-free XL-MIMO, this prototype will be a forward-looking architecture that can accommodate full scenarios and hot-spot venues, as shown in Fig. 1.
To reduce the overall system computational complexity and energy consumption, low-complexity baseband signal processing algorithms are in demand. Multi-agent reinforcement learning (MARL) has been widely used for decision-making in large-scale network scenarios <cit.>, e.g., unmanned aerial vehicles (UAVs), swarm intelligence, and traffic scheduling. The algorithms are widely adopted to improve spectral efficiency (SE), enhance interference management in XL-MIMO systems, and to increase coverage, improve user fairness, and achieve distributed resource allocation in cell-free mMIMO systems. In multi-agent systems, interactions between intelligent agents and environments drive the achievement of goals. The MARL-based methods have been expanded to address the resource allocation challenges in MIMO systems <cit.>, e.g., power control, antenna selection (AS), and hybrid-field beamforming. In particular, RL algorithms become an almost indispensable tool for exploring complex dynamic scenarios, which can effectively reduce overall power consumption. Notable advances include the development of low-complexity RL-based power control algorithms that can be scaled to large-scale antennas and the exploration of hybrid analog-digital precoding schemes that can considerably enhance the energy efficiency (EE).
Motivated by the aforementioned works, we investigate the cell-free XL-MIMO systems with MARL techniques. The main contributions are summarized as follows:
∙ We introduce new characteristics from the near-field communication, basic system scheme, and application scenarios of cell-free XL-MIMO systems. More important, we comprehensively introduce the crucial challenges of power consumption and computational complexity.
∙ We investigate three technical frameworks, i.e., fully decentralized, fully centralized, and centralized training and decentralized execution (CTDE), as well as algorithm categories and applications of MARL methods in the existing literature, as shown in Fig. 2.
∙ To strive for the undiscovered performance, we focus on two critical methods, i.e., AS and power control, to reduce power consumption and improve SE with MARL methods. Numerical results are given to illustrate the ability to improve SE and EE. Finally, the article concludes by discussing open problems toward uncovering the potential of cell-free XL-MIMO systems.
§ OPPORTUNITIES AND CHALLENGES OF XL-MIMO COMMUNICATION SYSTEMS
In this section, we focus on the newly discovered EM wave transmission characteristics in the near-field domain. The unique near-field properties uncovered by XL-MIMO, such as the spherical wave model (SWM), the spatial non-stationary effect, and the effective DoF (EDoF), can be carefully exploited to enhance communication performance. In addition, the power consumption and computational complexity problems present us with new challenges.
§.§ New Opportunities
∙ Spherical Wave Model
The SWM is a mathematical model used to describe the behavior of EM waves in three-dimensional space <cit.>. An accurate SWM is essential for the design of XL-MIMO systems as it facilitates the efficient processing and manipulation of EM waves, thereby improving signal quality and enhancing data throughput. In previous research, channel models mainly focused on the basic assumption of Rayleigh or Rician fading channels based on the far-field assumption. However, once the communication distance is shorter than the Rayleigh distance, e.g., for an XL-MIMO panel with a diagonal of 10 m at 3 GHz, the boundary is up to 2 km, the communication domain focuses on the near-field rather than the far-field. Therefore, the existing channel models used to analyze the conventional MIMO systems are not suitable for XL-MIMO systems as the near-field communication dominates <cit.>.
Furthermore, the integration of massive antennas can make it difficult to obtain accurate channel state information in XL-MIMO systems. Regarding the near-field effects, the channel should be properly modelled to ensure accuracy in the near-field under the spherical wavefront assumption. SWM, based on electromagnetic information theory, has revolutionized the field of wireless communications, enabling high-speed data transmission, improved network coverage and better user experience <cit.>.
∙ Spatial Non-stationary Effect
In XL-MIMO systems, the spatial non-stationary effect arises because only partial antennas in BSs can receive spherical EM waves from specific UEs propagated by different scatters. This can lead to fluctuations in the channel gain, phase, and delay over time <cit.>. Similarly, each UE can only observe a subset of the antenna array, which is called the visibility region (VR), as shown in Fig. 1. As a result, the channel capacity and quality vary significantly, and the traditional channel estimation (CE) and equalization techniques may not be effective at mitigating the effects of non-stationary channels. Thus, effectively exploiting this peculiarity would be a tutorial for green communication systems, as a cost-effective way to reduce the computational complexity for crowded scenarios.
∙ Effective DoF
An important parameter characterizing the performance of XL-MIMO systems is the EDoF, referring to the number of significant electromagnetic modes. It represents the potential capacity of a MIMO system to spatially multiplex multiple data streams. The EDoF considers the effects of various factors, e.g., channel correlations, signal-to-noise ratios, and interference. However, increasing the number of antennas may not always improve the EDoF, as it may increase channel correlation, interference between different data streams, and energy consumption, all of which degrade the system performance <cit.>. Therefore, to achieve a high EDoF in practical XL-MIMO systems, appropriate antenna numbers and configurations should be chosen based on specific wireless channels and system requirements, e.g., the maximum EDoF is around 1600 for a 2.25 m × 2.25 m panel size at 0.1 m wavelength to satisfy the hot-spot scenarios.
§.§ New Challenges
∙ Power Consumption
Although the XL-MIMO technology can effectively improve the speed and reliability of signal transmission, implementing enormous sub-processing units in the XL-MIMO transceiver can result in high hardware cost and power consumption. Reducing the power consumption is essential to keep XL-MIMO energy-efficient, cost-effective, and sustainable while maintaining its high performance capabilities <cit.>. Existing solutions to this challenge mainly rely on traditional methods, e.g., heuristic fractional power control laws and deep learning-based power control methods. However, it is necessary to weigh system performance against power consumption and choose appropriate system configurations to strike this balance in practical scenarios.
∙ Computation Complexity
Complex computations significantly increase latency in a wireless communication system. Reducing the computational complexity of XL-MIMO lowers latency and relaxes the need for specialized processors or additional memory. In general, the antenna selection (AS) technique involves selecting an appropriate subset of antennas from the antenna array, which can help minimize the computational dimension of the signal processing and the power consumption, as well as improve the signal-to-noise ratio of the system <cit.>. In fact, there is a trade-off between performance and computational complexity in large-scale systems.
With these aspects, the XL-MIMO can be seen as an extended version of the conventional MIMO, which involves more than merely increasing the number of deployed antennas from, e.g., 64 to 256. From an environmentally-friendly perspective, by optimizing the transceiver power and adopting appropriate distributed processing algorithms, XL-MIMO systems can achieve superior performance while minimizing energy consumption and reducing the carbon footprint of wireless communication systems.
∙ User Mobility
In XL-MIMO systems, user mobility leads to time-varying channel conditions. As users move within the coverage area, channel characteristics, such as path loss, fading, and interference, change dynamically. Moreover, the movement of UEs can switch the propagation mode between the near-field and far-field, and thus, the channel estimation and codebook design in the hybrid-field should be re-examined. Additionally, user mobility necessitates dynamic adaptation of transmission strategies, efficient handover management, user scheduling, antenna selection, power control, and mobility prediction. By considering these factors, XL-MIMO systems can be effectively optimized according to user mobility, maintaining reliable connectivity, mitigating interference, and enhancing system performance.
§ SYSTEM ARCHITECTURE OF MULTI-AGENT CELL-FREE XL-MIMO
With the increasing computational dimension and time-varying configurations and parameters, traditional optimization methods do not scale well to the XL-MIMO, and better solutions are needed. In what follows, by integrating the advantages of cell-free mMIMO and MARL methods, we propose a novel cell-free XL-MIMO system with a MARL-based optimization scheme to further improve the performance of cell-free XL-MIMO systems.
§.§ Multi-agent Reinforcement Learning
MARL, a subfield of artificial intelligence, has been widely used in real-world scenarios involving the interaction of multiple agents with their environment. Extending RL from the single-agent domain to multi-agent environments arises from the need to develop intelligent systems that can interact with other intelligent agents in complex and dynamic settings. MARL combines the principles of RL, game theory, and multi-agent systems to enable agents to learn how to interact with one another and with the environment to achieve their goals <cit.>. The main idea behind MARL is to model the behavior of a group of agents that can cooperate, compete, and even negotiate. Different training schemes, e.g., fully decentralized, fully centralized, and centralized training and decentralized execution (CTDE), are considered promising paradigms for adapting to different environments. Furthermore, the existing MARL algorithms can be divided into three categories:
∙ Value decomposition (VD)
VD-based algorithms are usually built on value functions, e.g., the Deep Recurrent Q-Network, and decompose the global value function into local value functions for individual agents, so as to handle the interaction between multiple agents. This type of algorithm usually combines the actions and states of multiple agents into a global state, and then uses single-agent algorithms such as Q-learning to learn the local value functions.
∙ Actor-Critic (AC)
AC-based algorithms combine value functions and policy functions via two networks, the Actor and the Critic, where the Actor network learns the policy and the Critic network evaluates the value of the action and guides the Actor's policy update. Examples of AC-based methods include Asynchronous Advantage Actor-Critic (A3C) <cit.> and Multi-agent Deep Deterministic Policy Gradient (MADDPG) <cit.>. To illustrate, MADDPG follows the CTDE paradigm, where additional information is gathered by the Critic to facilitate the training process, while the Actors take actions based on their own local observations.
∙ Experience replay (ER)
ER-based algorithms typically use experience replay buffers to store past experiences and randomly sample them for training. This approach speeds up training by making more efficient use of data; experience replay is usually applied to single-agent algorithms, e.g., the Deep Policy Inference Q-Network. However, in multi-agent scenarios, the implementation of experience replay is more complicated, and the interactions among agents need to be considered.
As shown in Fig. 2, these three categories have been widely used in communication scenarios for resource allocation. While the centralized learning method is advantageous for global assessment with unified decision-making, distributed learning using MARL methods is more feasible for local processing, which is beneficial for real-time operation.
In multi-agent environments, agents' actions affect the state of the environment, and each agent must learn a policy that not only maximizes its own rewards but also takes into account the actions of other agents. The MADDPG algorithm extends the popular DDPG algorithm by introducing a centralized Critic network that can observe the joint actions of all agents and provide feedback to each agent's policy network, as shown in Fig. 3. In turn, the Actor networks learn to optimize their policies, taking into account the feedback from the Critic network and the observations of other agents.
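To make this centralized-Critic/decentralized-Actor split concrete, the following PyTorch sketch (our own minimal illustration; the layer sizes and class names are assumptions, not the article's implementation) shows how the Critic scores the joint observation-action pair while each Actor only sees its local observation:

```python
# A minimal MADDPG-style network sketch: decentralized Actors, one centralized
# Critic that observes the joint state-action of all agents (CTDE paradigm).
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, act_dim), nn.Tanh())

    def forward(self, obs):                    # local observation -> local action
        return self.net(obs)

class CentralCritic(nn.Module):
    def __init__(self, n_agents, obs_dim, act_dim, hidden=128):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(nn.Linear(joint_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, joint_obs, joint_acts):  # joint view -> Q-value
        return self.net(torch.cat([joint_obs, joint_acts], dim=-1))

# During training (CTDE), the Critic sees all agents' observations and actions;
# at execution time, each Actor acts on its own local observation alone.
```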
In the signal processing phase of the XL-MIMO, high-dimensional matrix operations and time-sensitive actions are critical to achieving the optimal system performance. Therefore, traditional data processing schemes no longer meet the requirements of cell-free XL-MIMO systems, and we have to concentrate on local or distributed signal processing to reduce the load on the fronthaul links. For example, we can apply MARL methods to approach the SE or EE maximum by defining a Markov decision process that includes states, actions, and rewards <cit.>. At each step, an agent observes the current state, takes an action on the environment, and the environment returns the next state together with a reward.
§.§ System Architecture of Multi-Agent Cell-Free XL-MIMO
In conventional massive MIMO systems, centralized processing methods lack the ability to parallelize operations. Furthermore, scaling up the dimensions of the array proves to be an arduous feat owing to the significant number of interconnections and the overwhelming burden placed on the central node. Therefore, various decentralized techniques have been proposed. Among them is the cell-free architecture, which aims at eliminating cell boundaries and focusing on user-centric communication <cit.>, providing more flexible transmission/reception for UEs. To adapt to the requirements of distributed architectures, we propose a modified embodiment of distributed XL-MIMO that exploits the advantages of cell-free mMIMO systems while simultaneously considering multi-agent systems.
As shown in Fig. 1, a distributed-processing XL-MIMO system architecture drawing on the merits of cell-free mMIMO is illustrated. The so-called large-scale fading decoding (LSFD) method can be used to detect the signals using maximum ratio combining or minimum mean squared error combining <cit.>. Each BS equipped with XL-MIMO panels performs local signal processing and channel estimation based on its available CSI. All processed signals are then transmitted to the CPU via fronthaul links. In cell-free XL-MIMO systems, there are multiple antennas at the transmitter and receiver sides, and a large number of UEs communicating simultaneously. The communication and resource allocation between these antennas and users can be optimized using MARL, a technique that allows agents to learn how to behave in an environment by interacting with it and receiving feedback in the form of rewards.
Using MARL, agents, i.e., UEs, BSs, or even individual antennas, can learn to allocate resources and optimize the transmission strategy. They interact with the system environment, using their CSI and locations as observations, and refine their decisions until the SE or EE objective is maximized. Besides, the MARL-based approach can adapt to dynamic changes in the environment, such as UE mobility and time-varying channels.
§ DIRECTIONS AND SOLUTIONS OF MULTI-AGENT CELL-FREE XL-MIMO SYSTEMS
Multi-antenna technology has been widely recognized as an effective means of improving SE through diversity and multiplexing gains. However, in pursuing further performance gains in cell-free XL-MIMO systems, the computational complexity increases rapidly with the number of antennas, and severe interference degrades the signal quality.
Having introduced the new opportunities, in this section, we provide a fresh perspective on solving these urgent challenges with MARL methods, e.g., AS and power control.
§.§ Challenge 1: Antenna Selection
1) MARL-empowering Antenna Selection
In cell-free XL-MIMO systems, it is necessary to explore effective AS techniques to reduce the number of active antennas, enhance performance, and minimize complexity, especially in energy-constrained environments <cit.>. Not all antennas serve uplink or downlink UEs simultaneously, making it possible to reduce the number of radio-frequency (RF) links and signal processing units to lower hardware cost and power consumption. As the number of antennas tends to be enormous, the circuit cost and computational complexity of conventional methods based on fully-digital receive arrays increase dramatically.
AS provides a low-complexity way to exploit the spatial-diversity benefits of multiple-antenna technology, with only a subset of antennas activated to serve different UEs, and can be applied at both transmitters and receivers in cell-free XL-MIMO systems. The basic idea of AS is to choose the optimal subset of antennas from the whole antenna array based on some selection criteria <cit.>, as shown in Fig. 4. In a cell-free XL-MIMO, AS can be performed either statically or dynamically. In static AS, a fixed set of antennas is selected that remains unchanged during transmission, while in dynamic AS, the optimal set of antennas is determined based on the channel conditions at each transmission.
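As one concrete instance of such a selection criterion, the sketch below (our own illustration; a greedy sum-capacity criterion is assumed, one of several possible criteria) selects antennas one at a time so as to maximize the resulting capacity:

```python
# A minimal greedy antenna-selection sketch (illustrative assumption; the
# capacity-based criterion is one choice among the criteria mentioned above).
import numpy as np

def greedy_as(H, n_sel, snr=1.0):
    """Greedily pick n_sel receive antennas (rows of H) maximizing capacity."""
    M, K = H.shape                       # M candidate antennas, K UE streams
    chosen = []
    for _ in range(n_sel):
        best, best_cap = None, -np.inf
        for m in set(range(M)) - set(chosen):
            Hs = H[chosen + [m], :]      # tentative subarray channel
            cap = np.log2(np.linalg.det(np.eye(K) + snr * Hs.conj().T @ Hs).real)
            if cap > best_cap:
                best, best_cap = m, cap
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
H = (rng.standard_normal((64, 4)) + 1j * rng.standard_normal((64, 4))) / np.sqrt(2)
print(greedy_as(H, n_sel=8))             # indices of the 8 selected antennas
```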
In Fig. 5, we plot boxplots of the sum SE and average EE under different AS strategies. Each UE antenna is regarded as an individual agent that selects BS antennas to maximize the SE. For a fair comparison, we take the case without AS as a benchmark. The traditional optimal channel selection based on LSF coefficients decreases the system performance by sacrificing DoFs. However, with the introduction of “multi-agent", each antenna can dynamically adjust the selected antennas. It is noteworthy that the MADDPG algorithm can effectively improve the SE of poor-quality UEs and achieve a nearly 26% EE improvement compared with the benchmark.
2) Future Research Directions
∙ Hardware Design: To overcome computationally complex bottlenecks, one promising solution is to partition the uniform planar array (UPA) or uniform linear array (ULA)-based XL-MIMO into disjoint subarrays with a partially-connected structure and individual processing units. Instead of connecting all the antennas, only a subset of antennas is interconnected, allowing antennas to be connected in a flexible and scalable manner.
∙ Subarray Selection: Apart from AS, subarray selection is worth investigating, in fixed or adjustable formats depending on whether the subarrays correspond to separate hardware entities or software-defined logical connections between different antenna elements, as shown in Fig. 4. The use of subarrays enables more efficient and distributed processing, enabling the system to handle larger and more complex data sets without compromising performance and accuracy.
∙ Non-stationary Perspective: One approach to achieving AS in non-stationary channels is to use multiple antennas in combination with channel estimation and equalization techniques, such as space-time coding and beamforming. These techniques can help mitigate the effects of non-stationary channels by using multiple antennas to create a more robust signal and adapting the transmit and receive strategies to the changing channel conditions.
§.§ Challenge 2: Power Control Design
1) Existing Power Control Method
Apart from the AS, designing an effective power allocation algorithm is another open challenge for reducing power consumption in cell-free XL-MIMO systems. With limited communication resources, dynamic power allocation should be optimized based on real-time channel information. Existing power control methods addressing inter-user interference focus on the following optimization objectives: max-min, max-product, and max-sum. Traditional power control methods, such as linear optimization techniques, have limitations in large-scale MIMO systems due to the increased complexity and static configuration. Though the non-convex problem can be tackled using supervised learning-based methods or centralized mechanisms, the optimal output data required a priori is challenging to obtain in large-scale networks.
With the benefits of massive antennas, the cell-free XL-MIMO poses new challenges for power optimization. Affected by the near-field propagation and spherical wavefront, different parts of the extremely large array encounter different signal strengths. Besides, certain antennas may have minimal impact on the overall system performance due to the non-stationarities and VRs. Consequently, activating power-intensive RF links for such antennas becomes burdensome and significantly reduces the total EE of the system. In this case, existing algorithms are not always able to reach the globally optimal solution, especially when dealing with high-dimensional matrix operations. To overcome these limitations, MARL algorithms have been applied to power control in cell-free XL-MIMO systems.
2) Proposed MARL-based Power Control Method
RL algorithms enable real-time optimization of power control decisions based on the current state of the system, including channel conditions and signal quality.
The basic idea behind using MARL algorithms for power control in large-scale MIMO systems is to model each BS or antenna as an individual agent and to optimize the joint behavior of all agents using RL techniques. This allows for a more flexible and data-driven power control solution compared to traditional methods.
To achieve power control in large-scale MIMO systems using MARL algorithms, the following steps can be taken (a minimal code skeleton follows the list):
∙ Select individual agent: Each antenna, BS, or UE can be modeled as an independent agent, with its unique state, action, and reward, depending on the uplink or downlink transmission. The state of the agent should represent the current channel conditions and interference, while the action should represent the transmit power of the antenna.
∙ Define reward function: The reward function should reflect the performance objective of the power control algorithm, such as maximizing SE or minimizing interference.
∙ Train MARL algorithm: MARL algorithms should be trained using the defined reward function and the modelled agents. The training process involves multiple iterations of the agents taking actions, observing the results, and updating their policies based on the reward received.
∙ Implement power control algorithm: Once the training process is complete, the power control algorithm can be implemented in large-scale MIMO systems. The algorithm will use the learned policies of the agents to determine the optimal transmit power of each antenna.
∙ Evaluate performance: The performance of power control algorithms should be evaluated in a realistic simulation or test environment to ensure their effectiveness in cell-free XL-MIMO systems.
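A minimal skeleton mapping these steps to code (our own illustration; the uplink sum-SE reward, the cross-gain matrix, and the random-search stand-in for a trained MARL policy are all assumptions) could look as follows:

```python
# Steps 1-2: one agent per UE; action = transmit power, reward = sum SE.
import numpy as np

class PowerControlEnv:
    def __init__(self, G, noise=1e-2):
        self.G, self.noise = G, noise    # G[k, j]: gain from UE j at UE k's link

    def step(self, p):                   # p: vector of per-UE transmit powers
        sig = np.diag(self.G) * p        # desired-signal powers
        intf = self.G @ p - sig          # interference from the other UEs
        sinr = sig / (self.noise + intf)
        reward = np.log2(1.0 + sinr).sum()           # sum-SE objective (step 2)
        state = np.log10(self.G @ p + self.noise)    # a simple local observation
        return state, reward

# Steps 3-5: a trained MARL policy (e.g., MADDPG) would replace this
# random-search baseline, which only illustrates the interaction loop.
rng = np.random.default_rng(1)
G = rng.exponential(1.0, (4, 4)) * (np.eye(4) * 9 + 1)   # strong direct links
env = PowerControlEnv(G)
best = max((env.step(rng.uniform(0, 1, 4)) for _ in range(1000)),
           key=lambda sr: sr[1])
print(f"best sum SE found: {best[1]:.2f} bit/s/Hz")
```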
The application of MARL algorithms for power control in large-scale MIMO systems is a growing area of research, and recent studies have demonstrated the potential of these algorithms for improving the performance and efficiency of MIMO systems <cit.>. Building on existing MARL methods, we successfully apply the MADDPG algorithm to solve power control problems for better performance. In addition, we introduce a double-layer power control architecture called D-MADDPG that is based on LSF coefficients between antennas. This architecture differs from the conventional single-layer architecture, which treats all antennas assigned to an agent as a whole, and it demonstrates a notable advantage in increasing the sum SE, as shown in Fig. 6.
3) Future Research Directions
∙ Precoding Design:
Hybrid precoding is promising for relieving the pressure of excessive power consumption by decomposing the high-dimensional fully-digital precoder into an analog beamformer and a digital precoder. Through effective precoding design, the number of RF links and the power costs can be significantly reduced. Additionally, advanced precoding designs can mitigate the beam split effect that severely degrades the achievable rate.
∙ Partial-Interaction Design:
Designing a distributed MARL algorithm with a partial-interaction architecture is a promising way to reduce the amount of network training and information exchange. Partial interaction allows agents to selectively choose appropriate peers for interaction based on distance, service relationship, and other factors, rather than interacting with all agents in cell-free XL-MIMO systems, which is more practical for scalable networks.
∙ Jointly Optimized Design:
The jointly optimized design of AS and power control is promising for enhancing the robustness of the system, eliminating the need for separate optimization. Power allocation can then be re-examined once appropriate antennas have been selected from XL-MIMO systems.
Based on the above discussion, designing AS and power control through real-time interactions with the MADDPG method achieves a higher performance gain in the near-field. Accordingly, such effective MARL methods can be extended to other resource allocation schemes.
§ FUTURE RESEARCH DIRECTIONS
§.§ Hybrid-Field Channel Estimation
To obtain accurate channel state information, CE in the near-field of cell-free XL-MIMO is a key challenge because the near-field angle-domain channel is not sparse. Faced with huge data streams, lightweight CE methods with reduced computational complexity, fast convergence, and exhaustive channel feature capture are essential to adapt to the near-field characteristics and non-stationary channels. Furthermore, accurate models based on spherical wavefronts, or even hybrid spherical- and planar-wavefronts, which capture more channel details, are essential to reduce the bit error rate under UE mobility.
§.§ Hybrid-Field Beamforming
First, for near-field beam training, the array response vector of near-field channels depends not only on the angle but also on the distance, resulting in a high-dimensional codebook set. Thus, a polar-domain codebook should be utilized instead of a discrete Fourier transform codebook to capture the information on the channel paths. Secondly, the near-field beam split effect occurs when the transmitting antennas are placed close to each other and the distance between the antennas is comparable to the wavelength of the signal being transmitted. In such cases, the transmitted signal may split into multiple near-field beams that interfere with each other. Thirdly, a hybrid-field joint design is needed to optimize the switch from far-field beamsteering to near-field beamfocusing, and vice versa.
§.§ RIS-aided Cell-free XL-MIMO
The reconfigurable intelligent surface (RIS)-aided communication paradigm has the advantage of improving performance via software-controlled reflective links. With the ability to dynamically reconfigure the electromagnetic environment, RIS can improve channel quality and overcome the limitations of the propagation environment. Additionally, RIS can reduce hardware complexity and cost by acting as a passive element for beamforming and interference mitigation. Since the main communication area lies in the non-negligible near-field region, the RIS codebook should be well-designed based on spherical-wave channel models.
§.§ Green Communications
To achieve green communications, next-generation communication systems impose sustainable, energy-efficient, and energy-aware requirements. Low-resolution devices are a trend for coping with the great expense of cell-free XL-MIMO systems. On the one hand, hardware impairments still complicate signal processing, especially when the dimension is gigantic; accordingly, effective compensation algorithm design is necessary to approach the optimum. On the other hand, simultaneous wireless information and power transfer technology should focus on elaborate near-field beamforming design to achieve higher performance.
§ CONCLUSION
In this article, the fundamental opportunities in the near-field communication of cell-free XL-MIMO systems and the open challenges have been discussed in terms of the SWM, the spatial non-stationary effect, the EDoF, power consumption, and computational complexity, respectively. In particular, we investigated the existing MARL categories and proposed a basic scheme for promising cell-free XL-MIMO systems using MARL methods. Then, we addressed two existing challenges, namely AS and power control, and successfully applied MADDPG algorithms to solve them. Finally, we pointed out the critical and promising future research directions, which are hybrid-field CE, hybrid-field beamforming, RIS-aided cell-free XL-MIMO architecture, and green communications.
|
http://arxiv.org/abs/2307.01260v1
|
20230703180002
|
Nontrivial worldline winding in non-Hermitian quantum systems
|
[
"Shi-Xin Hu",
"Yongxu Fu",
"Yi Zhang"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.mes-hall",
"cond-mat.stat-mech"
] |
International Center for Quantum Materials, School of Physics, Peking University, Beijing, 100871, China
yongxufu@pku.edu.cn
International Center for Quantum Materials, School of Physics, Peking University, Beijing, 100871, China
frankzhangyi@gmail.com
International Center for Quantum Materials, School of Physics, Peking University, Beijing, 100871, China
Amid the growing interest in non-Hermitian quantum systems, non-interacting models have received the most attention. Here, through the stochastic series expansion quantum Monte Carlo method, we investigate non-Hermitian physics in interacting quantum systems, e.g., various non-Hermitian quantum spin chains. While calculations yield consistent numerical results under open boundary conditions, non-Hermitian quantum systems under periodic boundary conditions exhibit an unusual concentration of imaginary-time worldlines over nontrivial winding and require enhanced ergodicity between winding-number sectors for proper convergence. Such nontrivial worldline winding is an emergent physical phenomenon that also exists in other non-Hermitian models and analytical approaches. Alongside the non-Hermitian skin effect and the point-gap spectroscopy, it largely extends the identification and analysis of non-Hermitian topological phenomena to quantum systems with interactions, finite temperatures, biorthogonal bases, and periodic boundary conditions in a novel and controlled fashion. Finally, we study the direct physical implications of such nontrivial worldline winding, which bring additional, potentially quasi-long-range contributions to the entanglement entropy.
Nontrivial worldline winding in non-Hermitian quantum systems
Yi Zhang
August 1, 2023
=============================================================
§ INTRODUCTION
Recent explorations of non-Hermitian quantum systems have broadened the scope of condensed matter physics <cit.>, and rapidly spread to the field of higher-order non-Hermitian systems <cit.> and exceptional points <cit.>. Originating from effective models for open systems <cit.>, dissipative optical systems <cit.>, and electric circuits <cit.>, etc., non-Hermitian quantum systems display a wide range of interesting physical properties open to theoretical studies and experimental realizations. For example, the non-Hermitian skin effect (NHSE) is a remarkable feature that predicts an extensive number of eigenstates localized at the edges under open boundary conditions (OBCs) as well as the breakdown of the Bloch band theory <cit.>.
Interestingly, the NHSE is also deeply associated with the nontrivial point-gap topology of non-Hermitian quantum systems, i.e., the winding number of the energy spectra under periodic boundary conditions (PBCs) around the reference energy in the complex plane controls the occurrence or absence of the NHSE <cit.> and reflects non-Hermitian bulk-boundary correspondence <cit.>. Simultaneously, the NHSE must be accompanied by a departure between the energy spectra under OBCs and PBCs <cit.>. However, the NHSE also comes with systematic limitations: it focuses on the right eigenstates of non-interacting fermion systems under OBCs and is thus inapplicable to finite temperatures, interactions, periodic boundary conditions, and expectation values under biorthogonal bases, which are common scenarios in condensed matter physics.
Beyond single-particle physics, research on non-Hermitian quantum systems with interactions has also been picking up pace lately and revealed many exotic many-body properties <cit.>. Here, we take a quantum many-body perspective on non-Hermitian physics by generalizing the stochastic series expansion quantum Monte Carlo (QMC-SSE) <cit.> method to certain non-Hermitian quantum systems without the sign problem. The QMC-SSE method stochastically samples imaginary-time operator sequences, i.e., worldlines in (D+1)-dimensional space-time, in the Taylor series expansion of the partition function; it is highly efficient and easily implementable for some quantum spin <cit.> and boson lattice models <cit.>, albeit Hermitian or not. We obtain consistent results on non-Hermitian quantum many-body systems under OBCs. Under PBCs, however, the worldlines are dominated by nontrivial winding-number sectors, which may obstruct convergence. To enhance ergodicity and facilitate convergence, we introduce a simple remedy for the QMC-SSE algorithm.
Importantly, like the NHSE, the nontrivial worldline winding may act as a defining character for non-Hermitian point-gap topological phenomena. In non-interacting cases, its conditions coincide with those for the NHSE, and the corresponding point-gap topological invariant is nonzero. However, unlike the NHSE <cit.>, nontrivial worldline winding is also applicable for interacting quantum systems and finite temperatures; indeed, its emergence exhibits explicit interaction dependence. Also, the related phenomena are reflected in physical observables corresponding to biorthogonal expectation values, including additional contributions to the entanglement entropy that resembles quasi-long-range entanglement. Further, instead of a binary “yes or no" answer, it offers a semi-quantitative measure of the extent of non-Hermitian topological physics at play. Finally, its PBC promptly complements the NHSE under OBCs.
We organize the rest of this paper as follows: In the next section, we briefly review the QMC-SSE technique (Sec. <ref>) before examining its generalization and applicability to non-Hermitian quantum systems (Sec. <ref>); then, in Sec. <ref>, we discuss the results of non-Hermitian quantum spin chains under OBCs as examples. In Sec. <ref>, we show the difficulties that QMC-SSE calculations encounter for the same non-Hermitian quantum spin chains under PBCs; to explain them, we discuss the nontrivial worldline winding in a non-Hermitian toy model in Sec. <ref>; correspondingly, we propose a simple algorithmic technique to enhance ergodicity in Sec. <ref>, which indeed restores the QMC-SSE credibility for non-Hermitian quantum models under PBCs. In Sec. <ref>, we give a systematic analysis of the nontrivial worldline winding, whose conditions are consistent with the point-gap topology, as well as finite-temperature and interacting scenarios beyond the current theoretical framework. Sec. <ref> is devoted to the physical implications of such nontrivial worldline winding, i.e., additional contributions to the entanglement entropy. We summarize and conclude the paper in Sec. <ref>, discussing potential generalizations such as general algorithms, higher dimensions, diverse boundary conditions, and other non-Hermitian topology.
§ QMC-SSE METHOD FOR NON-HERMITIAN QUANTUM SYSTEMS
§.§ Review of the QMC-SSE method
The QMC-SSE method is a powerful tool for calculating the physical quantities of quantum many-body systems. It is based upon the Taylor expansion of the Boltzmann factor in the partition function <cit.>:
Z=Tr{e^-βĤ}=∑_α∑_n=0^∞β^n/n !⟨α|(-Ĥ)^n| α⟩,
where β is the inverse temperature, and {|α⟩} is an orthogonal basis, e.g., |α⟩=|S_1^z, S_2^z, …, S_N^z⟩ for a spin system with N sites.
We can decompose the Hamiltonian Ĥ into:
Ĥ=-∑_a, bĤ_a,b,
where b labels different bonds (sites) within the lattice, and a denotes different types of operators. Consequently, we re-express the partition function as:
Z=∑_α∑_n=0^∞∑_S_nβ^n/n !⟨α| ∏_i=1^n Ĥ_a_i, b_i|α⟩,
where ∑_S_n sums over different sequences of operators:
S_n=[a_1, b_1],[a_2, b_2], …,[a_n, b_n].
In practice, we truncate the Taylor series at a sufficiently large M so that M>n for the highest power with meaningful contribution, achieved via thermalization before the actual sampling. Instead of varying n, it is more convenient to consider an operator sequence with a fixed length M, including n nontrivial operators and M-n identity operators Ĥ_0,0=Î <cit.>. Although the identity operators make no direct contribution, there are M!/[(M-n)!n!] equivalent ways of inserting them, a binomial factor we must divide out for the partition function:
Z=∑_α∑_S_Mβ^n(M-n) !/M !⟨α|∏_i=1^M Ĥ_a_i, b_i| α⟩,
where the operator sequence S_M includes n nontrivial operators and M-n identity operators.
It is convenient to define a propagated state <cit.>:
|α_p⟩∝∏_i=1^p Ĥ_a_i, b_i|α⟩,
which satisfies the no-branching condition, i.e., |α_p⟩ is always proportional to one of the states in the chosen basis. Depending on whether the operator Ĥ_a_p,b_p is diagonal or off-diagonal, |α_p⟩ = Ĥ_a_p,b_p |α_p-1⟩ may either equal |α_p-1⟩ or differ from |α_p-1⟩ on the b_p bond (site), e.g., due to spin flips. The identity (operator) is also diagonal. The finite matrix elements ⟨α_p|Ĥ_a_p,b_p|α_p-1⟩ of the operators, also called the vertices and illustrated in Fig. <ref>(a), keep track of the configuration differences, if any, between two neighboring time slices p-1 and p; see examples in Fig. <ref>(a).
We may sample the |α_p⟩ configurations in the (D+1)-dimensional space-time, uniquely determined by the initial state |α⟩ and the operator sequence S_M, with which we can trace |α_p⟩ along the imaginary-time direction slice-by-slice; see Fig. <ref>(b). Following Eq. <ref>, the Monte Carlo weight of each configuration is:
W(α,S_M) = β^n(M-n)!/M!⟨α|∏_p=1^M Ĥ_a_p, b_p| α⟩
= β^n(M-n) !/M !∏_p=1^M ⟨α_p|Ĥ_a_p, b_p| α_p-1⟩,
where |α_0⟩ = |α_M⟩ = |α⟩. As a result, we can evaluate the expectation value of operator  as:
⟨Â⟩=∑_α, S_M A(α, S_M) W(α, S_M)/∑_α, S_M W(α, S_M),
where A(α, S_M) is the matrix element of  given the configuration in |α⟩ and S_M. One important example is the expectation value ⟨Ĥ_a,b⟩, where H_a,b(α,S_M)=n_a,b/β, and n_a,b is the number of Ĥ_a,b in the operator sequence S_M.
There is one more essential requirement to make the QMC-SSE method work: the sampling probabilities W(α,S_M) in Eq. <ref> [W(α,S_M)/∑_α, S_M W(α, S_M) after normalization] need to be positive-semidefinite. Correspondingly, either the matrix elements ⟨α_p|Ĥ_a_p,b_p|α_p-1⟩ are positive-semidefinite, or the number of negative matrix elements in the operator sequence is always even, so the overall product is still positive-semidefinite <cit.>. If the negative probability cannot be removed by any means, we encounter the sign problem <cit.> and cannot carry out the calculations in a controlled way, especially for large systems.
As the imaginary time propagates and we keep track of the configuration changes, e.g., the spin-up positions intervened by off-diagonal operators in a quantum spin model, we obtain a series of trajectories called the worldlines; see Fig. <ref>(c). The worldlines offer another representation of the configurations and play a crucial role in efficient loop updates for the QMC-SSE method <cit.>.
Due to the trace in the partition function, |α_0⟩ = |α_M⟩, the worldlines in the QMC-SSE samples must obey periodic boundary conditions in the imaginary-time direction and form closed loops [Fig. <ref>(c)]. Meanwhile, the worldlines can wrap around the system, and the net number of times they wrap around is called the winding number w <cit.>; w can be a nonzero integer under PBCs, while it is always zero under OBCs. One of this work's key conclusions is the emergence of dominant nontrivial worldline winding in non-Hermitian quantum systems.
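As a concrete illustration (a sketch of our own; the (a, b) operator encoding is an assumption anticipating the spin-chain decomposition used below, with a = 2 a right hop and a = 3 a left hop), the net winding number can be read off from an SSE operator sequence by accumulating the ±1 spatial shifts of the off-diagonal operators on a periodic chain of N sites:

```python
# A minimal helper (our illustration) extracting the net worldline winding
# number from an SSE operator sequence on a periodic chain of N sites.
def winding_number(ops, N):
    """ops: list of (a, b) pairs; a = 2 hops an up-spin right, a = 3 left."""
    shift = sum(+1 if a == 2 else -1 for a, b in ops if a in (2, 3))
    assert shift % N == 0   # closed worldlines wrap the ring an integer number of times
    return shift // N

# Example: on N = 4 sites, four consecutive right hops wind a worldline once.
print(winding_number([(2, 0), (2, 1), (2, 2), (2, 3)], N=4))   # -> 1
```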
§.§ QMC-SSE applicability towards non-Hermitian quantum systems
For a non-Hermitian Hamiltonian Ĥ≠Ĥ^†, its right eigenstates |Ψ_i^R⟩ and left eigenstates |Ψ_i^L⟩ corresponding to eigenvalue E_i <cit.>:
Ĥ|Ψ_i^R⟩ = E_i|Ψ_i^R⟩,
Ĥ^†|Ψ_i^L⟩ = E_i^*|Ψ_i^L⟩,
are different in general, and obey the biorthogonal conditions ⟨Ψ_n^L|Ψ_m^R⟩=δ_mn, Î=∑_n |Ψ_n^R⟩⟨Ψ_n^L| instead.
Given a non-Hermitian Hamiltonian in a biorthogonal form <cit.>:
Ĥ=∑_i E_i|Ψ_i^R⟩⟨Ψ_i^L|,
we note its partition function:
Z = ∑_n e^-β E_n =∑_n e^-β E_n⟨Ψ_n^L|∑_α|α⟩⟨α|Ψ_n^R⟩
= ∑_α⟨α|e^-βĤ∑_n |Ψ_n^R⟩⟨Ψ_n^L|α⟩ =∑_α⟨α|e^-βĤ|α⟩,
retains the definition under an orthogonal basis {|α⟩} in Eq. <ref>. Therefore, the non-Hermiticity and biorthogonality of non-Hermitian quantum systems do not pose direct obstacles to the QMC-SSE method.
Similar to the Hermitian cases, we require the matrix elements ⟨α_p|Ĥ_a_p,b_p|α_p-1⟩ to be non-negative or the number of negative matrix elements to be even, so that the overall sampling probability remains positive-semidefinite (sign-problem-free) in QMC-SSE calculations. Such requirements mainly depend on the model parameters and operators rather than the Hermiticity. However, unlike the Hermitian cases, where the partition function is always real and positive, here, a positive-definite partition function is a requirement. For certain non-Hermitian systems <cit.>, e.g., Ĥ with 𝒫𝒯 symmetry, the spectrum is either real or in complex-conjugate pairs, and the corresponding partition function is guaranteed to be real <cit.>:
Z =Tr[∑_i e^-β E_i|Ψ_i^R⟩⟨Ψ_i^L|]
=∑_E_i ∈reale^-β E_i+∑_E_i ∈complex(e^-β E_i+e^-β E_i^*).
Therefore, although non-Hermitian quantum systems may possess potentially complex spectra, the QMC-SSE method is still viable as long as its matrix elements are sign-problem-free. Like in the Hermitian cases, the QMC-SSE method is an efficient and straightforward algorithm applicable to relatively large systems and even higher dimensions.
We can also use the QMC-SSE method to study the ground-state properties of non-Hermitian quantum many-body systems. Here, we define the ground state as the eigenstate (|Ψ_0^R⟩ and |Ψ_0^L⟩) with the lowest real part of its eigenenergy. For a sufficiently low temperature (large β, e.g., β=100 in units of the common model parameters):
⟨Â⟩_LR = Tr[Â∑_i e^-β E_i|Ψ_i^R⟩⟨Ψ_i^L|]/Z ≈⟨Ψ_0^L|Â|Ψ_0^R⟩.
§.§ Example: non-Hermitian quantum spin chains
Without loss of generality, let us consider the following non-Hermitian quantum spin chain of length N:
Ĥ = ∑_b J_z S^z_b S^z_b+1 + [1- (-1)^b Δ J] (S^x_b S^x_b+1+S^y_b S^y_b+1)
+ iδ (S^x_b S^y_b+1-S^y_b S^x_b+1)
= ∑_b J_z S^z_b S^z_b+1+1/2[1-(-1)^bΔ J-δ] S_b^+ S_b+1^-
+ 1/2[1-(-1)^bΔ J+δ] S_b^- S_b+1^+,
where J_z, Δ J, δ∈ℝ are model parameters: J_z is an Ising-type interaction, Δ J is a staggered XY interaction, and δ is responsible for the overall non-Hermiticity of the model. For OBCs, the summation over b runs from 1 to N-1, while for PBCs we sum over b∈ [1, N] and identify sites N+1 and 1.
To apply the QMC-SSE method, we decompose the Hamiltonian as:
Ĥ =-∑_b Ĥ_1,b- Ĥ_2, b- Ĥ_3, b,
Ĥ_1, b =C-J_z S_b^z S_b+1^z,
Ĥ_2, b =1/2[1-Δ J(-1)^b-δ] S_b^+ S_b+1^-,
Ĥ_3, b =1/2[1-Δ J(-1)^b+δ] S_b^- S_b+1^+,
where C=ϵ+J_z/4 is a constant that alters some matrix elements while keeping the model physics invariant. We also regard Ĥ_2,b and Ĥ_3,b as two separate off-diagonal operators. Their coefficients differ when δ≠ 0 and allow Ĥ to be non-Hermitian. Correspondingly, the partition function takes the following form:
Z=∑_α,S_nβ^n/n!(-1)^n_2+n_3⟨α|∏_p=1^n Ĥ_a_p,b_p|α⟩,
where n_2 and n_3 are the number of Ĥ_2,b and Ĥ_3,b operators in the operator sequence {[a_p, b_p]}, respectively. For a quantum spin chain with an even number N of sites, the total number of off-diagonal operators that shift a spin up by one lattice spacing, n_2+n_3, is always even irrespective of the configurations. Thus, we can safely drop the (-1)^n_2+n_3 factor. The nonzero matrix elements of the nontrivial operators are:
W_11 =⟨↑↑|Ĥ_1,b|↑↑⟩=ϵ,
W_12 =⟨↓↓|Ĥ_1,b|↓↓⟩=ϵ,
W_13 =⟨↑↓|Ĥ_1,b|↑↓⟩=ϵ+J_z/2,
W_14 =⟨↓↑|Ĥ_1,b|↓↑⟩=ϵ+J_z/2,
W_2 =⟨↑↓|Ĥ_2,b|↓↑⟩=1/2[1-Δ J(-1)^b-δ],
W_3 =⟨↓↑|Ĥ_3,b|↑↓⟩=1/2[1-Δ J(-1)^b+δ],
whose vertices are illustrated in Fig. <ref>(a). To meet the positive-semidefinite requirement on such vertices, we should make the model parameters satisfy 1-|Δ J|-|δ|≥0 and ϵ≥max(0,-J_z/2). The resulting model is sign-problem-free for the QMC-SSE method.
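For concreteness, a minimal sketch of the standard SSE diagonal-update sweep built on these vertex weights (our own illustration; the directed-loop off-diagonal update and all measurements are omitted, and the operator encoding is an assumption) might read:

```python
# A minimal sketch of the QMC-SSE diagonal update for the spin chain above
# (our illustration; loop updates and measurements are omitted).
import numpy as np

def diag_weight(s, b, eps, Jz):
    """Vertex weight of H_{1,b}: eps if parallel spins, eps + Jz/2 otherwise."""
    return eps if s[b] == s[(b + 1) % len(s)] else eps + Jz / 2

def diagonal_update(spins, ops, n, beta, eps, Jz, rng):
    """ops[p] = (0, -1) identity, (1, b) diagonal, (2, b)/(3, b) off-diagonal."""
    s, M = spins.copy(), len(ops)
    Nb = len(spins)                          # number of bonds under PBCs
    for p in range(M):
        a, b = ops[p]
        if a == 0:                           # try to insert a diagonal H_{1,b}
            b = int(rng.integers(Nb))
            if rng.random() < beta * Nb * diag_weight(s, b, eps, Jz) / (M - n):
                ops[p], n = (1, b), n + 1
        elif a == 1:                         # try to remove the diagonal H_{1,b}
            if rng.random() < (M - n + 1) / (beta * Nb * diag_weight(s, b, eps, Jz)):
                ops[p], n = (0, -1), n - 1
        else:                                # off-diagonal: propagate the state
            s[b], s[(b + 1) % Nb] = s[(b + 1) % Nb], s[b]
    return ops, n
```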
For benchmark, we first calculate the energies of non-Hermitian quantum spin chains under OBCs at a low temperature β=100 with the QMC-SSE method and compare with the ground-state energy via exact diagonalization (ED) for relatively small systems N=12. The ED results have also confirmed that the models host real spectra, which pose no problem for the QMC-SSE method. We summarize the results for various δ and Δ J with a finite J_z=0.5 in Fig. <ref>, showing satisfactory consistency and that QMC-SSE works well on non-Hermitian systems with interactions.
Interestingly, we can map the non-Hermitian quantum spin chain in Eq. <ref> to a non-Hermitian interacting fermion chain through the Jordan-Wigner transformation <cit.>:
S^z_i = f_i^†f_i-1/2,
S_i^+S_i+1^- = f_i^†f_i+1,
S_i^-S_i+1^+ = f_i+1^†f_i,
where a spin-up (spin-down) site in the spin model corresponds to an occupied (empty) site in the fermion model. Likewise, the worldlines trace the fermions and form closed loops in the fermion model. Therefore, the QMC-SSE method also generalizes straightforwardly to non-Hermitian interacting fermion systems.
In particular, the corresponding fermion chain is non-interacting when J_z=0. We note that the single-particle right eigenstates of non-Hermitian free-fermion chains may exhibit the NHSE under certain parameter settings, as shown in Fig. <ref>. However, the NHSE is absent from the quantum many-body perspective, as forbidden by the Pauli exclusion principle <cit.> and under the biorthogonal basis. Indeed, we evaluate the density distribution of a single-particle state by taking the difference between two many-body densities with n_f=N/2, N/2-1 fermions (S_z^tot=0, -1 under the quantum spin representation) with or without the target single-particle state, respectively. The results display no NHSE and are consistent with the density expectation values under the biorthogonal basis; see Fig. <ref>. Such consistency also indicates that our QMC-SSE calculations are readily applicable to relatively large systems and low temperatures.
§ NONTRIVIAL WORLDLINE WINDING IN NON-HERMITIAN QUANTUM SYSTEMS
§.§ QMC difficulty for non-Hermitian systems under PBC
Unlike the OBC cases, however, the QMC-SSE calculations for non-Hermitian models under PBCs sometimes run into obstacles and fail to converge to the benchmark values. For example, we evaluate the ground-state energies of various non-Hermitian quantum spin chains under PBCs, and the divergences between the QMC-SSE results and the ED benchmarks are clearly beyond statistical uncertainty; see Fig. <ref>. Such deviation generally increases with the non-Hermitian parameter δ and decreases with Δ J and J_z. We first give a brief answer on the origin of this difficulty; in later subsections, we provide more quantitative studies and discuss its possible resolution and physical consequences.
In Sec. <ref>, we discussed the concept of worldlines in (D+1)-dimensional space-time and their corresponding winding number w. Obviously, we have w=0 in the cases of OBCs; under PBCs, however, worldlines may possess nontrivial winding numbers w≠ 0, i.e., wrap around the system along a periodic spatial direction for a finite number of net times before returning to the initial spot as it evolves under imaginary time. Indeed, the problem in QMC-SSE calculations for non-Hermitian quantum systems under PBCs is associated with such global loops and winding numbers: (1) the dominant worldline sector in the partition function, thus in the QMC-SSE sampling, may shift to w_opt≠ 0, and (2) the transitions between different winding-number sectors are limited, breaking the ergodicity essential for convergence; see Table <ref> for example.
§.§ Nontrivial worldline winding from a non-Hermitian toy-model perspective
To illustrate such a nontrivial distribution of worldline winding numbers in non-Hermitian quantum systems, we consider the following non-Hermitian toy model on a 1D periodic system:
Ĥ=-∂^2/∂θ^2+α∂/∂θ,
whose eigenstates [The left and right eigenstates are identical in this case.] and eigenenergies are:
ψ_m(θ) =exp (i m θ), E_m = m^2+iα m,
where m∈ℤ is the angular momentum.
Following the imaginary-time path-integral formalism, we can derive the partition function as:
Z = ∫ D θ∏_j=1^N⟨θ_j+1|exp (-Δτ H)| θ_j⟩
= ∫ D θ∏_j=1^N {∑_m_lexp[i m_l(θ_j+1-θ_j) -Δτ(m_l^2+i α m_l)]}
= ∫ D θ∏_j=1^N{∑_n_lexp[-(θ_j+1-θ_j+2 π n_l-α Δτ)^2/4 Δτ]},
where Δτ=β / N is a small discrete step in the imaginary-time direction, labeled by j, with θ_N+1=θ_1. We have employed Poisson’s summation formula in the last line.
To tackle such a functional integral, we start from a typical path:
θ_j=[θ_1+2 π w(j-1)/N+δθ_j] mod 2 π,
where w is θ's winding number and δθ_j are local fluctuations that are essentially independent of w:
θ_j+1-θ_j = {2π w/N, if θ_j+2π w/N<2π; 2π w/N-2π, if θ_j+2π w/N>2π},
where θ goes across 2π from j to j+1 in the second case. Consequently, for Δτ→ 0, i.e., N→∞, the summation over n in Eq. <ref> is dominated by n=0 so that (θ_j+1-θ_j+2 π n-α Δτ)^2≈0, unless θ goes across 2π from j to j+1, where n=1 dominates. As a result, after keeping only the contributing terms, we obtain:
Z =f(β)∑_w=-∞^+∞∏_j=1^Nexp{-(2π w-αβ)^2/4Nβ}
=f(β)∑_w=-∞^+∞exp[-(2π w-αβ)^2/4β],
where f(β) is a function on the effects of δθ_j fluctuations independent of w.
The partition function in Eq. <ref> characterizes the weights and importance of different winding-number sectors, which contain imaginary-time path-integral worldlines that wrap around the [0,2π] interval a net w number of times. In an ideal QMC sampling process, the larger the weight of a particular winding number w, the more frequently we should sample the corresponding sector's configurations. For the Hermitian case with α=0, the partition function is dominated by the w=0 sector <cit.>. Especially, the weights for different winding numbers converge at low temperatures (large β); thus, calculations in a specific sector, e.g., w=0 for typical initializations, are as good as calculations that run through all sectors <cit.>. However, for the non-Hermitian cases α≠0, the worldline configurations with nontrivial winding w_opt=αβ/2π have the largest weight. Moreover, the location of the most probable sector moves farther away from w=0 as β increases. As a result, we need to ensure that all sectors, if not the sectors around w_opt=αβ/2π in particular, are appropriately represented in the sampling and calculations.
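A quick numerical check of the Gaussian sector weights derived above (our own snippet; the α and β values are examples) makes the shift of the dominant sector explicit:

```python
# Normalized weights of the winding sectors for the non-Hermitian toy model,
# following the Gaussian form exp[-(2*pi*w - alpha*beta)^2 / (4*beta)].
import numpy as np

alpha, beta = 0.5, 100.0
w = np.arange(-20, 41)
weight = np.exp(-(2 * np.pi * w - alpha * beta)**2 / (4 * beta))
weight /= weight.sum()

print("dominant sector w_opt:", w[weight.argmax()])    # ~ alpha*beta/(2*pi) = 8
print("relative weight of w=0:", weight[w == 0][0] / weight.max())  # ~ 2e-3
```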
However, such worldline winding numbers are essentially topological quantities, and updates that alter the winding are scarce and rarely accepted in QMC-SSE calculations. Consequently, we may encounter a problem with ergodicity: the configurations are stuck near the initial w, far away from w_opt, leading to incomplete sampling and, therefore, inaccurate evaluations, as we demonstrated in Sec. <ref> and Table <ref>. For more ergodic QMC-SSE calculations, we may introduce a remedy by enhancing the transition rates between different worldline winding-number sectors, which we discuss next.
§.§ Enhanced ergodicity between winding-number sectors
To enhance the ergodicity between different winding-number sectors, we dig into the proposed updates in the directed loop update algorithm. The vertices are at the center of the proposed updates to the worldlines. There are four possible legs for the exit given an entrance leg into a vertex; if the exit and entrance legs are identical, the proposed loop experiences a bounce process <cit.>. Intuitively, we wish to minimize or at least reduce the bounce probability to allow the loop to propagate and proliferate and end up with more global loops so that they may alter the winding number more efficiently. However, we do not have many degrees of freedom for maneuvering: parameters like N, β, J_z, δ, and Δ J are all physically relevant. Fortunately, there are model-independent parameters, such as ϵ, which we can tune to adjust the bounce probability and enhance ergodicity without causing changes in physics.
Without loss of generality, we consider vertex W_3 with the entrance leg in the lower left as an example, whose probability of updated vertex with corresponding exit leg is:
P(W_3 → W_j)=W_j/W_11+W_13+W_3,
where W_11, W_13, and W_3 are the (weights of) vertices associated with the exit legs in the upper right, the upper left, and the lower left, respectively (the exit leg in the lower right has no corresponding vertex and thus zero matrix element); see Fig. <ref>a and Eq. <ref>. In particular, the probability of the bounce process, where the exit and entrance legs are identical, and the vertex remains unchanged, is:
P_bounce=P(W_3 → W_3)=1-(-1)^b Δ J+δ/[1-(-1)^b Δ J+δ]+4ϵ+J_z.
Therefore, we can reduce the bounce probability by increasing ϵ.
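As a quick sanity check of this single-vertex formula (our own snippet; note that it evaluates only the W_3 vertex above, not the sample-averaged estimate derived next), the bounce probability indeed decreases monotonically with ϵ:

```python
# Single-vertex bounce probability of the W_3 vertex as a function of epsilon
# (our illustration of the trend; the sample-averaged value differs).
def p_bounce(eps, Jz, dJ, delta, b=0):
    w3 = 1 - (-1)**b * dJ + delta
    return w3 / (w3 + 4 * eps + Jz)

for eps in (0.0, 0.25, 0.5, 1.0):
    print(f"eps = {eps:.2f}: P_bounce = {p_bounce(eps, Jz=0.5, dJ=0.0, delta=0.5):.3f}")
# eps = 0.00 -> 0.750, eps = 0.50 -> 0.375: larger eps suppresses bounces
```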
More comprehensively, we may estimate the average bounce probability semi-quantitatively as follows. As we discussed in Sec. <ref>, we can relate the operator expectation values ⟨Ĥ_a,b⟩=⟨ n_a,b⟩/β with their (average) instances ⟨ n_a,b⟩ appearing in the operator sequence S_M. Therefore, we have:
⟨ n_1,b⟩/β= ⟨Ĥ_1,b⟩ =ϵ+J_z/4-J_z⟨ S_b^zS_b+1^z⟩,
⟨ n_2,b⟩/β= ⟨Ĥ_2,b⟩ =1/2(1-δ-Δ J(-1)^b)⟨ S_b^+S_b+1^-⟩,
⟨ n_3,b⟩/β= ⟨Ĥ_3,b⟩ =1/2(1+δ-Δ J(-1)^b)⟨ S_b^-S_b+1^+⟩.
Further, we can divide ⟨ n_1,b⟩ of the diagonal operator Ĥ_1,b into that of its four vertices: ⟨ n_11,b⟩ / ϵ = ⟨ n_12,b⟩ /ϵ = ⟨ n_13,b⟩ / (ϵ+J_z/2)= ⟨ n_14,b⟩ / (ϵ + J_z/2) following Eq. <ref>. As a result, we can roughly establish the ratio of each type of vertices in QMC-SSE samples from the correlation functions:
⟨ n_11,b⟩/β = ⟨ n_12,b⟩/β = 2ϵ(ϵ+J_z/4-J_z⟨ S_b^zS_b+1^z⟩)/4ϵ+J_z,
⟨ n_13,b⟩/β = ⟨ n_14,b⟩/β =(2ϵ+J_z ) (ϵ+J_z/4-J_z⟨ S_b^zS_b+1^z⟩)/4ϵ+J_z.
Then, we can estimate the bounce probability:
P_bounce(i)=∑_j⟨ n_j,b⟩/∑_j'⟨ n_j',b⟩ P(W_j → W_j),
by averaging over the vertices with respect to their weights in Eqs. <ref> and <ref>.
We summarize the bounce probability and the ratio R_Δ w≠ 0 of worldline-winding-altering loops in directed loop updates among the QMC-SSE calculations for varying ϵ in Fig. <ref>. The semi-quantitative bounce probability in Eq. <ref> also presents a reasonable estimation. For ϵ=0, the bounce probability is nearly 0.9, and R_Δ w≠ 0 is nearly zero, hampering effective transitions between different worldline winding-number sectors; in comparison, the bounce probability drops below 0.4 for ϵ∈ [0.5, 1.0], and subsequently, R_Δ w≠ 0 approaches nearly 10%, providing enhanced ergodicity in QMC-SSE sampling.
Indeed, introducing a finite ϵ=0.5 enhances ergodicity under PBCs and yields consistent results in the QMC-SSE calculations. As summarized in Fig. <ref>, the QMC-SSE results on non-Hermitian quantum spin chains show satisfactory consistency with the ED benchmarks upon setting ϵ=0.5, with remarkable improvements over and contrast with the ϵ=0 results plagued by nontrivial worldline winding. Such characteristic disparities in the efficiency of changing winding numbers are also apparent in Table <ref>. The remedy also works on relatively large non-Hermitian quantum systems, where, with global and topological distinctions, the barrier between different worldline winding sectors and the ergodicity issue are intuitively more severe. For instance, we compare the QMC-SSE results for J_z=0 under PBCs with the non-Hermitian free-fermion models upon the Jordan-Wigner transformation and obtain consistent results on relatively large systems (N=64); see Fig. <ref>, suggesting that the nontrivial worldline winding no longer poses an apparent obstacle. We note that such a remedy is not unique or exclusive, as there exist other ways to enhance ergodicity between different winding-number sectors, such as periodically proposing updates that insert specific vertices leading to a new worldline with ± 1 winding number.
§.§ Investigation on model conditions for nontrivial worldline winding
Previously in Sec. <ref>, we have shown in the toy model that, unlike Hermitian models, the most dominant worldline winding is no longer necessarily the w=0 sector in non-Hermitian quantum systems. Such nontrivial winding numbers may cause difficulty in ergodicity and deviations in expectation values (Sec. <ref>). Here, through numerical studies of various non-Hermitian quantum spin chains with PBCs and enhanced ergodicity (Sec. <ref>), we keep track of the worldline winding numbers w during our QMC-SSE calculations and analyze the systematic conditions of such nontrivial worldline winding. Importantly, the conditions of nontrivial worldline winding coincide with nontrivial point-gap topology, which directs the NHSE under OBCs.
We summarize the results for varying δ and fixed J_z=Δ J=0 and β=100 in Fig. <ref>. The resulting models are equivalent to free-fermion models with vanishing line gaps following the Jordan-Wigner transformation. Like the gapless non-Hermitian toy model in Eq. <ref>, the winding-number distributions, normalized as a ratio r_w=N_w/∑_wN_w, display a Gaussian-shaped pattern; notably, the fitted peak of the distribution sits at w_opt=0 for δ=0 and gradually shifts to the right (w_opt>0) as the non-Hermitian parameter δ (the amplitude difference between the right hopping Ĥ_2,b and the left hopping Ĥ_3,b) increases. Such a linear relation is comparable to Eq. <ref> of the gapless toy model.
Then, we study the impact of different values of Δ J and summarize the evolution of the dominant worldline winding number w_opt, i.e., the Gaussian-fit peak location in the w distribution, in Fig. <ref>. Interestingly, we observe w_opt≠ 0 if and only if |δ| > |Δ J|. This parameter space coincides with the nontrivial point-gap topology, where the non-Hermitian free-fermion chains display the NHSE under OBCs. On the contrary, when |δ| < |Δ J|, we have w_opt=0 despite a nonzero non-Hermitian parameter δ. Correspondingly, the QMC-SSE method especially requires enhanced ergodicity for larger δ, in particular when δ surpasses Δ J, consistent with the behaviors observed in Fig. <ref>.
It is also interesting to examine the winding-number distributions for various system sizes N, which we illustrate in Fig. <ref>. While the width of the distribution depends on N, the dominant winding number w_opt hardly changes. In large systems, such nontrivial winding consistently introduces global worldlines that traverse the systems and connect regions far apart, potentially giving rise to long-range quantum entanglement. On the other hand, as we carefully inspect w_opt versus δ for a finite Δ J=0.3, the contrast of zero versus finite w_opt across the transition at δ_C=Δ J becomes clearer for larger systems, making w_opt a better signature for nontrivial point-gap topology, as discussed in Fig. <ref>.
Unlike the NHSE, which works only for single-particle eigenstates at zero temperature and without interactions, the nontrivial worldline winding is a quantum phenomenon that straightforwardly generalizes to finite temperatures and interacting systems. For example, we analyze the evolution of worldline winding-number distributions in QMC-SSE samples of quantum spin chains for increasing β. The resulting Gaussian-shaped distributions in Fig. <ref> display broadening widths and increasing peak winding number w_opt for larger β. Especially, w_opt increases linearly with β for models without the line gap Δ J. These features are consistent with the toy-model results in Eq. <ref> and also indicate that, unlike Hermitian quantum systems, we can neither focus solely on the w=0 winding-number sector commonly used for QMC-SSE initialization nor equate different winding-number sectors in the low-temperature limit β→∞, as we discussed in Sec. <ref>.
Analysis based upon worldline winding also applies to interacting fermion systems, equivalent to quantum spin chains with nonzero J_z after the Jordan-Wigner transformation in Eq. <ref>. For instance, we study the worldline winding-number distributions in the QMC-SSE calculations for various δ and J_z and summarize the dominant w_opt in Fig. <ref>. In addition to the non-Hermitian parameter δ, the interaction parameter J_z also visibly influences the non-Hermitian topological physics. The w_opt results are also consistent with the (finite-size extrapolations of) many-body spectrum-flow-based identifications <cit.>, plotted in Fig. <ref> as the dotted lines. However, such evaluations commonly require the full spectra at exponential computational costs and are thus applicable only to smaller interacting quantum systems; see Appendix B for detailed results. We also note that identifying and analyzing such non-Hermitian quantum many-body systems is beyond the NHSE, which requires non-interacting eigenstates under OBCs in the right single-particle eigenstate basis. Consequently, the nontrivial worldline winding offers a novel and efficient characterization of non-Hermitian topology in interacting quantum systems.
The nontrivial worldline winding number also extends straightforwardly to non-Hermitian quantum systems without translation symmetries. Finally, rather than offering a binary “yes or no" verdict on the point-gap topology, the finite values of w_opt, if any, offer a more quantitative measure of the extent of non-Hermitian topology at play. In summary, nontrivial worldline winding offers a broader range of applicability for studying and identifying nontrivial point-gap topology in non-Hermitian quantum systems.
§ CONSEQUENCE OF NONTRIVIAL WORLDLINE WINDING: NON-HERMITIAN ENTANGLEMENT ENTROPY
§.§ Entanglement entropy in non-Hermitian quantum systems
The nontrivial worldline winding also has immediate physical consequences. For example, such nontrivial winding guarantees worldlines' global presence and inevitable passages across boundaries [Fig. <ref>(c)], introducing communications and thus extra quantum entanglement between the regions, even those far apart. We expect these effects to manifest in the real-space entanglement entropy of non-Hermitian quantum systems. In particular, we focus on the Renyi (entanglement) entropy <cit.>:
S_A^(n)=1/1-nln(Trρ̂_A^n ),
where ρ̂_A is the (reduced) density matrix of the subsystem A. We set n=2 since the second Renyi entropy S^(2)_A can be evaluated relatively straightforwardly for both free-fermion models <cit.> and interacting systems via the QMC-SSE method <cit.>.
However, the definitions of entanglement entropy in non-Hermitian quantum systems remain ambiguous. A simple generalization from the Hermitian case suggests S^(2)_A=-ln( Trρ̂_A^2 ) <cit.>; however, ρ̂_A is no longer Hermitian, nor is it positive-semidefinite or even real-valued, and the resulting S^(2)_A is complex-valued, making its meaning as an entanglement measure obscure. On the other hand, the formalism S̄^(2)_A=-ln( Tr |ρ̂_A|^2 )=-ln[ Tr( ρ̂^†_A ρ̂_A ) ] guarantees a positive-semidefinite entropy, yet the absolute value is a drastic, non-analytic operation. For clarity, we will present results following both definitions, and for each definition, examine the difference Δ S:
Δ S=S^(2)_A,PBC-S^(2)_A,OBC,
between S^(2)_A,PBC under PBCs and S^(2)_A,OBC under OBCs with trivial winding w=0 (and the analogous ΔS̄ for the second definition), and locate the non-Hermitian entanglement entropy contributions accompanying nontrivial worldline winding and, in turn, nontrivial point-gap topology.
§.§ Non-Hermitian free-fermion systems
Here, we focus on non-Hermitian 1D free-fermion models, equivalent to non-Hermitian quantum spin chains with J_z=0. The Hamiltonians take a quadratic form:
Ĥ= ∑_i,j c^†_i ℋ_ij c_j = ∑_n ϵ_n |ψ_n^R⟩⟨ψ_n^L|,
where c_i (c_i^†) is the fermion annihilation (creation) operator at site i, and |ψ_n^R⟩ and ⟨ψ_n^L| are the single-particle biorthogonal basis obtainable from ℋ's decomposition. For free fermions, we can obtain the single-particle (reduced) density operator ρ̂_A for region A from the correlation matrix C_ij= ⟨ c^†_i c_j ⟩ <cit.>, where i, j∈ A, and subsequently, the second Renyi entropy:
S_A^(2) = -∑_nln[ξ^2_n+(1-ξ_n)^2],
S̄_A^(2) = -∑_nln[|ξ_n|^2+(1-|ξ_n|)^2],
where ξ_n are the eigenvalues of the density operator (correlation matrix). To suppress the potential impacts of the edge physics <cit.>, we define A as the central region between the (N/4)^th site and (3N/4+1)^th site on a chain of length N.
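As a concrete illustration of this recipe, the following minimal sketch (our own, not the authors' code) evaluates both S_A^(2) and S̄_A^(2) for the central region A under PBCs; the Hatano-Nelson-type hopping matrix, half filling by the real part of the energy, and the biorthogonal convention for the correlation matrix are assumptions made purely for illustration.

    import numpy as np
    from scipy.linalg import eig

    N, delta = 32, 0.3
    H = np.zeros((N, N), dtype=complex)
    for i in range(N):                        # asymmetric hopping on a PBC ring
        H[i, (i + 1) % N] = (1 + delta) / 2
        H[(i + 1) % N, i] = (1 - delta) / 2

    w, vl, vr = eig(H, left=True, right=True)
    vl /= np.einsum('in,in->n', vl.conj(), vr).conj()  # <psi_n^L|psi_n^R> = 1
    occ = np.argsort(w.real)[: N // 2]                 # fill the lowest Re(eps)

    # one common biorthogonal convention: C_ij = sum_occ psi_n^L(i)* psi_n^R(j)
    C = vl[:, occ].conj() @ vr[:, occ].T
    A = np.arange(N // 4, 3 * N // 4)                  # central region A
    xi = np.linalg.eigvals(C[np.ix_(A, A)])

    S2 = -np.sum(np.log(xi ** 2 + (1 - xi) ** 2))      # complex-valued definition
    S2_bar = -np.sum(np.log(np.abs(xi) ** 2 + (1 - np.abs(xi)) ** 2))
    print(S2, S2_bar)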
The resulting differences between the entanglement entropy under PBCs and OBCs are summarized in Fig. <ref>. For various Δ J, we observe consistently vanishing differences Δ S and ΔS̄ for -Δ J<δ<Δ J, where the worldline winding and the point-gap topology of the non-Hermitian quantum system are trivial. Interestingly, positive values of Δ S and ΔS̄ emerge for |δ|>Δ J, indicating additional entanglement contributions from nontrivial worldlines with nonzero w_opt. Drastic changes in Δ S and ΔS̄ occur in between, which may help locate the topological transitions. Similar studies of non-Hermitian entanglement entropy also apply to quantum systems with interactions and at finite temperatures.
We also analyze the finite-size scaling of the Renyi entropy difference Δ S in Fig. <ref>. For non-Hermitian quantum systems without nontrivial worldline winding and point-gap topology, Δ S tends to zero in the thermodynamic limit, as expected. However, in the presence of nontrivial worldline winding, Δ S attains a non-zero value with a rising tendency in the N→∞ limit, consistent with, and further supporting, the conclusions of Fig. <ref>. Notably, this entanglement entropy from nontrivial worldline winding follows a logarithmic scaling with respect to the system size N, resembling quasi-long-range entanglement under the area law with a logarithmic correction in gapless quantum systems <cit.>. This is consistent with our intuition, as the global worldlines persist in large systems (Fig. <ref>) and introduce additional entanglement between regions, even far-apart ones.
§ DISCUSSION
In summary, we have uncovered the emergent dominance of nontrivial worldline winding in non-Hermitian quantum systems under PBCs, which persists in the presence of interactions, at finite temperatures, and across various system sizes. This emergence is in line with, and thus offers a broader and more quantitative measure of, the non-Hermitian point-gap topology. Unlike the NHSE, which is associated with the right eigenstates, the nontrivial worldline winding exhibits its physical effects in biorthogonal observables, including additional non-Hermitian entanglement entropy. We note that the correspondences between nontrivial worldline winding, point-gap topology, the NHSE, and the quasi-long-range entanglement entropy contributions, though intuitively sensible given their simultaneously global natures, remain a hypothesis, established either numerically or upon toy models. An interesting future direction is to derive these connections more rigorously.
In QMC calculations for non-Hermitian quantum systems, such nontrivial worldline winding, together with the barrier between different winding-number sectors, may hamper ergodicity and proper convergence. For non-Hermitian quantum spin chains, we propose a simple algorithmic remedy to enhance ergodicity between different winding-number sectors. We note that the nontrivial worldline winding is a general phenomenon with clear-cut physical significance, undoubtedly beyond the QMC-SSE formalism, even though we have discussed the worldlines in the QMC-SSE method and used QMC-SSE results for illustration. Indeed, we have showcased and analyzed the presence of nontrivial worldline winding in the non-Hermitian toy model with the path-integral approach and in the non-Hermitian free-fermion models with the exact solutions under the single-particle bases. It will be interesting to investigate analogous worldline-winding physics in other QMC and non-QMC algorithms.
Finally, we have focused on non-Hermitian quantum systems in 1D and on the simplest point-gap topology. We note the fascinating possibilities in higher dimensions, with rich categories of non-Hermitian topological phenomena at the research frontier and diverse boundary conditions for worldline windings and braidings. Recently, the NHSE in higher dimensions and its interplay with boundary conditions have attracted much attention <cit.>. However, numerical difficulties with non-Hermitian Hamiltonians, e.g., boundary sensitivity and instability <cit.>, may hamper studies and progress, especially in higher dimensions. Nontrivial worldline winding offers a novel, physically intuitive perspective and better numerical stability under PBCs on such problems. The efficiency and compatibility of the QMC-SSE method in higher dimensions also offer practical research facilities for non-Hermitian quantum systems with interactions and finite temperatures.
Acknowledgment: We acknowledge helpful discussions with Lei Wang and Kun Ding. We also acknowledge support from the National Key R&D Program of China (No.2022YFA1403700) and the National Natural Science Foundation of China (No.12174008 & No.92270102).
|
http://arxiv.org/abs/2307.01063v1
|
20230703144104
|
Synthesising Full-Information Protocols
|
[
"Dietmar Berwanger",
"Laurent Doyen",
"Thomas Soullard"
] |
cs.LO
|
[
"cs.LO"
] |
Synthesising Full-Information Protocols
Dietmar Berwanger, Laurent Doyen, Thomas Soullard
August 1, 2023
================================================================
We lay out a model of games with imperfect information that features
explicit communication actions, by which the
entire observation history of a player is revealed to another player.
Such full-information
protocols are common in asynchronous distributed systems; here, we
consider a synchronous setting with a single active player who may
communicate with multiple passive observers in an indeterminate
environment. We present a procedure for solving the basic
strategy-synthesis problem under regular winning conditions.
We present our solution in an abstract framework of games with
imperfect information and we split the proof in two conceptual
parts: (i) a generic reduction schema from imperfect-information
to perfect-information games, and (ii) a specific construction
for full-information protocols that satisfies the requirement of the
reduction schema.
§ INTRODUCTION
The aim of reactive synthesis is to construct an input-output sequential system
that satisfies a given specification.
Reactive synthesis can be viewed as a repeated game between a player and its environment
over infinitely many rounds, where in each round the environment chooses an input,
and the player chooses an output.
The outcome is an infinite run of interleaved inputs and outputs, and the objective is given by the specification
that defines the set of winning runs for the player.
Synthesis then reduces to deciding whether there exists a winning strategy
(that ensures all outcomes satisfy the specification), and if so to constructing
one <cit.>. The synthesised winning strategy defines
an implementation of a reactive system that is correct by construction.
For distributed systems, consisting of several components, it is
natural to consider games with multiple players, where each player
is an input-output component to be synthesized.
The game is synchronous if the components share a global clock that
triggers the rounds of the game, and asynchronous if the local clocks
of the components are independent.
Unfortunately, synthesis is undecidable for both synchronous <cit.> and asynchronous <cit.>
multi-player games, even for simple objectives such as reachability.
Decidability is retained by restricting the communication architecture,
for example to games without communication fork (which ensures a linear
order of the information available to the players) in the synchronous setting <cit.>
and to games with decomposable communication graph <cit.>, such as acyclic <cit.>,
series-parallel <cit.>, or connectedly communicating <cit.> architecture in the asynchronous
setting.
For synchronous synthesis, decidability holds in a more general setting
where the architecture is not fixed and may change in the course
of the game, as long as
the information available to the players is hierarchically (i.e., linearly) ordered,
even if the order may change along a run <cit.>.
If moreover communication is extended from bounded-size messages to the entire causal
history of the sender, synthesis remains decidable for systems with two components <cit.>.
Communicating the causal view of a process is the central mechanism in the
asynchronous setting of Zielonka automata <cit.>, and more generally in
full-information protocols, where communication entails full disclosure
of all available information of the sender, which in particular requires unbounded message size.
In this context, we consider a model of full-information protocols in a synchronous setting
where the communication architecture is not fixed and evolves along the run.
Communication is thus not limited to a fixed alphabet as it may encompass the entire
causal view of the sender.
A crucial feature of our model is that the specification
is a language over
the alphabet consisting of the input of all components and the output
of a single component, called the active player.
Specifications that do not constrain the communication between
components are sometimes called input-output or external <cit.>.
Intuitively, the other players, called the observers, only decide how much information
they transmit when communicating with each other or with the active player;
this communication is not itself part of the system's output.
Since the output of the observers is not constrained
by the specification, we may assume that the largest possible amount
of information is transmitted by the observers, namely their entire causal view.
This reduces the synthesis problem
in full-information protocols to deciding the existence (and to constructing) a
winning strategy just for the active player, which we show is decidable.
Consider the following paradigmatic scenario in a full-information
protocol with two observers.
In the first k rounds of the game,
the environment sends an arbitrary sequence
u ∈{0,1}^* of bits to the first observer, and another sequence
v ∈{0,1}^* to the second observer, such that u and v
are of the same length k. The active player receives
the sequence {}^k followed by a signal ♯ in the (k+1)-th round
that enables two communication links, from the two observers
to the active player. At this point, the specification requires
that the main player chooses the action eq if u = v,
and the action neq otherwise. As communication enables
the entire causal view of the sender to be transmitted, the active player
can compare the two sequences u and v, hence there exists a
winning strategy in this game, no matter the value of k (chosen by the environment).
On the other hand, the size of the messages sent by the observers
to the active player is not bounded, as the number of bits in the sequences u and v
can be arbitrarily large.
We argue that in general the main obstacle to the decidability of reactive synthesis is the power of
imperfect information in the corresponding games. Information is acquired directly
by receiving observations from the environment (such as reading inputs, or
fetching sensor measurements), or indirectly through communication with other players.
On the basis of this information, a decision of which action (or output) to play
is made.
The available information is partial, which can be described
using an equivalence relation ∼ over the set Γ^* of histories (or finite runs),
where two equivalent histories τ∼τ' are said to be indistinguishable (from each other),
and the player considers that τ' is possible when the actual history is τ.
Strategies map all histories in an equivalence class [τ]_∼ = {τ' |τ' ∼τ} (τ∈Γ^*), called an information set,
to the same action. Information sets naturally form an infinite tree with root [ϵ]_∼ = {ϵ}
where the children of an information set [τ]_∼
are the sets [τ'c]_∼ such that τ' ∼τ and c ∈Γ.
The special case of full-information protocols with no observer
corresponds to partial-observation games à la Reif, where the information
of the active player is just a projection (obtained by a transduction) of the actual history,
called an observation sequence.
Partial-observation games are known to
be decidable for ω-regular objectives <cit.>.
The argument for decidability can be obtained by considering the perfect-information game
played on the information tree, where the equivalence relation ∼
relates two histories if they are mapped to the same observation sequence.
The branching degree of the information tree is bounded by the number
of observations, which is finite. The objective of the partial-observation game, defined over
infinite runs in Γ^ω, can be transferred to the information
tree by defining the set of winning infinite branches in the tree (i.e., infinite sequences of
equivalence classes, or infinite observation sequences). A branch π is winning
if all runs in Γ^ω with observation sequence π are winning.
Given a deterministic automaton that recognizes the ω-regular objective
in a partial-observation game, with transition function δ,
where δ(τ) denotes the state of the automaton reached upon reading a history τ∈Γ^*,
we can construct an automaton over the observation alphabet that recognizes the winning branches,
using a subset construction that maintains, along a finite observation sequence ρ,
the set K(ρ) = {δ(τ) |τ∈ U_ρ} where U_ρ
is the set of histories having observation sequence ρ.
This set can be updated upon reading the next observation.
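In code, this update is a one-line set comprehension. The sketch below is illustrative only; delta is the automaton's transition function on (state, move) and obs(q, c) stands for the observation produced by the Mealy machine when move c is read in state q (both names are ours, not the paper's):

    def update_K(K, o, moves, delta, obs):
        """One step of the subset construction: from K = K(rho), compute
        K(rho o) by firing every move whose produced observation is o."""
        return {delta(q, c) for q in K for c in moves if obs(q, c) == o}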
In a more abstract view, the key ingredient for decidability is that the information
tree has bounded branching and is the unraveling of a finite-state machine,
enabling the automata-based approach to synthesis <cit.>.
Full-information protocols are more challenging, as
the information tree may have unbounded branching, like in our paradigmatic
example: the histories of length k over {0,1} are pairwise indistinguishable;
in fact, their set U_k is an information set. On the other hand, all histories
τ♯ for τ∈ U_k where ♯ is the communication signal,
are pairwise distinguishable, hence
U_k has at least 2^k children in the information tree, as illustrated in <ref>.
The unbounded branching degree of the information tree implies that
the synthesis problem for full-information protocols is equivalent
to solving a game of perfect information played on a tree that cannot
be obtained as the unraveling of a finite-state machine.
More generally, a natural expressiveness question is to compare
the set of information trees definable in a given game model.
We argue that indistinguishability relations are a convenient tool to evaluate and
compare the power of imperfect information in games.
For example, indistinguishability relations that induce an information tree
with unbounded branching cannot be expressed by observation-based
models à la Reif <cit.>. We generalize this result and show
that the number of observers in full-information protocols induces
a strict (infinite) hierarchy, in which the first level (corresponding to no observers)
consists of the observation-based indistinguishability relations à la Reif.
On the other hand, we show the decidability of the synthesis problem for full-information protocols
with arbitrary number of observers.
We present a reduction schema to finite-state games with perfect information
that preserves the existence of a winning strategy.
The reduction is parameterized by a homomorphism h defined on the set Γ^* of histories,
which will naturally induce a correspondence between the strategies
of the original game and of the reduced game.
We present a generic proof of correctness of the reduction, based on a small
number of elementary abstract properties of h, and on a characterization of the
equivalence between the two games, which is a bisimulation in our case.
Let us illustrate the generic approach in the special case of partial-observation games.
Consider the map h defined by h(τ) = ⟨δ(τ), {δ(τ') |τ' ∼τ}⟩,
and observe that the following properties hold:
(1) h has a finite range (since so does δ),
(2) h is a morphism, that is h(τ) = h(τ') implies
h(τ c) = h(τ' c),
(3) h is a refinement of δ, that is h(τ) = h(τ') implies
δ(τ) = δ(τ'), and
(4) h is rectangular, that is h(τ) = h(τ') implies
h([τ]_∼) = h([τ']_∼) (extending h in a pointwise fashion to
sets of histories).
Note that the first three properties are satisfied by h=δ and
that the fourth property (rectangularity) trivially holds in perfect-information games
where information sets are singletons.
As soon as we can effectively construct a map h with the above four properties,
which we call a rectangular morphism,
the reduction allows to get rid of imperfect information, in the sense that the constructed game H of perfect information
has the same (type of) objective as the original game G. This entails decidability
of synthesis for imperfect-information games with a class of objectives whenever synthesis for
perfect-information games with the same class of objectives is decidable.
The states of the game H are of the form h([τ]_∼) and we show that
h([τ]_∼) (in H) is bisimilar to [τ]_∼ (in the, possibly unbounded branching, information
tree of G), in a sense made precise in the paper.
In particular, for partial-observation games we recover the subset construction K(ρ) = δ([τ]_∼)
for an information set U_ρ = [τ]_∼
by noticing that
h([τ]_∼) = {(p,P) | p ∈ P} where P = δ([τ]_∼),
which is isomorphic to P itself, that is, to K(ρ).
We construct a rectangular morphism h for full-information protocols
(with arbitrary number of observers), establishing the decidability of
the synthesis question.
This approach separates the technical difficulty of effectively constructing h
for a specific application
and the generic proof
of correctness of the reduction schema itself, given a rectangular morphism.
The generic proof aims at a certain form of generality for all games
of imperfect information, beyond full-information protocols.
In particular, we study the properties of the reduction in an abstract
framework and show that it induces a bisimulation between the information
tree and the constructed game of perfect information. A refined statement
is that the bisimulation is functional (i.e., it is a p-morphism), which entails
strong structural properties such as transfer of strategies back and forth
between the two games, way beyond the mere preservation of existence of a winning
strategy.
As a concluding remark, this approach can be viewed as
identifying a sufficient condition, namely the effective construction of a rectangular
morphism and the decidability of games with perfect information, to ensure decidability of the synthesis problem.
We leave open the natural, and possibly less relevant, question whether the condition is necessary as well,
as we view the approach as a way to axiomatize the correctness of the reduction scheme.
The proof is a reduction of the game on the information tree,
thus with possibly unbounded branching, to a finite-state game.
We extract from the structure of the proof a generic approach to
establish the correctness of such a reduction by identifying
properties that are sufficient to preserve the answer to the
synthesis problem. While the properties may not be necessary to
solve the synthesis problem, we argue that they are a simple and
natural requirement for our approach to work, as we establish
a strong relationship between the original and the reduced games
in the form of a bisimulation.
§ DEFINITIONS
§.§ Basic notions
Given a function f: X → Y and a set Z ⊆ X,
we denote by f(Z)= {f(z) | z ∈ Z } the set of images of elements in Z.
We use finite automata as acceptors of finite words,
and Mealy automata as transducers.
They share a common underlying structure of the form (Q, Γ, q_ε, δ),
called a semi-automaton, consisting of a finite set Q of states,
a finite input alphabet Γ,
a designated initial state q_ε ∈ Q,
and a transition function δ: Q ×Γ→ Q.
We extend the transition function to a function δ: Q ×Γ^* → Q
defined, for every state q ∈ Q, by δ(q, ϵ) = q for the empty word ϵ,
and, recursively, δ(q, τ c) = δ(δ(q, τ), c) for all words
obtained by the concatenation of a word τ∈Γ^* and a letter c ∈Γ.
The synchronous product of two semi-automata (Q, Γ, q_ε, δ)
and (P, Γ, p_ε, δ') is the semi-automaton
(Q × P, Γ, (q_ε, p_ε), Δ) where Δ((q,p), c) =
(δ(q,c), δ'(p,c)) for all q ∈ Q, p ∈ P, and c ∈Γ.
A deterministic finite automaton (for short, DFA) is a
tuple (Q, Γ, q_ε, δ, F)
expanding a semi-automaton by a set F ⊆ Q of accepting states.
A finite input word τ∈Γ^* is accepted by the DFA
if δ(q_ε, τ) ∈ F.
A Mealy automaton is
a tuple (Q, Γ, Σ, q_ε, δ, λ)
where (Q, Γ, q_ε, δ) is a semi-automaton, Σ
is a finite output alphabet, and
λ: Q ×Γ→Σ is an output function.
We extend the output function to a function λ: Γ^+ →Σ
by setting λ(ϵ) = ϵ and λ(τ c) = λ(δ(q_ε, τ), c)
for all words τ∈Γ^* and letters c ∈Γ.
We say that a function on Γ^* is regular if there exists a Mealy automaton
that defines it.
Given an input word τ = c_1 c_2 … c_n ∈Γ^*,
let λ̂(τ) = λ(c_1) λ(c_1 c_2) …λ(c_1 c_2 … c_n)
be the output sequence consisting of the output of all prefixes of τ.
We extend λ̂ to infinite words π = c_1 c_2 …∈Γ^ω
by setting λ̂(c_1 c_2 …) = λ(c_1) λ(c_1 c_2) … as expected.
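The following short sketch (illustrative, with names of our own choosing) renders these definitions directly: a Mealy automaton with the extended transition function and the cumulative output λ̂ collecting the output of every nonempty prefix.

    from dataclasses import dataclass
    from typing import Callable, Hashable, Iterable

    @dataclass
    class Mealy:
        q0: Hashable                                       # initial state q_eps
        delta: Callable[[Hashable, Hashable], Hashable]    # transition function
        out: Callable[[Hashable, Hashable], Hashable]      # output function

        def run(self, word: Iterable) -> Hashable:
            """Extended transition: the state reached from q0 on the word."""
            q = self.q0
            for c in word:
                q = self.delta(q, c)
            return q

        def cumulative_output(self, word: Iterable) -> list:
            """hat-lambda: the outputs of all nonempty prefixes of the word."""
            q, outs = self.q0, []
            for c in word:
                outs.append(self.out(q, c))
                q = self.delta(q, c)
            return outs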
§.§ Games with imperfect information
We consider games played in infinitely many stages between a player and Nature.
Abstract repeated games.
In every stage, a one-shot base game is played as follows:
the player chooses an action a from a given action set A,
then Nature chooses a move c from a given move set Γ.
The choice of Nature is constrained by a given action map act: Γ→ A
to the subset {c ∈Γ| act(c) = a} of moves supported by the action a.
The game is well defined if the function act is surjective.
A play is an infinite sequence
of moves π = c_1 c_2 …∈Γ^ω.
We denote by π(ℓ) = c_1 c_2 … c_ℓ
the prefix of length ℓ of π, with π(0) = ϵ.
Prefixes of plays are called histories, and we denote by |τ|
the length of a history τ∈Γ^* and by last(τ)
the last move in τ (if |τ| ≥ 1).
Winning condition.
The objective of the player is specified by a winning condition, a
set W ⊆Γ^ω of plays declared to be winning.
Of special interest is the class of ω-regular languages that
extends regular languages to infinite words, and provides a robust
specification language to express commonly used specifications <cit.>.
It is convenient to specify winning conditions as (1) a logical specification
L ⊆ C^ω over an alphabet C of colors, which
is independent of the game and its alphabet of moves, and (2) a
regular coloring function λ: Γ^+ → C that induces
the winning condition W = {π∈Γ^ω|λ̂(π) ∈ L}.
In this setting, the condition L can be fixed and defines the type of game
while the function λ can be
specified by a Mealy machine that is part of the game instance (e.g., as given
in the input of the synthesis algorithm).
For example parity games, which are a canonical way of representing games
with ω-regular winning conditions <cit.>, correspond to C = and
L = {n_1 n_2 …∈^ω|lim inf_i →∞ n_i is even}.
Imperfect information.
The choice of action of the player is based on their available information,
which is modeled by a partition U of the set of histories;
the parts of U are called information sets (of the player).
The intended meaning is that if the actual history belongs to an
information set u ∈ U, then the player considers every history in u
possible.
The particular case where all information sets in the partition
are singletons characterises the setting of perfect information.
Our model is synchronous, which means, intuitively,
that the player always knows how many stages have been played.
This amounts to asserting that all histories in an information
set have the same length;
in particular the empty history forms a singleton information set.
Further, we assume that the player has perfect recall —
he never forgets what he knew previously
and which actions he took.
Formally, if an information set contains nontrivial histories τ c and τ' c',
then the predecessor histories τ and τ' belong to the same information set
and the moves c and c' are supported by the same action.
An alternative representation of the partition U is to give the equivalence
relation ∼ ⊆ Γ^* × Γ^* such that τ∼τ'
if τ, τ' ∈ u for some u ∈ U, which is called an indistinguishability
relation <cit.>. It is a prefix-closed synchronous equivalence relation over histories
that specifies the pairs of histories that the player cannot distinguish.
Formally, an indistinguishability relation ∼ ⊆ Γ^* × Γ^* is
an equivalence relation that satisfies the following conditions,
for all τ, τ' ∈Γ^* and c, c' ∈Γ:
* if τ∼τ', then |τ| = |τ'| (indistinguishable histories have the same length),
* if τ c ∼τ' c', then τ∼τ' (the relation is prefix-closed),
* if τ c ∼τ' c', then act(c) = act(c') (the action is visible).
For a history τ∈Γ^*, we denote by
[τ]_∼ = {τ' ∈Γ^* |τ' ∼τ}
the information set containing τ.
Intuitively, by the first condition above
the player knows how many rounds have been played,
by prefix-closedness he has perfect recall,
and by the visibility of actions he can distinguish his own actions.
Assumption.
In this paper, we require the function λ defining the color of a history
to be information-consistent, that is constant over every information set:
λ(τ) = λ(τ') for all indistinguishable histories τ∼τ'.
We say that the induced winning condition is visible.
Strategies.
A decision function is a map f: Γ^* → A from histories to actions.
We say that a play c_1 c_2 … follows f if
act(c_t) = f(c_1 … c_t-1), for every stage t > 0 (and similarly for a history).
We denote by Out(f) the set of all plays that follow f.
Further, we say that an
information set u ∈ U is reachable
if there exists a history in u that follows f.
A strategy is a decision function that is information-consistent.
Given a winning condition W ⊆Γ^ω, a strategy s
is winning if all plays that follow s belong to W,
that is, Out(s) ⊆ W. When the winning condition is induced
by a logical specification L ⊆ C^ω (and a regular coloring function
λ: Γ^+ → C that is clear from the context),
we also say that s is winning for L.
Game description.
Given an action set A, a move set Γ, and an action map act: Γ→ A,
a game with imperfect information consists of a tuple
G = ⟨A, Γ, act, ∼, λ⟩ and a winning condition L ⊆ C^ω,
where ∼ is an indistinguishability
relation and λ is a coloring function.
In the special case of perfect-information games, characterised by
the indistinguishability relation ∼ being the identity (or, equivalently,
by the information sets [τ]_∼ = {τ} being singletons
for all τ∈Γ^*), we omit the relation ∼ in the tuple.
Synthesis problem.
For a fixed winning condition L ⊆ C^ω,
the synthesis problem asks, given a game G with imperfect information,
whether there exists a winning strategy for L in G.
§ FULL-INFORMATION PROTOCOLS
In the standard model of partial-observation games <cit.>, the indistinguishability
relation ∼ is induced by an observation function β: Γ^* →Σ
(where Σ is a finite set of observations),
such that τ∼τ' if β̂(τ) = β̂(τ').
Intuitively, the player receives at every nonempty history τ c
the observation symbol β(τ c), and by the assumption of perfect recall,
remembers the sequence β̂(τ) of previous observations.
An equivalent definition of ∼ is given by the following characterization:
τ c ∼τ'c' iff
τ∼τ' and β(τ c) = β(τ' c').
It follows that, given an information set u, there are at most
|Σ| information sets u' such that τ c ∈ u'
for some τ∈ u and c ∈Γ, that is the information
tree has bounded branching.
In a full-information protocol, the main player (called player 0)
is accompanied by n players
who receive an observation symbol at every round, given
by an observation function β_i: Γ^* →Σ for each player
i = 0, …, n. However, only player 0 is able to make strategic choices;
the other players 1, …, n are observers: they do not make choices, but may
communicate with other observers or with the main player.
Let I = {0,1,…,n} be the set of all players.
Communication is specified by relations R_σ⊆ I × I
indexed by σ∈Σ: when player i receives observation σ,
he also receives the entire view of all players j ∈ R_σ(i) = {j | (i,j) ∈ R_σ},
which consists of all observations of players in R_σ(i) as well
as (recursively) the view of players in R_σ(i).
Intuitively, a link (i,j) ∈ R_σ specifies a one-way communication
with receiver i and sender j upon observation σ. We refer to such links
as direct links. If at the end of a history there is a direct link from player
i to player j and a direct link from player j to player k, then a communication
is established from player i to player k, even if the protocol does not
specify a direct link i to k. We refer to such links as indirect links.
We represent the information available to the player and observers along a history
τ = c_1 c_2 … c_ℓ by a graph (τ) = (V,E),
called the view graph, where:
* V = I ×{0,1,…, ℓ} is the set of nodes, and a node (i,t) ∈ V
represents the viewpoint of player i after t rounds;
* E ⊆ V × V is the set of edges, where an edge (i,t),(j,u)
intuitively means that after t rounds, player i has access to the view of player j
at round u; the set E contains the edges (i,t),(i,t-1)
for all i ∈ I and 1 < t ≤ℓ, which correspond to looking into the past,
and the edges (i,t),(j,t)
for all i,j ∈ I and 1 ≤ t ≤ℓ
such that j ∈ R_σ(i) where σ = β_i(c_1 c_2 … c_t),
which correspond to communicating the view of player j to player i.
Two histories τ, τ' ∈Γ^*
are indistinguishable for player i,
denoted τ∼_i τ', if |τ| = |τ'| and
β_j(τ(t)) = β_j(τ'(t)) for
all nodes (j,t) reachable from (i,|τ|) in the view graph View(τ).
Note that the definition implies that if τ∼_i τ', then
the nodes reachable from (i,|τ|) in View(τ) and
in View(τ') coincide.
We say that the histories τ, τ' are indistinguishable
for a coalition J ⊆ I, denoted τ∼_J τ',
if they are indistinguishable for all players of the coalition,
that is, τ∼_i τ' for all i ∈ J.
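The definition can be made concrete with a small sketch. It assumes, as in the pre-processing of Section <ref>, that a history is directly its sequence of observation profiles (hist[t][i] is the observation of player i in round t+1) and that R maps an observation σ to a set of direct links (receiver, sender); all identifiers are ours, for illustration only.

    from collections import deque

    def reachable_views(hist, i, R):
        """Nodes (j, t) of the view graph reachable from (i, len(hist))."""
        ell = len(hist)
        seen, todo = {(i, ell)}, deque([(i, ell)])
        while todo:
            j, t = todo.popleft()
            if t == 0:
                continue
            nbrs = [(j, t - 1)]                 # looking into the past
            sigma = hist[t - 1][j]              # observation of j in round t
            nbrs += [(k, t) for (r, k) in R.get(sigma, ()) if r == j]
            for v in nbrs:
                if v not in seen:
                    seen.add(v)
                    todo.append(v)
        return seen

    def indistinguishable(h1, h2, i, R):
        """h1 ~_i h2: equal length, equal observations on i's reachable views."""
        return len(h1) == len(h2) and all(
            h1[t - 1][j] == h2[t - 1][j]
            for (j, t) in reachable_views(h1, i, R) if t > 0)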
<ref> shows a view graph for a FIP with 4
players (the main player and three observers). The figure shows the edges
corresponding to communications, but we omit the edges corresponding
to looking into the past. Given the view graph
of τ = c_1 c_2 c_3 c_4 c_5 c_6 c_7 … in <ref>,
the view of player 0 after c_6 is illustrated in <ref>,
and after c_7 in <ref>.
A full-information protocol F = ⟨I, (M_i)_i∈ I, (R_σ)_σ∈Σ⟩
with n observers over move alphabet Γ and observation alphabet Σ
consists of a set I = {0,1,…,n} of players, Mealy machines M_i
defining the observation functions β_i: Γ^* →Σ of each player i ∈ I,
and the relations R_σ⊆ I × I defining the communication links
between the players on observations σ∈Σ.
By extension, a full-information protocol is a game
⟨A, Γ, act, ∼, λ⟩ where the indistinguishability relation ∼ is
the relation ∼_0 defined by F.
Moreover, we require that two moves with different actions have different observations:
if act(c) ≠ act(c'), then β_0(τ c) ≠β_0(τ' c') for
all histories τ, τ' ∈Γ^* and moves c,c' ∈Γ,
ensuring that the action is visible to the player.
It is then easy to see that ∼ is indeed an indistinguishability relation.
Note that FIP games with one player and no observer (I= {0}) correspond
to the special case of partial-observation games <cit.> where
the indistinguishability relation is represented by a single (regular) observation
function.
§ GRAPH GAMES AND MORPHISMS
The key tool to strategy synthesis for infinite games is the
automata-theoretic procedure founded on the works of Büchi and Landweber <cit.>,
and of Rabin <cit.>.
Setting out from an automaton that recognises the set of strategies in a game and a second one
that recognises the winning condition,
the procedure constructs a new automaton that recognises the set of winning strategies.
The emptiness test for the constructed automaton is decidable,
answering the question of whether winning strategies exist.
Moreover, by Rabin's Basis Theorem <cit.>,
every nonempty automaton accepts a regular tree,
which corresponds to the unfolding of a finite graph –
this allows to effectively construct a winning strategy defined by a Mealy machine.
An essential feature of the automata-theoretic approach is that strategies
are presented as trees with bounded, finite branching.
In our setting, however,
the information trees which support strategies might have
unbounded degree. Indeed, it was shown in <cit.>,
that a regular indistinguishability relation defines an information tree with finite branching
if, and only if, there exists an equivalent observation function.
Since FIP protocols are more expressive than observation functions, as we show
in Section <ref>,
this means that we cannot rely on tree automata to recognise the set of strategies of a FIP game in general.
To overcome this obstacle, we propose a construction that transforms
any FIP game into a game with perfect information, by preserving
the existence of winning strategies in the following sense:
(1) whenever a winning strategy exists in the original game, there exists one in the transformed game; (2) given a regular winning strategy for the transformed game,
we can effectively construct a winning strategy for the original game.
To prepare the ground, we first discuss a general transformation of games with imperfect
information into games of perfect information that preserves
the existence of winning strategies, and we present a sufficient
condition for the transformed game of perfect information to
be regular and thus solvable.
In Section <ref>, we describe a particular transformation
for solving the FIP synthesis problem.
§.§ Game graphs
It will be convenient to consider repeated games played on a graph,
which is a model equivalent to abstract repeated games <cit.>.
We briefly recall the definition of game graphs as repeated games.
Let A be a set of actions and C be a set of colors.
A (game) graph is a structure G = (V, v_ε, (E_a)_a ∈ A, λ)
on a set V of nodes, called the domain, with a designated initial node v_ε ∈ V,
a binary edge relation E_a ⊆ V × V for every action a ∈ A,
and a node-labeling function λ: V → C. We require that for
every node v ∈ V and action a ∈ A, the set E_a(v) = {w | (v,w) ∈ E_a}
of successors of v by a is nonempty.
Intuitively, a game on G is played in rounds as follows.
Each round starts in a node; the first round starts in the initial node v_ε.
In each round, given the node v in which the round starts, the player chooses
an action a ∈ A, then Nature chooses a node w such that (v,w) ∈ E_a.
The next round starts in the node w.
As a repeated game, the game on G is the perfect-information game
G' = ⟨A, Γ, act, λ'⟩
with the set of moves Γ = A × V,
the action map act defined by act(a,v) = a for all (a,v) ∈Γ,
and the coloring λ' (given by a Mealy machine) that maps a history τ = (a_1,v_1) … (a_n,v_n)
to the color λ(v_n) if τ forms a path
from v_ε in the graph G, and to a fresh color ⊥ otherwise.
Given a winning condition L ⊆ C^ω for the game on G,
a play is declared winning in G' if it is mapped by λ' to
a sequence in L or to a sequence containing ⊥, which
forces Nature to respect the edge relations of the game graph.
The domain of G may be finite or infinite.
However, the graphs we consider have an at most countable set of nodes; they are finitely branching,
that is, every node has finitely many successors;
and the range of the coloring function λ is finite.
In the sequel we construct game graphs as quotients of Mealy automata, defined
as follows.
Let M = (Q, Γ, Σ, q_ε, δ, λ) be a Mealy automaton.
Given an action map act: Γ→ A and an equivalence relation
R ⊆ Q × Q that respects λ (i.e., if (q,q') ∈ R,
then λ(q) = λ(q')), the quotient of M by R
is the graph M/R = (V, v_ε, (E_a)_a ∈ A, λ)
where V is the set of equivalence classes of R, the initial node
is v_ε = [q_ε]_R, and for every a ∈ A the edge relation E_a connects two
equivalence classes (u,u') whenever there exists a state q ∈ u
with a successor δ(q,c) ∈ u' for some c ∈Γ such that act(c) = a.
By an abuse of notation, we define λ([q]_R) = λ(q) for all q ∈ Q.
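A minimal sketch of this construction, with illustrative names of our own (block(q) returns the equivalence class of a state q in some canonical form):

    from collections import defaultdict

    def quotient_edges(Q, moves, delta, act, block):
        """Edge relations E_a of the quotient graph: the block of q is
        connected to the block of delta(q, c) under the action act(c)."""
        E = defaultdict(set)
        for q in Q:
            for c in moves:
                E[act(c)].add((block(q), block(delta(q, c))))
        return E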
§.§ Information tree
Our goal is to reduce large games, represented on an infinite domain, to smaller ones,
on a finite domain, for which the synthesis problem is solvable, and then transfer
winning strategies back to the original game.
The reduction should be conservative in the sense that, whenever a winning strategy
exists in the large game, there is one in the small game.
In our representation, every game with a non-trivial indistinguishability relation
is infinite.
To be effective, a reduction should hence yield a game with perfect information.
One way of transforming larger structures into smaller ones is by
collapsing elements that we do not wish to distinguish.
Our first step is to represent a game with imperfect information
as a game of perfect information played on the information tree,
which is an infinite graph whose nodes are information sets.
The information tree for a game G = ⟨A, Γ, act, ∼, λ⟩
is the transition graph
T(G) on the domain U = { [τ]_∼|τ∈Γ^* },
with initial node u_ε = [ϵ]_∼,
edge sets
E_a^T := {( [τ]_∼, [τ c]_∼) | act(c) = a },
for each action a ∈ A,
and with coloring
λ^T( [τ]_∼ ) = λ( τ ) for all τ∈Γ^*
(which is well defined since λ is information-consistent).
Note that T(G) is a tree due to the perfect-recall property of ∼.
Therefore, every node u ∈ U identifies a unique path from u_ε
to u, and we view strategies in the game played on T(G)
as functions U → A
rather than U^+→ A.
The information tree can also be obtained as the quotient by ∼ of
the (infinite-state) automaton with state space Q = Γ^*
and transition function defined by δ(τ, c) = τ c
for all τ∈Γ^* and c ∈Γ.
Although structurally different, a game with imperfect information and the perfect-information
game played on its information tree are the same game, in the following sense.
For every game G with indistinguishability relation ∼,
there is a bijection that maps every strategy s' in the information tree T(G)
to an (information-consistent) strategy s of the original game G,
such that s(τ) = s'([τ]_∼) for all histories τ∈Γ^*.
Moreover, the outcomes of corresponding strategies agree on the
colors: λ̂(Out(s)) = λ̂^T(Out(s')).
By the correspondence between s and s',
for each play c_1 c_2 …∈ Out(s),
the sequence a_1 [c_1]_∼ a_2 [c_1 c_2]_∼…,
where a_i = act(c_i) for all i ≥ 1, is a play in the game on T(G)
that follows s' and forms a path from [ϵ]_∼; thus
the colors λ^T([c_1 … c_i]_∼) = λ(c_1 … c_i)
agree in every stage i ≥ 1.
To show, conversely, that λ̂^T(Out(s')) ⊆λ̂(Out(s)),
consider a play π = a_1 v_1 a_2 v_2 … in Out(s'),
for every i ≥ 0 let τ_i ∈ v_i be a history of the class v_i,
and note that τ_i follows s in G and λ(τ_i) = λ^T(π(i)).
The set of all such histories τ_i forms an infinite subtree of Γ^*, which has degree
bounded by |Γ|, and thus contains an infinite path θ
by König's lemma <cit.>.
It follows that λ̂^T(π) = λ̂(θ) and
θ∈ Out(s), which establishes the desired inclusion.
§.§ Bisimulation
To construct transformations that allow taking strategies back and forth between
games systematically, we use the classic notion of bisimulation <cit.>.
A bisimulation between two graphs G and H over the same vocabulary
is a relation Z ⊆ V^G × V^H such that
every related pair of nodes (u, v) ∈ Z agrees on the color,
λ^G(u) = λ^H(v), and
(Zig)
for each action a ∈ A and every edge (u, u') ∈ E_a^G,
there exists an edge (v, v') ∈ E_a^H such that (u', v') ∈ Z, and
(Zag)
for every action a ∈ A and every edge (v, v') ∈ E_a^H,
there exists an edge (u, u') ∈ E_a^G such that (u', v') ∈ Z.
One can easily verify that the union of two bisimulations is again a
bisimulation, and thus the coarsest bisimulation between two transition graphs
can be obtained by taking the union of all bisimulations between them:
two nodes u ∈ V^G and v ∈ V^H are bisimilar, denoted by u ≈ v,
if there exists a bisimulation that contains (u, v).
By extension, we say that two graphs G and H are bisimilar
if their initial nodes are bisimilar, v_ε^G ≈ v_ε^H.
As a basic notion of dynamic equivalence,
bisimulation has been studied widely and in different variants <cit.>.
For games with perfect information on graphs, it is folklore that winning strategies
are preserved across bisimilar representations.
Given two bisimilar game graphs G and H and a logical specification L ⊆ C^ω,
the following equivalence holds:
there exists a winning strategy for L in G if and only if
there exists a winning strategy for L in H.
Moreover, if there exists a functional bisimulation k: V^G → V^H
relating the initial nodes of the two graphs, k(v_ε^G) = v_ε^H,
then there exists a pruning of the unfolding of G that is isomorphic
to the unfolding of H. A functional bisimulation is called a p-morphism <cit.>.
§.§ Rectangular morphisms
Throughout this section, we fix a game G = ⟨A, Γ, act, ∼, λ⟩ and a winning condition L ⊆ C^ω.
Intuitively, we aim at constructing a finite-state abstraction of the
information tree, as a graph bisimilar to the information tree.
The greatest difficulty is that the information tree may be of unbounded branching,
while a finite-state abstraction must have bounded branching by definition.
The key is to be able to describe the navigation through the information tree in the
universe of histories: given an information set u identified by a history τ∈ u (that is, u = [τ]_∼), the successors of u can be identified
by the histories τ'c obtained by taking a companion τ' ∼τ and appending a move c ∈Γ.
Our finite-state abstraction is induced by a finite-valued function h: Γ^* → P such that we can compute, given the value of h(τ), the following elements, useful
to navigate through the information tree:
* the set of values {h(τ') |τ' ∼τ},
* the set of values {h(τ c) | c ∈Γ}, and
* the value of λ(τ).
Note that these values should be computable without knowing the value of τ,
so that we can faithfully navigate in the universe h(Γ^*) ⊆ P (in the sequel we assume w.l.o.g. that P = h(Γ^*)).
This is possible when the function h is a rectangular morphism
for G, that is, when h satisfies the following properties,
for all τ, τ' ∈Γ^* and c ∈Γ:
Rectangularity if h(τ) = h(τ'), then h([τ]_∼) = h([τ']_∼),
Morphism if h(τ) = h(τ'), then h(τ c) = h(τ' c),
Refinement if h(τ) = h(τ'),
then λ(τ) = λ(τ').
Note that a finite-valued morphism on Γ^* is a regular function.
A variant of the refinement property is to require that h is a refinement of
the automaton defining λ, that is,
if h(τ) = h(τ'), then δ(q_ε, τ) = δ(q_ε, τ'), where q_ε is the initial state of that automaton.
This variant implies the original refinement property;
if h is a morphism and the automaton defining λ is minimal,
then the two properties (refinement and the variant) are equivalent.
In the sequel, we extend λ to the set P and define λ(p) = λ(τ) for any τ such that h(τ) = p, which is well defined by the refinement property of h.
We show that the solution of the synthesis problem for games with imperfect information boils down
to the construction of a function h satisfying the
four conditions of being (1) rectangular, (2) a morphism, (3) a refinement, and (4) finite-state. It is easy to define functions satisfying any three of the four conditions, namely:
* all but rectangular (2,3,4): h_1(τ) = δ(q_ε, τ);
* all but a morphism (1,3,4): h_2(τ) = ⟨δ(q_ε, τ), {δ(q_ε, τ') |τ' ∼τ}⟩;
* all but a refinement (1,2,4): h_3 is constant;
* all but finite-state (1,2,3): h_4(τ) = τ.
The proof that h_i satisfies conditions {1,2,3,4} ∖{i} is
straightforward and left to the reader. We note that the rectangularity of h_2 is a corollary of the fact that, for every function f on Γ^*,
the function h defined by h(τ) = ⟨f(τ), f([τ]_∼)⟩ for all τ∈Γ^* is rectangular (which is also straightforward
to prove).
Partial-observation games (i.e., FIP games with no observer) admit a rectangular
morphism of the form h_2, where δ is the transition function of the
synchronous product of the Mealy automaton defining the coloring λ and the Mealy automaton defining the observation function β_0.
The function h_2 is then morphic <cit.>: given q = δ(q_ε, τ) and u = {δ(q_ε, τ') |τ' ∼τ}, so that h_2(τ) = (q,u),
and given a move c, we can define q' = δ(q, c) and u' = {δ(q̃, c') |q̃∈ u, c' ∈Γ, β_0(q̃, c') = β_0(q, c)},
where, abusing notation, β_0(q, c) denotes the observation output in state q on move c,
and show that h_2(τ c) = (q',u').
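The morphic update reads directly as code; the sketch below refines the knowledge-set update given earlier by carrying the exact product-automaton state alongside the knowledge set (delta and obs are again illustrative names of ours for the product transition and observation output):

    def h2_step(q, u, c, moves, delta, obs):
        """Update h_2(tau) = (q, u) to h_2(tau c) = (q', u') on a move c."""
        q2 = delta(q, c)
        u2 = {delta(qt, c2) for qt in u for c2 in moves
              if obs(qt, c2) == obs(q, c)}
        return q2, u2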
In the rest of this section, we fix a rectangular morphism h for G.
The role of rectangularity appears in the two crucial lemmas below, which lead
to the construction of a finite-state abstraction of the information tree.
The relation R^h = {( h(τ), h(τ') ) |τ∼τ' }
is an equivalence.
It is immediate that R^h is reflexive (as h is surjective) and symmetric.
To show that R^h is transitive, consider
τ_1 ∼τ_2 and τ_2' ∼τ_3
such that h(τ_2) = h(τ_2'),
and show that there exists τ_1' ∼τ_3' such that
h(τ'_1) = h(τ_1) and h(τ'_3) = h(τ_3).
By rectangularity, since h(τ_2) = h(τ_2') we have
h([τ_1]_∼) = h([τ_3]_∼) (call that set Y)
and since in particular h(τ_1) ∈ Y and h(τ_3) ∈ Y,
there exists τ_3' ∈ [τ_1]_∼ such that h(τ_3') = h(τ_3).
We can take τ_1' = τ_1 and the result follows.
The tight link between the equivalence classes of ∼ and of R^h is described in Lemma <ref>.
h([τ]_∼) = [h(τ)]_R^h for all τ∈Γ^*.
That h([τ]_∼) ⊆ [h(τ)]_R^h follows by definition of R^h.
For the converse inclusion, let p ∈ [h(τ)]_R^h and show that
there exists τ' ∈ [τ]_∼ with h(τ') = p. By definition
of R^h there exist τ_1 ∼τ_1' such that h(τ_1) = h(τ) and h(τ_1') = p. It follows by rectangularity that h([τ_1]_∼) = h([τ]_∼), and thus there exists τ' ∈ [τ]_∼ with h(τ') = h(τ_1') = p, as required.
Consider the semi-automaton H = ⟨P, Γ, p_ε, δ, λ⟩ where p_ε = h(ϵ) and δ(p,c) = p' if there exists τ∈Γ^* such that h(τ) = p and h(τ c) = p', which is well defined by the morphism
property of h.
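Because δ(p, c) may be computed from any representative history (h being a morphism), the semi-automaton H can be built by a standard forward exploration. A minimal sketch, with names of our choosing, assuming h is given as a finite-range function on histories represented as tuples:

    from collections import deque

    def build_H(h, moves):
        """Explore the reachable values of h, keeping one representative
        history per value; the morphism property makes this well defined."""
        p0 = h(())
        reps, trans = {p0: ()}, {}
        todo = deque([p0])
        while todo:
            p = todo.popleft()
            rep = reps[p]
            for c in moves:
                p2 = h(rep + (c,))
                trans[(p, c)] = p2
                if p2 not in reps:
                    reps[p2] = rep + (c,)
                    todo.append(p2)
        return p0, trans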
The following result is an immediate consequence of Lemma <ref>.
The information tree T(G) and the quotient of H by R^h are bisimilar.
Specifically, the function k induced by h that maps each information set u = [τ]_∼ to the set k(u) = [h(τ)]_R^h is a p-morphism.
Using Theorem <ref>, the solution of the synthesis problem
for a winning condition L boils down to showing the existence of a rectangular morphism h,
constructing the automaton H, and solving the perfect-information
game played on H for L.
Note that this reduction holds for arbitrary winning conditions L,
but it is of practical interest only if the synthesis problem for
perfect-information games is decidable, which is the case for ω-regular
winning conditions <cit.>.
Although Theorem <ref> does not show that having
a rectangular morphism is required to solve the synthesis problem
for games with imperfect information, this approach suffices
to show the decidability of partial-observation games and FIP games.
Moreover, as discussed above, the requirement of rectangularity is very natural (if not necessary),
as we want to “simulate” the navigation through the information tree.
§ SOLVING FIP GAMES
We solve FIP games by constructing a rectangular morphism
and reducing to a game of perfect information using Theorem <ref>.
§.§ Pre-processing
To simplify the presentation, we show that every FIP can be transformed
into an equivalent one where the observation functions are trivial.
Intuitively, the move alphabet in the transformed FIP is Γ' = Σ^n+1, where Σ is the observation alphabet of the original FIP
and n is the number of observers. As the moves can now be any profile
of observations of the original FIP, we use the winning condition
to ensure that if the sequence of observations in the transformed
FIP is not possible in the original FIP, then the player wins.
We consider the product of all Mealy machines for the
observation functions in the original FIP, which defines a function β': Γ^* →Σ^n+1 (where Γ is the set of moves
in the original FIP) such that β'(τ) = ⟨β_0(τ),
β_1(τ), …, β_n(τ)⟩ for all τ∈Γ^*, and we consider the language L = {β̂'(τ) |τ∈Γ^* }⊆ (Γ')^*,
which is a regular language (a DFA recognising L can be obtained by
a standard subset construction on the Mealy machine
defining β'). The winning condition in the transformed FIP
accepts all plays in (Γ')^ω that have a winning pre-image by β̂' in
the original FIP, as well as all plays in (Γ')^ω that have a (finite)
prefix outside L.
The transformed FIP is equivalent to the original one in the sense that
there exists a winning strategy in the transformed FIP if and only if there exists a
winning strategy in the original FIP.
From now on, we assume without loss of generality
that the move alphabet is Γ = Σ^n+1 and that the observation
function of player i (i ∈ I) is defined by β_i(τ c) = c[i],
the component of c corresponding to player i,
for all τ∈Γ^* and c ∈Γ.
For J ⊆ I, we denote by c[J] = (c[j])_j∈ J the observations
of the coalition J.
We present an alternative characterization
of the indistinguishability relations ∼_i, without resorting to
view graphs.
Given a move c ∈Γ and a player i ∈ I, we define the set comm_i(c) of players
communicating (directly, or via other players) with player i on move c as follows.
The set {i}× R_c[i](i) contains the (direct) communication links in which player i is a receiver.
Let T = ⋃_i ∈ I{i}× R_c[i](i); the reflexive-transitive
closure T^* then contains all (direct or indirect) communication links between the players,
and we define comm_i(c) = T^*(i).
For J ⊆ I, let comm_J(c) = ⋃_i ∈ J comm_i(c).
Note that J ⊆ comm_J(c) for all coalitions J and moves c ∈Γ (coalitions always communicate with themselves),
and that for the coalition K = comm_J(c) there is no communication
with other players: comm_K(c) = K. In fact, comm_K(c) = comm_J(c) for all J ⊆ K ⊆ comm_J(c).
Finally, note that comm_J ∪ K(c) = comm_J(c) ∪ comm_K(c), and thus comm is monotone with respect to coalitions:
if J ⊆ K, then comm_J(c) ⊆ comm_K(c).
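The closure is a small fixpoint computation; as a sketch (comm and the link map R are illustrative names of ours, with moves being observation profiles as above):

    def comm(J, c, R):
        """Players communicating, directly or indirectly, with coalition J on c."""
        K = set(J)
        changed = True
        while changed:
            changed = False
            for i in list(K):
                for (r, j) in R.get(c[i], ()):   # direct links on i's observation
                    if r == i and j not in K:
                        K.add(j)
                        changed = True
        return K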
For all coalitions J ⊆ I, for all histories τ∈Γ^*
and moves c ∈Γ, the following properties hold:
* for all K ⊆ I such that J ⊆ K ⊆ comm_J(c), we have [τ c]_∼_K = [τ c]_∼_J;
* for K = comm_J(c), we have [τ c]_∼_K = {τ'd |τ' ∼_K τ and d[K] = c[K] }.
The characterization in Lemma <ref> is in fact an equivalent definition
of the indistinguishability relations ∼_J, which may
be defined inductively by [τ c]_∼_J = [τ c]_∼_K =
{τ'd |τ' ∈ [τ]_∼_K and d[K] = c[K] } where K = comm_J(c).
Note that for the grand coalition, the relation ∼_I is the identity,
that is, τ∼_I τ' is equivalent to τ = τ'.
§.§ A rectangular morphism for FIP games
We fix a FIP game G = ⟨A, Σ^n+1, ∼, λ⟩, where ∼ is defined by Lemma <ref>, and λ is defined by a given Mealy machine M = ⟨Q, Γ, C, q_ε, δ_M, λ_M⟩.
We define a function h for our game and then show that it is a rectangular morphism for G.
The configuration h(τ) at a history τ is a vector indexed by all coalitions J containing the main player, 0 ∈ J. The component h(τ)[J] corresponding to a coalition J ≠ I is a knowledge set for J, which consists of a set of configuration components corresponding to all coalitions K greater than J, where the configurations are calculated at histories τ' ∼_J τ indistinguishable from τ for coalition J. For J = I the grand coalition, the configuration h(τ)[I] stores the state reached in M upon reading τ.
It will be convenient to define h(τ)[I] as a set (a singleton) for uniform treatment as a knowledge set.
For all τ ∈ Γ^*, define:

h(τ)[I] = { δ_M(q_ε, τ) },

and for all J ⊊ I, define h(τ)[J] = { (h(τ')[K])_K ∈ ↑J | τ' ∼_J τ }, where ↑J = { K ⊆ I | J ⊊ K }, which defines the function h: Γ^* → P with P = ∏_{0}⊆ J ⊆ I Ψ_J, where Ψ_I = 2^Q and, inductively, Ψ_J = 2^(∏_K ∈ ↑J Ψ_K) for all J ⊆ I.
It follows immediately from this definition that the component h(τ)[J] corresponding to a coalition J is information consistent for ∼_J.
For all histories τ,τ'∈Γ^* and all coalitions {0}⊆ J⊆ I,
if τ∼_J τ', then h(τ)[J] = h(τ')[J].
We show that the function h defined above is a rectangular morphism for G. That h is a refinement is immediate since h(τ)[I] = { δ_M(q_ε, τ) }, and that h is rectangular is relatively straightforward. The proof that h is a morphism is much more technical.
The function h is rectangular.
To show that h is rectangular, let h(τ) = h(ξ) and τ' ∼ τ. We construct ξ' ∼ ξ such that h(ξ') = h(τ').
Since τ' ∼ τ (and ∼ is ∼_0), the tuple (h(τ')[K])_K ∈ ↑{0} belongs to h(τ)[{0}], and since h(τ) = h(ξ), there exists ξ' ∼ ξ such that h(ξ')[K] = h(τ')[K] for all K ∈ ↑{0}. By Lemma <ref> (with J = {0}), we also have h(ξ')[{0}] = h(τ')[{0}], and thus h(ξ') = h(τ'), which concludes the proof.
To show that h is a morphism, we construct in the rest of this section an update function Δ: P × Γ → P and show that Δ(h(τ), c) = h(τ c) for all τ ∈ Γ^* and c ∈ Γ.
To define the update function Δ, we need an auxiliary operator to deal with the effect of communication on the knowledge sets. When a coalition J synchronizes with a set S of observers, the knowledge of the coalition J ∪ S is transferred to the coalition J. The transfer is not a simple copy, as the knowledge sets of different coalitions are not of the same type. In particular, the knowledge of J about a (larger) coalition K is transferred from the knowledge of J ∪ S about the coalition K ∪ S.
We present a lifting operator lift^S_J that transforms the knowledge set of J ∪ S into a knowledge set for J. The definition is inductive, assuming that the operator is defined for all coalitions larger than J.
We define the function lift^S_J: Ψ_{J∪S} → Ψ_J, for all ψ ∈ Ψ_{J∪S}, as follows:
* if J = J ∪ S, then lift^S_J is the identity: lift^S_J(ψ) = ψ;
* otherwise (i.e., J ⊊ J ∪ S), we proceed recursively: lift^S_J(ψ) = { lift^S_J(φ, ψ) | φ ∈ ψ }, where lift^S_J(φ, ψ) is a tuple of knowledge sets, one for each coalition K ∈ ↑J larger than J:

lift^S_J(φ, ψ)[K] = lift^S_K(ψ)   if K ∪ S = J ∪ S,
lift^S_J(φ, ψ)[K] = lift^S_K(φ[K ∪ S])   otherwise (i.e., K ∪ S ⊋ J ∪ S).

Note that φ ∈ ψ ∈ Ψ_{J∪S} and therefore the knowledge set φ[K ∪ S] is well defined only for K ∪ S ⊋ J ∪ S; the knowledge set for K ∪ S = J ∪ S is given by ψ itself.
We illustrate this definition with an example of configuration in <ref>. With 3 players I = {0, 1, 2}, there are 4 coalitions containing player 0: the singleton {0}, the grand coalition {0,1,2}, and the two coalitions {0,1} and {0,2}. The coalition {0,1} (and therefore also the grand coalition {0,1,2}) knows the current state, namely q_3. However, the coalition {0,2} knows only that the current state is either q_2 or q_3, and player 0 sees three possibilities: the current state is q_1 and coalition {0,2} knows it, or the current state is q_2 and neither coalition {0,1} nor coalition {0,2} knows it, or the current state is q_3 and coalition {0,1} knows it.
Note that if it is a possibility for player 0 that a coalition J ⊋ {0} sees k possibilities (for example k = 2 with J = {0,1} in the left branch in <ref>), then those k possibilities should appear in the configuration for player 0 (as the left and middle branch in our example).
The lifting of this configuration after player 0 communicates with player 1 is shown in <ref>. Intuitively, the effect of the lifting can be understood as replacing every coalition J containing 0 by J ∪ {1}. For example, what player 0 knows about coalition {0,2} after the communication is what coalition {0,1} = {0} ∪ {1} knows about the grand coalition {0,1,2} = {0,2} ∪ {1}. It turns out in this case that all coalitions know the current state is q_3.
<ref> shows the lifting if player 0 communicates with player 2 instead.
It follows immediately from the definition that lift^S_J = lift^{J∪S}_J, for all coalitions J, S. It is then easy to show that, equivalently, if J ∪ S = J ∪ T, then lift^S_J = lift^T_J. We use this property in the latter form.
The lift function is compositional: lifting for a coalition J that synchronizes with S ∪ T can be obtained by first lifting for the coalition J ∪ S synchronizing with T, and then lifting for the coalition J synchronizing with S.

lift^S_J ∘ lift^T_{J∪S} = lift^{S∪T}_J for all coalitions J, S, T.
The proof is by (descending) induction on J. The base case J = I is trivial since all three operators lift^S_J, lift^T_{J∪S}, and lift^{S∪T}_J are then the identity.
For the induction case, assume that the lemma holds for all coalitions of cardinality larger than |J| (in particular for all K ∈ ↑J) and show that it holds for coalition J.

lift^S_J ∘ lift^T_{J∪S}(ψ)
  = { lift^S_J(φ, lift^T_{J∪S}(ψ)) | φ ∈ lift^T_{J∪S}(ψ) }
  = { lift^S_J(φ, lift^T_{J∪S}(ψ)) | φ ∈ { lift^T_{J∪S}(φ', ψ) | φ' ∈ ψ } }
  = { lift^S_J(lift^T_{J∪S}(φ, ψ), lift^T_{J∪S}(ψ)) | φ ∈ ψ },

and considering each K ∈ ↑J:
* if K ∪ S = J ∪ S,

lift^S_J(lift^T_{J∪S}(φ, ψ), lift^T_{J∪S}(ψ))[K]
  = lift^S_K(lift^T_{J∪S}(ψ))   by definition of lift^S_J,
  = lift^S_K(lift^T_{K∪S}(ψ))   as K ∪ S = J ∪ S,
  = lift^{S∪T}_K(ψ)   by induction hypothesis.

* if K ∪ S ≠ J ∪ S and K ∪ S ∪ T = J ∪ S ∪ T,

lift^S_J(lift^T_{J∪S}(φ, ψ), lift^T_{J∪S}(ψ))[K]
  = lift^S_K(lift^T_{J∪S}(φ, ψ)[K ∪ S])   by definition of lift^S_J,
  = lift^S_K(lift^T_{K∪S}(ψ))   by definition of lift^T_{J∪S},
  = lift^{S∪T}_K(ψ)   by induction hypothesis.

* if K ∪ S ≠ J ∪ S and K ∪ S ∪ T ≠ J ∪ S ∪ T,

lift^S_J(lift^T_{J∪S}(φ, ψ), lift^T_{J∪S}(ψ))[K]
  = lift^S_K(lift^T_{J∪S}(φ, ψ)[K ∪ S])   by definition of lift^S_J,
  = lift^S_K(lift^T_{K∪S}(φ[K ∪ S ∪ T]))   by definition of lift^T_{J∪S},
  = lift^{S∪T}_K(φ[K ∪ S ∪ T])   by induction hypothesis.

In summary we get:

lift^S_J(lift^T_{J∪S}(φ, ψ), lift^T_{J∪S}(ψ))[K] =
  lift^{S∪T}_K(ψ)   if K ∪ S ∪ T = J ∪ S ∪ T,
  lift^{S∪T}_K(φ[K ∪ S ∪ T])   if K ∪ S ∪ T ≠ J ∪ S ∪ T,

and so lift^S_J(lift^T_{J∪S}(φ, ψ), lift^T_{J∪S}(ψ)) = lift^{S∪T}_J(φ, ψ), which concludes the proof.
The update function Δ: P × Γ → P is defined component-wise for each coalition. The definition is recursive: given a coalition J, we first update all coalitions K ∈ ↑J greater than J. Then, we update the coalition J as follows. Given the actual move c, let S = Com_J(c) be the coalition that transfers their knowledge to J (through communication). The update is calculated as the lifting of the knowledge of coalition S upon reading a move d that the coalition S cannot distinguish from c, that is, such that the observations for the players in S are the same, d[S] = c[S].
Note that for the grand coalition S = I, the condition d[I] = c[I] is equivalent to d = c.
Given a state p ∈ P and a move c ∈ Γ, let Δ(p, c) = (δ^c_J(p[Com_J(c)]))_{0}⊆ J ⊆ I, where δ^c_J is defined recursively as follows, for all ψ ∈ Ψ_S where S = Com_J(c):
* if S = I is the grand coalition, then we update according to the last observations of I followed by a lifting:

δ^c_J(ψ) = lift^I_J({ δ_M(q, c) | q ∈ ψ });

* otherwise, we lift the knowledge set of S, which is defined recursively:

δ^c_J(ψ) = lift^S_J({ (δ^d_K(φ[Com_K(d)]))_K ∈ ↑S | φ ∈ ψ, d[S] = c[S] }).
We often use δ^c_J(ψ) with argument of the form ψ = h(τ)[S] with τ ∈ Γ^* and S = Com_J(c), which by unfolding the definition of h gives:
* if S = I, then

δ^c_J(h(τ)[S]) = lift^S_J({ δ_M(q, c) | q ∈ h(τ)[S] })
  = lift^S_J({ δ_M(q, c) | q ∈ { δ_M(q_ε, τ) } })   since h(τ)[S] = h(τ)[I]
  = lift^S_J({ δ_M(δ_M(q_ε, τ), c) })
  = lift^S_J({ δ_M(q_ε, τ c) }).

* if S ≠ I, then

δ^c_J(h(τ)[S]) = lift^S_J({ (δ^d_K(φ[Com_K(d)]))_K ∈ ↑S | φ ∈ h(τ)[S], d[S] = c[S] })
  = lift^S_J({ (δ^d_K(h(τ')[Com_K(d)]))_K ∈ ↑S | τ' ∼_S τ, d[S] = c[S] })   since h(τ)[S] = { (h(τ')[K])_K ∈ ↑S | τ' ∼_S τ }
  = lift^S_J({ (δ^d_K(h(τ')[Com_K(d)]))_K ∈ ↑S | τ' d ∼_S τ c })   by Lemma <ref>(<ref>).
For all histories τ∈Γ^* and moves c ∈Γ, we have:
Δ(h(τ),c) = h(τ c).
The function h is a morphism.
We show that Δ(h(τ), c)[J] = h(τ c)[J] for all coalitions J. We proceed by (descending) induction on the cardinality of J. The base case is for |J| = |I|, that is, J = I:

Δ(h(τ), c)[I] = δ^c_I(h(τ)[I])
  = lift^S_I({ δ_M(q_ε, τ c) })   by Remark <ref>, where S = Com_I(c) = I
  = { δ_M(q_ε, τ c) }   since I = I ∪ S and thus lift^S_I is the identity
  = h(τ c)[I]   by definition of h.
For the induction step, consider a coalition J ⊊ I and assume that the property holds for all coalitions of cardinality larger than |J|, in particular for all K ∈ ↑J, that is, Δ(h(τ), c)[K] = h(τ c)[K] for all τ ∈ Γ^* and c ∈ Γ.
By definition of Δ, the induction hypothesis boils down to h(τ c)[K] = δ^c_K(h(τ)[Com_K(c)]).
Given coalition J, history τ, and move c, let S = Com_J(c). We consider several cases:
* if S = I, then

Δ(h(τ), c)[J] = δ^c_J(h(τ)[I])   since J ∪ S = I
  = lift^S_J({ δ_M(q_ε, τ c) })   by Remark <ref>, since S = I
  = { (lift^S_K({ δ_M(q_ε, τ c) }))_K ∈ ↑J }   by definition of lift, since K ∪ S = I = J ∪ S for all K ∈ ↑J
  = { (δ^c_K(h(τ)[I]))_K ∈ ↑J }   by Remark <ref>
  = { (h(τ c)[K])_K ∈ ↑J }   by induction hypothesis, since Com_K(c) = I
  = { (h(τ' d)[K])_K ∈ ↑J | τ' d ∼_I τ c }   since ∼_I is the identity
  = { (h(τ' d)[K])_K ∈ ↑J | τ' d ∼_J τ c }   by Lemma <ref>(<ref>)
  = h(τ c)[J].
* if S ≠ I and J = S, then:

Δ(h(τ), c)[J] = δ^c_J(h(τ)[J])
  = lift^S_J({ (δ^d_K(h(τ')[Com_K(d)]))_K ∈ ↑J | τ' d ∼_J τ c })   by Remark <ref>
  = { (h(τ' d)[K])_K ∈ ↑J | τ' d ∼_J τ c }   since J = J ∪ S and thus lift^S_J is the identity, and by induction hypothesis
  = h(τ c)[J].
* if S ≠ I and J ≠ S, then:

Δ(h(τ), c)[J] = δ^c_J(h(τ)[S])
  = lift^S_J(ψ), where ψ = { (δ^d_K(h(τ')[Com_K(d)]))_K ∈ ↑S | τ' d ∼_S τ c }   by Remark <ref>
  = lift^S_J(ψ), where ψ = { (h(τ' d)[K])_K ∈ ↑S | τ' d ∼_S τ c }   by induction hypothesis
  = { lift^S_J(φ, ψ) | φ ∈ ψ }   by definition of lift, as J ⊊ S
  = { lift^S_J((h(τ' d)[K])_K ∈ ↑S, ψ) | τ' d ∼_S τ c }
  = { lift^S_J((h(τ' d)[K])_K ∈ ↑S, ψ) | τ' d ∼_J τ c }   by Lemma <ref>(<ref>).

We now show that lift^S_J((h(τ' d)[K])_K ∈ ↑S, ψ)[L] = h(τ' d)[L] for all L ∈ ↑J, which concludes the proof, as we get Δ(h(τ), c)[J] = { (h(τ' d)[L])_L ∈ ↑J | τ' d ∼_J τ c } = h(τ c)[J].
We consider several cases:
* if L ⊆ S (i.e., L ∪ S = S), then

lift^S_J((h(τ' d)[K])_K ∈ ↑S, ψ)[L] = lift^S_L(ψ)
  = lift^S_L({ (δ^e_K(h(τ'')[Com_K(e)]))_K ∈ ↑S | τ'' e ∼_S τ c })
  = δ^c_L(h(τ)[Com_L(c)])   by Remark <ref>, since S = Com_L(c)
  = h(τ c)[L]   by induction hypothesis
  = h(τ' d)[L]   by Lemma <ref>, since τ' d ∼_L τ c by Lemma <ref>(<ref>).
* otherwise (i.e., L ∪ S ≠ S), let T = Com_{L∪S}(d). It will be useful to remark that S ⊆ L ∪ S ⊆ T (as coalitions communicate with themselves), thus S ∪ T = T; and that S = Com_J(c) = Com_J(d) ⊆ Com_L(d) (as J ⊆ L), thus Com_L(d) = Com_{L∪S}(d) = T; and by transitivity, we get S ∪ T = Com_L(d). We proceed as follows:
If T = I, then:

lift^S_J((h(τ' d)[K])_K ∈ ↑S, ψ)[L] = lift^S_L(h(τ' d)[L ∪ S])
  = lift^S_L(δ^d_{L∪S}(h(τ')[T]))   by induction hypothesis
  = lift^S_L(lift^T_{L∪S}({ δ_M(q_ε, τ'' e) | τ'' e ∼_I τ' d }))   by Remark <ref>
  = lift^{S∪T}_L({ δ_M(q_ε, τ'' e) | τ'' e ∼_I τ' d })   by Lemma <ref>
  = δ^d_L(h(τ')[Com_L(d)])   by Remark <ref>, as S ∪ T = Com_L(d)
  = h(τ' d)[L]   by induction hypothesis.
* otherwise (i.e., L ∪ S ≠ S and T ≠ I), this case is similar to <ref>:

lift^S_J((h(τ' d)[K])_K ∈ ↑S, ψ)[L] = lift^S_L(h(τ' d)[L ∪ S])
  = lift^S_L(δ^d_{L∪S}(h(τ')[T]))   by induction hypothesis
  = lift^S_L(lift^T_{L∪S}({ (δ^e_K(h(τ'')[Com_K(e)]))_K ∈ ↑T | τ'' e ∼_T τ' d }))   by Remark <ref>
  = lift^{S∪T}_L({ (δ^e_K(h(τ'')[Com_K(e)]))_K ∈ ↑T | τ'' e ∼_T τ' d })   by Lemma <ref>
  = δ^d_L(h(τ')[Com_L(d)])   by Remark <ref>, as S ∪ T = Com_L(d)
  = h(τ' d)[L]   by induction hypothesis.
By Lemma <ref> and Corollary <ref>, the function h is a rectangular morphism for FIP games. By Theorem <ref> (using Lemma <ref> and Lemma <ref>) we can reduce FIP games to a game of perfect information, and thus solving FIP games is decidable. The size of the perfect-information game is n-fold exponential in the size of the FIP game, so this reduction gives a non-elementary upper bound for solving FIP games.

The synthesis problem for FIP games with a parity winning condition is decidable.

Theorem <ref> extends to all (visible) winning conditions for which perfect-information games are decidable, such as mean-payoff, discounted sum, etc.
§ EXPRESSIVENESS
We compare the expressive power of full-information protocols to define indistinguishability relations. We show that FIP are strictly more expressive than the traditional partial-observation setting, which corresponds to FIP with no observer. We further generalize this result and show that the number of observers in a FIP induces a strict hierarchy in terms of expressive power. Finally, the general framework of two-tape automata <cit.> is strictly more expressive than FIP (with an arbitrary number of observers).
Two-tape automata are automata over the alphabet Γ × Γ that recognize synchronous relations over Γ, that is, relations between words of the same length. The relation recognised by such an automaton A consists of all pairs of words c_1 c_2 … c_ℓ, c'_1 c'_2 … c'_ℓ ∈ Γ^* such that (c_1, c_1') (c_2, c_2') … (c_ℓ, c_ℓ') ∈ L(A). With a slight abuse of notation, we also denote this relation by L(A).
We say that a synchronous relation is regular if it is recognised by a two-tape automaton. It is decidable in polynomial time whether the relation recognised by a given two-tape automaton is an indistinguishability relation <cit.>.
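For concreteness, a two-tape DFA can be run on a synchronous pair of words as in the following Python sketch (the dictionary encoding of the automaton is our own assumption); the second snippet instantiates the relation of a player who does not distinguish a and b but observes c:

    def related(aut, u, v):
        """Return True iff the pair (u, v) of equal-length words is in the
        synchronous relation recognised by the two-tape DFA aut, given as
        (delta, q0, accepting) with delta[(state, (a, b))] -> state."""
        delta, q, accepting = aut
        assert len(u) == len(v)
        for pair in zip(u, v):
            q = delta[(q, pair)]
        return q in accepting

    # A player who does not distinguish a and b, but can observe c:
    beta = {"a": "*", "b": "*", "c": "c"}
    delta = {(q, (x, y)): ("acc" if q == "acc" and beta[x] == beta[y] else "rej")
             for q in ("acc", "rej") for x in "abc" for y in "abc"}
    aut = (delta, "acc", {"acc"})
    assert related(aut, "abba", "baab")      # same observation sequences
    assert not related(aut, "abc", "acb")    # c occurs at different positions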
As a first example (inspired by <cit.>), consider the following scenario over a set of moves Γ = {a, b, c}: there is an observer with perfect information (their indistinguishability relation is recognised by the two-tape automaton of <ref>), and a player who does not distinguish a and b, but can observe c (their indistinguishability relation is recognised by the two-tape automaton of <ref>). In a FIP where the player communicates with the observer on c, the indistinguishability relation is recognised by the two-tape automaton of <ref>.
Informally, two histories are indistinguishable for the FIP if they are equal up to the last c. The induced information tree has unbounded branching: all histories of the same length n that do not contain c are indistinguishable, hence u_n = {a, b}^n is an information set, and for every history τ ∈ u_n the history τ c forms a singleton information set. Therefore u_n has at least 2^n successors, for every n.
A regular observation function (or equivalently, a FIP with no observer) induces an information tree with bounded branching <cit.>, which implies that FIP are strictly more expressive than the traditional partial-observation games.
We show that two-tape automata are strictly more expressive than FIP. First we show that two-tape automata can recognise the indistinguishability relation of a FIP.

Every indistinguishability relation defined by a FIP can be recognised by a two-tape automaton.
Given a FIP F = ⟨I, (M_i)_i∈I, (R_σ)_σ∈Σ⟩ with n observers, where the Mealy machines M_i define observation functions β_i for each player i ∈ I, and the relations R_σ define the communication links on observation σ, we construct a two-tape automaton A that defines the indistinguishability relation ∼ of the FIP F as follows.
For each player i ∈ I, consider the two-tape automaton A_i that accepts a pair (τ, τ') of histories if β̂_i(τ) = β̂_i(τ'). The automaton A_i also stores the last observation produced by β_i (if it is the same in the two input histories).
Now consider the synchronized product of the automata A_i for i ∈ I and its transition relation δ. Construct the transition relation δ' over the same state space as δ, where given p = δ(q, c), we define r = δ'(q, c) as follows.
Consider the least set J ⊆ I containing all i ∈ I such that either the entry p[i] is the rejecting state, or (i, j) ∈ R_σ and j ∈ J, where σ is the observation of player i stored in p.
The state r is defined by r[i] = q_rej if i ∈ J, and r[i] = p[i] otherwise, where q_rej is an absorbing rejecting state.
A state q is accepting in A if the entry q[0] corresponding to player 0 is accepting. Intuitively, two histories are indistinguishable if player 0 receives the same observations for both of them, and the two histories are indistinguishable for all players with whom player 0 communicates (possibly indirectly).
For each Mealy machine M_i, construct the two-tape automaton A_i recognising the relation {(τ, τ') | β̂_i(τ) = β̂_i(τ')} induced by equality of observation sequences for player i ∈ I.
Consider the synchronized product A of, for all i ∈ I, the two-tape automaton A_i and two copies of M_i as two-tape automata, one reading the first tape, and the other one reading the second tape. In A, we can determine which players distinguish the two histories based on their own observation, and for the other players what was the last observation (which must be the same on both histories). The last step is to modify the transition relation of A to take into account the communication. Given the last observation for every player, we can construct the (transitive closure of the) communication links between the players. If such a link connects some player i to some player j and the state of player j (i.e., the state of A_j in the product) is rejecting, then we replace the state of player i by the rejecting sink of A_i.
The accepting states are those where the component of A_0 (the automaton for player 0) is accepting (i.e., all states except the rejecting sink).
The construction is illustrated in <ref>, where the state q_3 corresponds to (q_1, q_2) (from the automata A_0 of <ref> and A_1 of <ref>), and q_4 corresponds to (q_1, q_rej), where player 1 has distinguished the histories, but player 0 did not, as there was no communication with player 1 yet. In q_4, whenever a communication occurs (via c), the histories get distinguished by player 0.
We now show that two-tape automata are strictly more expressive than FIP.
Consider the move alphabet Γ = {a, b, c, #} where # is used as a separator, and let two histories τ, τ' ∈ Γ^* be indistinguishable if their suffixes after the last position where they both contain a separator # (or from the initial position if no such position exists) are equal, or none of them contains the letter c.
Intuitively, along a history the symbol # separates blocks of letters over {a, b, c}. Within a block the letters a and b are indistinguishable until a letter c occurs, which reveals the current block (similar to the example of <ref>). Note that the letters a and b in all previous blocks remain indistinguishable forever.
This indistinguishability relation is defined by the two-tape automaton in <ref>. Intuitively, it cannot be defined by a FIP because whenever a letter c occurs in a history, player 0 would need to communicate with some observer who can see the sequence of a's and b's in the current block (player 0's own observations are not sufficient to define the indistinguishability relation, as it has unbounded branching). However, we can never reuse the same observer for the next block because communicating with such an observer would reveal information to which player 0 does not have access. We need a fresh observer for each block, and since a history may contain an arbitrarily large number of blocks, a finite number of observers would not be sufficient.
There exists an indistinguishability relation defined by a two-tape automaton that cannot be defined by any FIP (no matter the number of players).
Consider the two-tape automaton in <ref>.
We note that, for all n ∈ ℕ, the set of words {a, b}^n is such that for all histories τ ∈ Γ^*, if (τ, τ) leads to the initial state q_1, then the words in the set u_τ = τ{a, b}^n are pairwise indistinguishable. Moreover, if a letter c occurs, the words in τ{a, b}^n c become all pairwise distinguishable.
Towards a contradiction, assume that there exists a FIP F that defines ∼ (i.e., such that ∼_0 = ∼). Let N be the number of players in F.
For every history τ ∈ Γ^*, consider the communication set Com(τ) ⊆ I containing all players with which player 0 may communicate (directly or indirectly) along continuations τw for all w ∈ Γ^* (which is tedious to define formally).
We construct a sequence τ_0, τ_1, …, τ_N of histories τ_n ∈ Γ^* such that for all n ≥ 1:
* the communication sets are strictly decreasing, Com(τ_n) ⊊ Com(τ_n-1), and
* the communication sets are nonempty, Com(τ_n) ≠ ∅.

Since the size of the communication sets is bounded by the total number N of players, this implies a contradiction for τ_N.
We now show how to construct the histories τ_n.
First, as an intermediate proposition, we show that if (τ_n, τ_n) leads to the initial state q_1 in the two-tape automaton, then Com(τ_n) ≠ ∅.
For k > |Σ|^N, consider the histories of the form τ_n {a, b}^k c. Since there are |Σ|^N tuples of observations for N players, by the pigeonhole principle there exist two sequences w, w' ∈ {a, b}^k such that the two histories τ_n w c and τ_n w' c have the same (last) observation for all players, β_i(τ_n w c) = β_i(τ_n w' c) for all players i ∈ I.
In particular, as τ_n w ∼_0 τ_n w' are indistinguishable histories for player 0, we have β̂_0(τ_n w c) = β̂_0(τ_n w' c).
Hence, since τ_n w c and τ_n w' c are distinguishable, upon reading the last c there must be a (direct or indirect) communication between player 0 and some player i ∈ I who distinguishes the two histories, τ_n w c ≁_i τ_n w' c.
It follows by definition of Com(τ_n) that i ∈ Com(τ_n), thus Com(τ_n) ≠ ∅.
Since all players have the same (last) observation on τ_n w c and τ_n w' c, we can assume w.l.o.g. that the distinction occurred earlier, τ_n w ≁_i τ_n w'.
We now present the construction.
Let τ_0 = ε, and construct τ_n+1 from τ_n such that Com(τ_n+1) ⊊ Com(τ_n) and (τ_n+1, τ_n+1) leads to the initial state, inductively as follows.
Consider the history τ_n+1 = τ_n w d and note that for all continuations z ∈ Γ^*, the histories τ_n+1 z ∼ τ_n w' d z are indistinguishable because the pair (τ_n w d, τ_n w' d) leads to the state q_1.
As we know that the histories τ_n w ≁_i τ_n w' are distinguishable for player i, so are the histories τ_n w d z ≁_i τ_n w' d z for all continuations z ∈ Γ^*.
Since τ_n w d z ∼ τ_n w' d z for all z ∈ Γ^*, player 0 cannot communicate with player i after the history τ_n+1 = τ_n w d (as otherwise it would let player 0 distinguish indistinguishable histories). Hence i ∉ Com(τ_n+1).
Since τ_n is a prefix of τ_n+1, we have Com(τ_n+1) ⊆ Com(τ_n), and since i ∈ Com(τ_n) we conclude that Com(τ_n+1) ⊊ Com(τ_n); it is easy to check that (τ_n+1, τ_n+1) leads to the initial state q_1 as required.
The same idea can be used to show that increasing the number of players in a FIP increases the expressive power, that is, for all n ≥ 2, there exists an indistinguishability relation that can be defined by a FIP with n players but not by any FIP with n-1 players. <ref> shows a two-tape automaton that defines such an indistinguishability relation for n = 4.
Intuitively, it is obtained by “unfolding” the automaton of <ref> into n-1 copies, redirecting the transitions on (#, #) to the next copy of the automaton, except in the last copy where the transitions on (#, #) are self-loops. The reader can verify that player 0 needs 3 other players to track the moves in the first (at most) 3 blocks separated by # along a history.
Given a move alphabet Γ, denote by R_n the set of indistinguishability relations definable by a FIP with n players.

The number of players in FIP induces a strict hierarchy: R_n ⊊ R_n+1 for all n ≥ 1.

<ref> illustrates the construction of a witness relation ∼^n such that ∼^n ∈ R_n+1 and ∼^n ∉ R_n. The proof that ∼^n ∉ R_n follows the same lines as the proof of Lemma <ref>, and the proof that ∼^n ∈ R_n+1 is straightforward.

The expressive power R_n of FIP with n players forms a strict hierarchy, which is strictly contained in the set R_2DFA of regular indistinguishability relations (definable by two-tape automata):

R_1 ⊊ R_2 ⊊ … ⊊ R_n ⊊ … ⊊ R_2DFA.
|
http://arxiv.org/abs/2307.00919v1
|
20230703104134
|
Why do CNNs excel at feature extraction? A mathematical explanation
|
[
"Vinoth Nandakumar",
"Arush Tagade",
"Tongliang Liu"
] |
cs.CV
|
[
"cs.CV",
"cs.AI",
"I.2.0; I.4.0"
] |
|
http://arxiv.org/abs/2307.02454v1
|
20230705172717
|
Transgressing the boundaries: towards a rigorous understanding of deep learning and its (non-)robustness
|
[
"Carsten Hartmann",
"Lorenz Richter"
] |
cs.LG
|
[
"cs.LG"
] |
Transgressing the boundaries: towards a rigorous understanding of deep learning and its (non-)robustness
Carsten Hartmann, Lorenz Richter
August 1, 2023
================================================================================
The recent advances in machine learning in various fields of applications can be largely attributed to the rise of deep learning (DL) methods and architectures. Despite being a key technology behind autonomous cars, image processing, speech recognition, etc., a notorious problem remains the lack of theoretical understanding of DL and related interpretability and (adversarial) robustness issues. Understanding the specifics of DL, as compared to, say, other forms of nonlinear regression methods or statistical learning, is interesting from a mathematical perspective, but at the same time it is of crucial importance in practice: treating neural networks as mere black boxes might be sufficient in certain cases, but many applications require waterproof performance guarantees and a deeper understanding of what could go wrong and why it could go wrong.
It is probably fair to say that, despite being mathematically well founded as a method to approximate complicated functions, DL is mostly still more like modern alchemy that is firmly in the hands of engineers and computer scientists. Nevertheless, it is evident that certain specifics of DL that could explain its success in applications demand systematic mathematical approaches.
In this work, we review robustness issues of DL and particularly bridge concerns and attempts from approximation theory to statistical learning theory. Further, we review Bayesian Deep Learning as a means for uncertainty quantification and rigorous explainability.
§ INTRODUCTION
According to <cit.>, machine learning is a “marriage of statistics and computer science that began in artificial intelligence”. While statistics deals with the question of what can be inferred from data given an appropriate statistical model, computer science is concerned with the design of algorithms to solve a given computational problem that would be intractable without the help of a computer.
Artificial intelligence and, specifically, machine learning have undergone substantial developments in recent years that have led to a huge variety of successful applications, most of which would not have been possible with alternative approaches. In particular, advances in deep learning (i.e. machine learning relying on deep neural networks) have revolutionized many fields, leading, for instance, to impressive achievements in computer vision (e.g. image classification, image segmentation, image generation), natural language processing (semantic text understanding, text categorization and text creation, automatic question answering) and reinforcement learning (agents and games, high-dimensional optimization problems); cf. <cit.> and the references therein.
Moreover, deep learning is nowadays increasingly applied in multiple scientific branches as an acceptable tool for conducting inference from simulated or collected data. For example, in the medical field, the development of drugs <cit.> or the analysis of tomography <cit.> are enhanced with deep learning. In molecular simulations, ground-state properties of organic molecules are predicted <cit.>, equilibrium energies of molecular systems are learnt <cit.> or multi-electron Schrödinger equations are solved <cit.>. Speaking of which, the numerical treatment of high-dimensional partial differential equations with neural networks has undergone vast improvements <cit.>, allowing for applications in almost all sciences. In biology, cell segmentation and classification have been studied with certain convolutional neural networks <cit.>, in signal processing speech separation is approached with temporal versions of these <cit.>, and in finance relevant stock pricing models are solved with deep learning <cit.>. In remote sensing, temporal recurrent neural networks are for instance used for crop classification <cit.> and image segmentation promises automatic understanding of the increasing amount of available satellite data <cit.>. The list of successful deep learning applications is long and there are many more fields in which they have made significant contributions and still promise exciting advances that we shall omit here for the sake of brevity.
It is probably fair to say that, like statistics, deep learning (or machine learning in general) aims at drawing inferences from data. But unlike statistics, it avoids being overly explicit regarding the underlying model assumptions. In statistics, either the model assumptions or the complete model are set prior to making inferences, whereas the neural networks in deep learning are mostly seen as black boxes that are essentially able to `learn' the model. In this sense, deep learning delegates what <cit.> called the “problem of the reference class” to a computer algorithm, namely, the problem of deciding what model class to use when making a prediction of a particular instance or when assigning a probability to a particular event. While this might be understandable – or even desirable – from the user's point of view, it poses risks and might bring dangerous side-effects:
* In most of the applied deep learning models, there is a lack of explainability, meaning that even though their inference from data might work well, the mechanisms behind the predictions are not well understood. As the ambition in all sciences is to understand causal relationships rather than pure correlations, this might neither be satisfying nor lead to further, deeper understanding in corresponding fields.
* Without understanding the details of a model, potential robustness issues might not be realized either. For example, who guarantees that certain deep learning achievements easily translate to slightly shifted data settings and how can we expect neural network training runs to converge consistently?
* Finally, often the ambition of a prediction model to generalize to unseen data is stated on an `average' level and we cannot make robust statements on unexpected events, which might imply dangerous consequences in risk-sensitive applications. In general, there is no reliable measure for prediction (un-)certainty, which might lead to blind beliefs in the model output.
Even when it comes to the success stories of deep learning, many achievements and properties of the models can simply not be explained theoretically, e.g. why does one of the most naive optimization attempts, stochastic gradient descent, work so well, why do models often generalize well even though they are powerful enough to simply memorize the training data and why can high-dimensional problems be addressed particularly efficiently? Not only is it important from a practical point of view to understand these phenomena theoretically, as a deeper understanding might motivate and drive novel approaches leading to even more successful results in practice, but it is also important for getting a grip on the epistemology of machine learning algorithms. This then might also advance pure `trial and error' strategies for architectural improvements of neural networks that sometimes seem to work mostly due to extensive hyperparameter finetuning and favorable data set selections; cf. <cit.>.
In this article, we will argue that relying on the tempting black box character of deep learning models can be dangerous and it is important to further develop a deeper mathematical understanding in order to obtain rigorous statements that will make applications more sound and more robust. We will demonstrate that there are still many limitations in the application of artificial intelligence, but mathematical analysis promises prospects that might at least partially overcome these limitations.
We further argue that, if one accepts that explainable DL must not be understood in the sense of the deductive-nomological model of scientific explanation, Bayesian probability theory can provide a means to explain DL in a precise statistical (abductive) sense.
In fact, a comprehensive theory should guide us towards coping with the potential drawbacks of neural networks, e.g. the lack of understanding why certain networks architectures work better than others, the risk of overfitting data, i.e. not performing well on unseen data, or the lack of knowledge on the prediction confidences, in particular, leading to overconfident predictions on data far from the training data set.
Even though we insist that understanding deep learning is a holistic endeavor that comprises the theoretical (e.g. approximation) properties of artificial neural networks in combination with the practical numerical algorithms that are used to train them, we refrain from going beyond the mathematical framework and exploring the epistemological implications of this framework. The epistemology of machine learning algorithms is a relatively new and dynamic field of research, and we refer to recent papers by <cit.> and <cit.>, and the references given there.
§.§ Definitions and first principles
We can narrow down the definition of machine learning to one line by saying that its main intention is to identify functions that map input data x ∈𝒳 to output data y ∈𝒴 in some good way, where 𝒳 and 𝒴 are suitable spaces, often identified with ^d and , respectively. In other words, the task is to find a function f:𝒳→𝒴 such that
f(x) = y.
To illustrate, let us provide two stereotypical examples that appear in practice. In a classification task, for instance, x ∈ 𝒳 could represent an image (formalized as a matrix of pixels, or, in a flattened version, as a vector x ∈ ℝ^d) and y ∈ 𝒴 = {1, …, K} could be a class describing the content of the image. In a regression task, on the other hand, one tries to predict real numbers from the input data, e.g. given historical weather data and multiple measurements, one could aim to predict how much it will rain tomorrow, and y ∈ 𝒴 = ℝ_≥ 0 would be the amount of rain in milliliters.
From our simple task definition above, two questions arise immediately:
* How do we design (i.e. find) the function f?
* How do we measure performance, i.e. how do we quantify deviations from the desired fit in (<ref>)?
Relating to question 1, it is common to rely on parametrized functions f(x)=f_θ(x), for which a parameter vector θ∈^p specifies the actual function. Artificial neural networks (ANNs) like deep neural networks are examples of such parametrized functions which enjoy specific beneficial properties, for instance in terms of approximation and optimization as we will detail later on. The characterizing feature of (deep) neural networks is that they are built by (multiple) concatenations of nonlinear and affine-linear maps:
We define a feed-forward neural network Φ_σ: ℝ^d → ℝ^m with L layers by

Φ_σ(x) = A_L σ(A_L-1 σ(⋯ σ(A_1 x + b_1) ⋯) + b_L-1) + b_L,

with matrices A_l ∈ ℝ^n_l × n_l-1, vectors b_l ∈ ℝ^n_l, 1 ≤ l ≤ L, and a nonlinear activation function σ: ℝ → ℝ that is applied componentwise. Clearly, n_0 = d and n_L = m, and the collection of matrices A_l and vectors b_l, called weights and biases, comprises the learnable parameters θ.
In practice, one often chooses σ(x) = max{ x, 0} or σ(x) = (1 + e^-x)^-1, since their (sub)derivatives can be explicitly computed and they enjoy a universal approximation property <cit.>.
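A minimal NumPy sketch of <Ref> (an illustration of the definition, not a training-ready implementation) reads:

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def feed_forward(x, weights, biases, sigma=relu):
        """Phi_sigma(x) = A_L sigma(... sigma(A_1 x + b_1) ...) + b_L,
        with weights = [A_1, ..., A_L] and biases = [b_1, ..., b_L]."""
        for A, b in zip(weights[:-1], biases[:-1]):
            x = sigma(A @ x + b)                # hidden layers with activation
        return weights[-1] @ x + biases[-1]     # final affine layer, no activation

    # A random network with d = 3 inputs, one hidden layer of width 5, m = 1 output:
    rng = np.random.default_rng(0)
    Ws = [rng.standard_normal((5, 3)), rng.standard_normal((1, 5))]
    bs = [rng.standard_normal(5), rng.standard_normal(1)]
    print(feed_forward(rng.standard_normal(3), Ws, bs))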
Even though the organization of an ANN in layers is partly inspired by biological neural networks, the analogy between ANNs and the human brain is questionable and often misleading when it comes to understanding the specifics of machine learning algorithms, such as its ability to generalize <cit.>, and it will therefore play no role in what follows. We rather regard an ANN as a handy representation of the parametrized function f_θ that enjoys certain mathematical properties that we will discuss subsequently. (Note that closeness in function space does not necessarily imply closeness in parameter space and vice versa as has been pointed out in <cit.>.) Clearly, alternative constructions besides the one stated in <Ref> are possible and frequently used, depending on the problem at hand.
§.§ Probabilistic modelling and mathematical perspectives
Now, for actually tuning the parameter vector θ in order to identify a good fit as indicated in (<ref>), the general idea in machine learning is to rely on training data (x_n, y_n)_n=1^N ⊂ 𝒳 × 𝒴. For this, we define a loss function ℓ: 𝒴 × 𝒴 → ℝ_≥ 0 that measures how much our predictions, i.e. function outputs f(x_n), deviate from their targets y_n. Given the training sample, our algorithm can now aim to minimize the empirical loss

ℒ_N(f) = 1/N ∑_n=1^N ℓ(f(x_n), y_n),
i.e. an empirical average over all data points. Relating to question 2 from above, however, it turns out that it is not constructive to measure approximation quality by how well the function f can fit the available training data, but rather to focus on the ability of f to generalize to yet unseen data. To this end, the perspective of statistical learning theory assumes that the data is distributed according to an (unknown) probability distribution ℙ on 𝒳 × 𝒴. The training data points x_n and y_n should then be seen as realizations of the random variables X and Y, which admit a joint probability distribution, so

(X, Y) ∼ ℙ.
We further assume that all pairs (x_n,y_n) are distributed identically and independently from one another (i.i.d.). The expectation over all random (data) variables of this loss is then called expected loss, defined as
ℒ(f) = 𝔼[ℓ(f(X), Y)],

where the expectation 𝔼[·] is understood as the average over all possible data points (X, Y). The expected loss measures how well the function f performs on data from ℙ on average, assuming that the data distribution does not change after training. It is the general intention in machine learning to have the expected loss as small as possible.
To fix ideas, let us consider a toy example in d = 1. We assume that the true function is given by f(x) = sin(2π x) and that the data x is distributed uniformly on the interval [0, 2]. In <Ref> we display the function f along with N = 100 randomly drawn data points (x_n, y_n)_n=1^N, where y_n is once given by the deterministic mapping y_n = f(x_n) and once by the stochastic mapping y_n = f(x_n) + η_n, where η_n ∼ 𝒩(0, 0.01) indicates noise, with 𝒩(μ, σ^2) denoting a normal (i.e. Gaussian) distribution with mean μ and variance σ^2. The stochastic mapping induces the probability measure ℙ, i.e. the joint distribution of the random variables (X, Y) ∈ 𝒳 × 𝒴, which we plot approximately in the right panel. Note that ℙ can usually not be written down analytically (even for simple toy problems).
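The toy data can be generated in a few lines of Python (our own sketch; note that even the true regression function incurs an expected loss equal to the noise variance on the noisy data):

    import numpy as np

    rng = np.random.default_rng(42)
    f = lambda x: np.sin(2 * np.pi * x)   # the true function

    N = 100
    x = rng.uniform(0.0, 2.0, size=N)
    y_det = f(x)                                     # deterministic labels
    y_noisy = f(x) + rng.normal(0.0, 0.1, size=N)    # N(0, 0.01) noise (std 0.1)

    # Empirical squared loss of the true function on the noisy sample:
    # it concentrates around the noise variance 0.01 rather than around zero.
    print(np.mean((f(x) - y_noisy) ** 2))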
For a further analysis, let us give names to three different functions that minimize a given corresponding loss (assuming for simplicity that all minima are attained, even though they may not be unique):

f^B ∈ argmin_f ∈ ℳ(𝒳, 𝒴) ℒ(f),    f^* ∈ argmin_f ∈ ℱ ℒ(f),    f_N ∈ argmin_f ∈ ℱ ℒ_N(f).
The first quantity, f^B, is the theoretically optimal function among all mathematically reasonable (or: measurable) functions (cf. Appendix <ref>), denoted here by the set ℳ(𝒳, 𝒴), the second quantity, f^*, is the optimal function in a specified function class ℱ (e.g. the class of neural networks), and the third quantity, f_N, is the function that minimizes the empirical error on the training data.
With regard to the second quantity above, finding a suitable function class ℱ requires balancing two conflicting goals: on the one hand, the function class should be sufficiently rich to enjoy the universal approximation property, i.e. the ability to represent any theoretically optimal function f^B up to a sufficiently small approximation error that is still considered acceptable.[What is considered an acceptable approximation error depends on the problem at hand.] On the other hand, the function class should not be overly complex, in order to avoid overfitting which may lead to a function f (e.g. a classifier) that poorly generalizes beyond known data.
Let us make this point more precise, and let us say that we have some training algorithm that has produced a function f on the training data (x_n, y_n)_n=1^N (see Appendix <ref> for details).
We can decompose the deviation of the function f from the theoretically optimal solution f^B into four different terms that correspond to three different error contributions – generalization, optimization and approximation error:
ℒ(f) - ℒ(f^B) = (ℒ(f) - ℒ_N(f))_(generalization error) + (ℒ_N(f) - ℒ_N(f^*))_(optimization error) + (ℒ_N(f^*) - ℒ(f^*))_(generalization error) + (ℒ(f^*) - ℒ(f^B))_(approximation error).
Specifically, if we set f = f_N, the above decomposition reveals what is known as the bias-variance tradeoff, namely, the decomposition of the total error (as measured in terms of the loss) into a contribution that stands for the ability of the function f^* ∈ ℱ to best approximate the truth f^B (bias) and a contribution that represents the ability to estimate the approximant f^* from finitely many observations (variance), namely[Here we loosely understand the word `truth' in the sense of empirical adequacy following the seminal work of van Fraassen <cit.>, which means that we consider the function f^B to be empirically adequate, in that there is no other function (e.g. classifier or regression function) that has a higher likelihood relative to all unseen data in the world; see also <cit.>. The term `truth' is typical jargon in the statistical learning literature, and one should not take it as a scientific realist's position.]
ℒ(f_N) - ℒ(f^B) = (ℒ(f_N) - ℒ(f^*))_(estimation error (variance)) + (ℒ(f^*) - ℒ(f^B))_(approximation error (bias)).
We should stress that it is not fully understood yet in which cases overfitting leads to poor generalization and prediction properties of an ANN as there are cases in which models with many (nonzero) parameters that are perfectly fitted to noisy training data may still have good generalization skills; cf. <cit.> or Section <ref> below for further explanation.
A practical challenge of any function approximation and any learning algorithm is to minimize the expected loss by only using a finite amount of training data, but without knowing the underlying data distribution ℙ. In fact, one can show there is no universal learning algorithm that works well for every data distribution (no free lunch theorem). Instead, any learning algorithm (e.g. for classification) with robust error bounds must necessarily be accompanied by a priori regularity conditions on the underlying data distribution, e.g. <cit.>.[As a consequence, deep learning does not solve Reichenbach's reference class problem or gives any hint to the solution of the problem of induction, but it is rather an instance in favor of the Duhem-Quine thesis, in that any learning algorithm that generalizes well from seen data must rely on appropriate background knowledge <cit.>; cf. <cit.>.]
Let us come back to the loss decomposition (<ref>). The three types of errors hint at different perspectives that are important in machine learning from a mathematical point of view:
* Generalization: How can we guarantee generalizing to unseen data while relying only on a finite amount of training data?
* Function approximation: Which neural network architectures do we choose in order to gain good approximation qualities (in particular in high-dimensional settings)?
* Optimization: How do we optimize a complicated, nonconvex function, like a neural network?
Besides these three, there are more aspects that cannot be read off from equation (<ref>), but turn out to become relevant in particular in certain practical applications. Let us stress the following two:
* Numerical stability and robustness: How can we design neural networks and corresponding algorithms that exhibit some numerical stability and are robust to certain perturbations?
* Interpretability and uncertainty quantification: How can we explain the input-output behavior of certain complicated, potentially high-dimensional function approximations and how can we quantify uncertainty in neural network predictions?
In this article, we will argue that perspectives 4 and 5 are often overlooked, but still in particular relevant for a discussion on the limitations and prospects in machine learning. Along these lines, we will see that there are promising novel developments and ideas that advance the aspiration to put deep learning onto more solid grounds in the future.
The article is organized as follows. In <Ref> we will review some aspects of neural networks, admittedly in a very a non-exhaustive manner, where in particular Sections <ref>–<ref> will correspond to perspectives 1–3 stated above. <Ref> will then demonstrate why (non-)robustness issues in deep learning are particularly relevant for practical applications, as illustrated by adversarial attacks in <Ref>. We will argue in <Ref> that successful adversarial attacks on (deep) neural networks require careful thinking about worst-case analyses and uncertainty quantification. <Ref> therefore relates to perspectives 4 and 5 from above. Next, <Ref> will introduce the Bayesian perspective as a principled framework to approach some of the robustness issues raised before. After introducing Bayesian neural networks, we will discuss computational approaches in <Ref> and review further challenges in <Ref>. Finally, in <Ref> we will draw a conclusion.
§ DEEP NEURAL NETWORKS: ODDITIES AND SOME SPECIFICS
One of the key questions regarding machine learning with (deep) neural networks is related to their ability to generalize beyond the data used in the training step (cf. perspective 1 in <Ref>). The idea here is that a trained ANN applies the regularities found in the training data (i.e. in past observations) to future or unobserved data, assuming that these regularities are persistent. Without dwelling on technical details, it is natural to understand the training of a neural network from a probabilistic viewpoint, with the trained ANN being a collection of functions, that is characterized by a probability distribution over the parameter space, rather than by a single function.
This viewpoint is in accordance with how the training works in practice, since training an ANN amounts to minimizing the empirical loss given some training data, as stated in equation (<ref>), and this minimization is commonly done by some form of stochastic gradient descent (SGD) in the high-dimensional loss landscape[The empirical risk J_N(θ)=ℒ_N(f_θ), considered as a function of the parameters θ is often called the loss landscape or energy landscape.], i.e. batches of the full training set are selected randomly during the training iterations (see also <Ref>).
As a consequence, the outcome of the training is a random realization of the ANN and one can assign a probability distribution to the trained neural network.
§.§ Generalization, memorization and benign overfitting
If we think of the parametrized function that represents a trained neural network as a random variable, it is natural to assign a probability measure Q(f) to every regression function f.
So, let Q^B = Q(f^B) be the target probability distribution (i.e. the truth), Q^* = Q(f^*) the best approximation, and Q_N = Q(f_N) the distribution associated with the N training points that are assumed to be randomly drawn from ℙ.
We call f(t)∈ℱ the function approximation that is obtained after running the parameter fitting until time t (see Sec. <ref> and Appendix <ref> below for further details) – f(t) therefore models the training for a specified amount of training iterations. Ideally, one would like to see that Q(f(t)) resembles either the truth Q^B or its best approximation Q^* as the training proceeds; however, it has been shown that trained networks often memorize (random) training data in that <cit.>
lim_t→∞ Q(f(t)) = Q_N .
In this case, the training lets the model learn the data, which amounts to memorizing facts without a pronounced ability to generate knowledge. It is interesting to note that this behavior is consistently observed when the network is trained on a completely random relabelling of the true data, in which case one would not expect outstanding generalization capabilities of the trained ANN <cit.>. Finally, it may also happen that Q(f(t)) does not converge to Q_N, in which case it diverges and thus gives no information whatsoever about the truth.
A phenomenon that is related to memorizing the training data and that is well known in statistical learning is called overfitting. It amounts to the trained function fitting the available data (too) well, while not generalizing to unseen data, as illustrated in the bottom left panel of <Ref>. The classical viewpoint in statistics is that when the function has far more parameters than there are data points (as is common with deep neural networks) and if the training time is too large, overfitting might happen, as illustrated in <Ref>. An indication of overfitting can be that the generalization error grows strongly while the empirical risk is driven almost to zero. To prevent this, an alternative to ever increasing the number of training steps t while the training data remains the same is early stopping. It has been shown (e.g. <cit.>) that the empirical distribution can be close to the truth (in which case the ANN generalizes well), if the training is stopped after a sufficiently long, but not too long training phase. Figure <ref> shows the typical shape of the discrepancy between the trained network and the truth.
However, it turns out that there are also cases of benign overfitting, in which an ANN shows remarkable generalization properties, even though it is essentially fitting the noise in the training data. The phenomenon of benign overfitting, also known by the name of double descent, describes the empirical observation that the generalization error, as measured by the true risk, decreases again as the number of parameters is increased – despite severe overfitting (see Figure <ref>). Note that there is no contradiction between the double descent phenomenon and the traditional U-shaped risk curve shown in Figure <ref>, as they hold under different circumstances and the double descent requires pushing the number of parameters beyond a certain (fairly large) threshold.
It has been conjectured that this phenomenon is related to a certain low rank property of the data covariance; nevertheless a detailed theoretical understanding of the double descent curve for a finite amount of training data is still lacking as the available approximation results cannot be applied in situations in which the number of parameters is much higher than the number of data points. Interestingly, double descent has also been observed for linear regression problems or kernel methods, e.g. <cit.>. Thus it does not seem to be a unique feature of ANNs; whether or not it is a more typical phenomenon for ANNs is an open question though <cit.>; see also <cit.> for an early reference in which the double descent feature of ANNs has been first described (for some models even multiple descent curves are conjectured <cit.>).
§.§ Curse of dimensionality
An important aspect of function approximation (and therefore related to perspective 2 stated in <Ref>) is the question of how complicated the function f_θ or, equivalently, how rich the function class ℱ needs to be. This becomes particularly interesting if the state space is high-dimensional and a notorious challenge is known as the curse of dimensionality. It describes the phenomenon that approximating a target function f^B or the corresponding probability distribution Q^B=Q(f^B) when 𝒳 is high-dimensional (i.e. when the number of degrees of freedom is large) requires a huge amount of training data to determine a regression function f_θ that is able to approximate the target.
As a rule of thumb, approximating a function f^B on 𝒳=^d or the associated probability measure Q^B with an accuracy of ϵ needs about
N=ϵ^-Ω(d)
sample points in order to determine roughly the same number of a priori unknown parameters θ, thereby admitting an exponential dependence on the dimension.[Here we use the Landau notation Ω(d) to denote a function of d that asymptotically grows like α· d for some constant α>0; often α=1,2.]
It is easy to see that the number of parameters needed and the size of the training set become astronomical for real-world tasks.
As an example, consider the classification of handwritten digits. The MNIST database (Modified National Institute of Standards and Technology database) contains a dataset of about 60 000 handwritten digits that are stored in digital form as 28× 28 pixel greyscale images <cit.>. If we store only the greyscale values for every image as a vector, then, the dimension of every such vector will be 28^2=784. By today's standards, this is considered a small system, yet it is easy to see that training a network with about 10^784 parameters and roughly the same number of training data points is simply not feasible, especially as the training set contains less than 10^5 data points.
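As a quick back-of-the-envelope computation (our own illustration of the rule of thumb N = ϵ^-Ω(d), taking the exponent to be exactly d and accuracy ϵ = 0.1):

    import math

    d, eps = 784, 0.1                     # MNIST-sized inputs, target accuracy
    log10_N = d * math.log10(1.0 / eps)   # log10 of eps**(-d)
    print(f"N ~ 10^{log10_N:.0f} samples")   # N ~ 10^784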
In practice, the number of ANN parameters and the number of data points needed to train a network can be much smaller. In some cases, this inherent complexity reduction present in deep learning can be mathematically understood. Clearly, when the target function is very smooth, symmetric or concentrated, it is possible to approximate it with a parametric function having a smaller number of parameters. The class of functions that can be approximated by an ANN without an exponentially large number of parameters, however, is considerably larger; for example, Barron-regular functions that form a fairly large class of relevant functions can be approximated by ANNs in arbitrary dimension with a number of parameters that is independent of the dimension <cit.>; there are, moreover, results that show that it is possible to express any labelling of N data points in ℝ^d by an ANN with two layers and in total p = 2N + d parameters <cit.>; cf. <cit.>. In general, however, the quite remarkable expressivity of deep neural networks with a relatively small number of parameters and even smaller training sets is still not well understood <cit.>.[Here, `relatively small' must be understood with respect to the dimension of the training data set. An ANN that was successfully trained on MNIST data may still have several hundred millions or even billions of parameters; nevertheless, the number of parameters is small compared to what one would expect from an approximation theory perspective, namely 10^784. However, it is large compared to the minimum number of parameters needed to fit the data, which in our example would be p = 2 · 60 000 + 784 = 120 784; hence an ANN with good generalization capacities is typically severely overfitting, especially if we keep in mind that the effective dimension of the MNIST images, which contain about 90% black pixels, is considerably smaller.]
§.§ Stochastic optimization as implicit regularization
Let us finally discuss an aspect related to the optimization of ANNs (cf. perspective 3 in <Ref>) that interestingly offers a connection to function approximation as well. Here, the typical situation is that no a priori information whatsoever about the function class to which f^B belongs is available. A conventional way then to control the number of parameters and to prevent overfitting is to add a regularization term to the loss function that forces the majority of the parameters to be zero or close to zero and hence effectively reduces the number of parameters <cit.>. Even though regularization can improve the generalization capabilities, it has been found to be neither necessary nor sufficient for controlling the generalization error <cit.>.
Instead, surprisingly, there is (in some situations provable) evidence that SGD introduces an implicit regularization to the empirical risk minimization that is not present in the exact (i.e. deterministic) gradient descent <cit.>. A possible explanation of this effect is that the inexact gradient evaluation of SGD introduces some noise that prevents the minimization algorithm from getting stuck in a bad local minimum. It has been observed that the effect is more pronounced when the variance of the gradient approximation is larger, in other words: when the approximation has a larger sampling error <cit.>. A popular, though controversial explanation is that noisier SGD tends to favor wider or flatter local minima of the loss landscape that are conventionally associated with better generalization capabilities of the trained ANN <cit.>. How to unambiguously characterize the `flatness' of local minima with regard to their generalization capacities, however, is still an open question. Furthermore, it should be noted that too much variance in the gradient estimation is not favorable either, as it might lead to slower training convergence, and it will be interesting to investigate how to account for this tradeoff; cf. <cit.>.
To illustrate the implicit regularization of an overfitted ANN by SGD, we consider the true function f(x) = sin(2 π x) and create N=100 noisy data points according to y_n = f(x_n) + 0.15 η_n, where x_n is uniformly distributed in the interval [0,2] (symbolically: x_n∼𝒰([0, 2])) and η_n ∼𝒩(0, 1). We choose a fully connected NN with three hidden layers (i.e. L=4), each with 10 neurons.
We train the network once with (full-batch) gradient descent and once with stochastic gradient descent, randomly choosing a batch of size N_b = 10 in each gradient step. In <Ref> we can see that running gradient descent on the noisy data leads to overfitting, whereas stochastic gradient descent seems to have some implicit regularizing effect.
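A minimal sketch of this experiment is given below; the data model and the layer sizes follow the text, while the choice of PyTorch, the tanh activation, the learning rate and the number of steps are our own illustrative assumptions.

import torch

torch.manual_seed(0)
N = 100
x = 2 * torch.rand(N, 1)                       # x_n ~ U([0, 2])
y = torch.sin(2 * torch.pi * x) + 0.15 * torch.randn(N, 1)

def make_net():
    # three hidden layers with 10 neurons each, as in the text
    return torch.nn.Sequential(
        torch.nn.Linear(1, 10), torch.nn.Tanh(),
        torch.nn.Linear(10, 10), torch.nn.Tanh(),
        torch.nn.Linear(10, 10), torch.nn.Tanh(),
        torch.nn.Linear(10, 1),
    )

def train(net, batch_size, steps=5000, lr=1e-2):
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        idx = torch.randperm(N)[:batch_size]   # batch_size = N gives exact GD
        loss = ((net(x[idx]) - y[idx]) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return net

gd_net  = train(make_net(), batch_size=N)      # deterministic gradient descent
sgd_net = train(make_net(), batch_size=10)     # stochastic gradient descent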
We have provided a potpourri of aspects related to the three perspectives of generalization, function approximation and optimization, demonstrating subtleties of deep learning that have partly been understood with the help of rigorous mathematical analysis, while still leaving many open questions for future research. In the following, let us move towards perspectives 4 and 5 that we have stated in <Ref>. In particular, the following chapter will argue that relying on classical statistical learning theory might not be sufficient in certain practical applications and additional effort and analysis are needed in order to make deep learning more robust.
§ SENSITIVITY AND (NON-)ROBUSTNESS OF NEURAL NETWORKS
So far we have measured the performance of prediction models in an `average' sense. In particular we have stated the goal of a machine learning algorithm to minimize the expected loss
ℒ(f) = 𝔼[ℓ(f(X), Y)] ,
where the deviations between predictions and ground truth data are averaged over the (unknown) probability distribution of the data. Statements from statistical learning theory therefore usually rest on the implicit assumption that future data comes from the same distribution and is hence similar to that encountered during training (cf. <Ref>). This perspective might often be valid in practice, but it falls short for atypical data in the sense of having a small likelihood, which makes such an occurrence a rare event or a large deviation.
Especially in safety-critical applications one might not be satisfied with average-case guarantees, but rather strives for worst-case analyses or at least for an indication of the certainty of a prediction (which we will come back to in the next section). Moreover, it is known that models like neural networks are particularly sensitive with respect to the input data, implying that very small, barely detectable changes of the data can drastically change the output of a prediction model – a phenomenon that is not respected by an analysis based on expected losses.
§.§ Adversarial attacks
An extreme illustration of the sensitivity of neural networks is given by adversarial attacks, where input data is manipulated in order to mislead the algorithm.[This desire to mislead the algorithm is in accordance with Popper's dictum that we are essentially learning from our mistakes. As <cit.> mentions in the seminal speech Duldsamkeit und intellektuelle Verantwortlichkeit (Tolerance and Intellectual Responsibility) on the occasion of receiving the Dr. Leopold Lucas Prize of the University of Tübingen on the 26th May 1981: “[…] it is the specific task of the scientist to search for such errors. The finding that a well-corroborated theory or a widely used practical procedure is flawed can be an important discovery.”] Here the idea is to add very small and therefore barely noticeable perturbations to the data in such a way that a previously trained prediction model then provides very different outputs. In a classification problem this could for instance result in suggesting different classes for almost identical input data. It has gained particular attention in image classification, where slightly changed images can be misclassified, even though they appear identical to the original image for the human eye, e.g. <cit.>.
Adversarial attacks can be constructed in many different ways, but the general idea is usually the same. We discuss the example of a trained classifier: given a data point x∈ℝ^d and a trained neural network f_θ, we add some minor change δ∈ℝ^d to the input data x, such that f_θ(x + δ) predicts a wrong class. One can distinguish between targeted and untargeted adversarial attacks, where either a specific wrong class is prescribed or misclassification to an arbitrary (wrong) class is aimed at. We focus on the former strategy as it turns out to be more powerful. Since the perturbation is supposed to be small (e.g. for the human eye), it is natural to minimize the perturbation δ in some suitable norm (e.g. the Euclidean norm or the maximum norm) while constraining the classifier to assign a prescribed wrong label ŷ≠ y to the perturbed data and imposing an additional box constraint. In the relevant literature (e.g. <cit.>), an adversarial attack is constructed as the solution to the following optimization problem:
minimize ‖δ‖ subject to f_θ(x + δ) = ŷ and x+δ∈[0,1]^d .
Note that we have the hidden constraint f_θ(x)=y, where ŷ≠ y, and that the input variables have been scaled such that x∈[0,1]^d.
In order to have an implementable version of this procedure, one usually considers a relaxation of (<ref>) that can be solved with (stochastic) gradient descent-like methods in δ; see e.g. <cit.>.
Roughly speaking, generating an adversarial attack amounts to doing a (stochastic) gradient descent in the data rather than in the parameters, with the aim of finding the closest possible input x̃ to x that gets wrongly classified and to analyze what went wrong.[Again, quoting <cit.>: “We must therefore constantly be on the lookout for our errors. When we find them, we must imprint them on our memory; analyze them from all sides, in order to get to the bottom of them.”]
[Adversarial attack to image classification]
Let us provide an example of an adversarial attack in image classification. For this we use the Inception-v3 model from <cit.>, which is pretrained on 1000 fixed classes. For the image in the left panel of <Ref> a class is predicted that seems close to what is in fact displayed. We then compute a small perturbation δ, displayed in the central panel, with the goal of getting a different classification result. The right panel displays the perturbed image x+δ, which notably looks indistinguishable from the original image, yet gets classified wrongly with the same Inception-v3 model. The displayed probabilities are the so-called softmax outputs of the neural network for the predicted classes and they represent some sort of certainty scores.
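To make this concrete, the following hedged sketch computes such a targeted attack via the relaxation discussed above: the hard constraint is traded for a cross-entropy term towards the target class plus a quadratic penalty on δ, minimized with Adam. The model interface, the trade-off constant c and the step counts are illustrative assumptions, not the exact procedure used for the figure.

import torch

def targeted_attack(model, x, target_class, c=1e-2, steps=200, lr=1e-2):
    # x: input batch of shape (1, ...) scaled to [0, 1]; model returns logits
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)       # box constraint x+δ ∈ [0,1]^d
        loss = (torch.nn.functional.cross_entropy(model(x_adv), target)
                + c * delta.pow(2).sum())         # relaxed norm minimization
        opt.zero_grad(); loss.backward(); opt.step()
    return (x + delta).detach().clamp(0.0, 1.0)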
§.§ Including worst-case scenarios and marginal cases
Adversarial attacks demonstrate that neural networks might not be robust with respect to unexpected input data and the next question naturally is how this issue can be addressed. In fact, multiple defense strategies have been developed in recent years in order to counteract attacks, while it is noted that a valid evaluation of defenses against adversarial examples turns out to be difficult, since one can often find additional attack strategies afterwards that have not been considered in the evaluation <cit.>. One obvious idea for making neural networks more robust is to integrate adversarial attacks into the training process, e.g. by considering the minimization
min_θ 𝔼[ max_δ∈Δ ℓ(f_θ(X + δ), Y)] ,
where Δ = {δ : ‖δ‖≤ε} is some specified perturbation range <cit.>. Depending on the application, however, convergence of this min-max problem can be cumbersome. At present, the study of adversarial attacks is a prominent research topic with many questions still open (e.g. the role of regularization <cit.>), and it has already become apparent that principles that hold for the average case scenario might not be valid in worst-case settings anymore; cf. <cit.>. To give an example, there is empirical evidence that overfitting might be more harmful when adversarial attacks are present, in that overparametrized deep NNs that are robust against adversarial attacks may not exhibit the typical double descent phenomenon when the training is continued beyond the interpolation threshold (cf. Figure <ref>); instead they show a slight increase of the generalization risk when validated against test data, i.e. their test performance degrades, which is at odds with the observations made for standard deep learning algorithms based on empirical risk minimization <cit.>.
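A minimal sketch of one step of such adversarial training, approximating the inner maximum by a few signed-gradient ascent steps on an ℓ∞-ball Δ; the value of ε, the step counts and the cross-entropy loss are illustrative assumptions.

import torch

def adversarial_training_step(model, opt, x, y, eps=0.03, inner_steps=5):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(inner_steps):                   # inner max over δ ∈ Δ
        loss = torch.nn.functional.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + (eps / inner_steps) * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x + delta.detach()), y)
    opt.zero_grad(); loss.backward(); opt.step()   # outer min over θ
    return loss.item()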
Another way to address adversarial attacks is to incorporate uncertainty estimates in the models and hope that those then indicate whether perturbed (or out of sample) data occurs. Note that the question as to whether some new data is considered typical or not (i.e. an outlier or a marginal case) depends on the parameters of the trained neural network, which are random, in that they depend on the random training data. As a principled way of uncertainty quantification we will introduce the Bayesian perspective and Bayesian Neural Networks (BNNs) in the next section. We claim that these can be viewed as a more robust deep learning paradigm, which promises fruitful advances, backed up by some already existing theoretical results and empirical evidence. In relation to adversarial attacks, there have been multiple indications of successful attack identification <cit.> and improved defenses <cit.> when relying on BNNs. In fact, there is clear evidence of increasing prediction uncertainty with growing attack strength, indicating the usefulness of the provided uncertainty scores. On the theoretical side, it can be shown that in the (large data and overparametrized) limit BNN posteriors are robust against gradient-based adversarial attacks <cit.>.
§ THE BAYESIAN PERSPECTIVE
In the previous chapter we demonstrated and discussed the potential non-robustness of neural networks related, for example, to small changes of input data by adversarial attacks. A connected inherent problem is that neural networks usually don't know when they don't know, meaning that there is no reliable quantification of prediction uncertainty.[Freely adapted from the infamous 2002 speech of the former U.S. Secretary of Defense, Donald Rumsfeld: “We […] know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know.”] In this chapter we will argue that the Bayesian perspective is well suited as a principled framework for uncertainty quantification, thus holding the promise of making machine learning models more robust; see <cit.> for an overview.
We have argued that classical machine learning algorithms often act as black boxes, i.e. without making predictions interpretable and without indicating any level of confidence. Given that all models are learnt from a finite amount of data, this seems rather naive and it is in fact desirable that algorithms should be able to indicate a degree of uncertainty whenever not `enough' data have been present during training (keeping in mind, however, that this endeavor still leaves certain aspects of interpretability, such as post-hoc explanations <cit.>, open). To this end, the Bayesian credo is the following: we start with some beforehand (a priori) given uncertainty of the prediction model f. In the next step, when the model is trained on data, this uncertainty will get `updated' such that predictions `close' to already seen data points become more certain. In mathematical terms, the idea is to assume a prior probability distribution p(θ) over the parameter vector θ of the prediction model rather than a fixed value as in the classical case. We then condition this distribution on the fact that we have seen a training data set 𝒟 = (x_n, y_n)_n=1^N.
The computation of conditional probabilities is governed by Bayes' theorem, yielding the posterior probability p(θ | 𝒟), namely by
p(θ | 𝒟) = p(𝒟 | θ)p(θ)/p(𝒟),
where p(𝒟 | θ) is the likelihood of seeing data 𝒟 given the parameter vector θ and p(𝒟) = ∫_ℝ^p p(𝒟, θ) dθ is the normalizing constant, sometimes called evidence, assuring that p(θ | 𝒟) is indeed a probability density. The posterior probability can be interpreted as an updated distribution over the parameters given the data 𝒟. Assuming that we can sample from it, we can then make subsequent predictions on unseen data x by
f(x) = ∫_ℝ^p f_θ(x) p(θ | 𝒟) dθ≈1/K∑_k=1^K f_θ^(k)(x)
where θ^(1),…,θ^(K) are i.i.d. samples drawn from the Bayesian posterior p(θ | 𝒟), i.e. we average predictions of multiple neural networks, each of which has parameters drawn from the posterior distribution.
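In code, the Monte Carlo average above is straightforward once posterior samples are available; make_net and posterior_samples below are placeholders for the network constructor and for whatever approximate inference method produced the samples.

import torch

def posterior_predict(make_net, posterior_samples, x):
    preds = []
    for theta in posterior_samples:        # θ^(k) ~ p(θ | D), as state dicts
        net = make_net()
        net.load_state_dict(theta)         # plug the sampled parameters in
        with torch.no_grad():
            preds.append(net(x))
    preds = torch.stack(preds)             # shape (K, ...)
    return preds.mean(dim=0), preds.std(dim=0)   # predictive mean and spread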
[BNN based on different amounts of data]
Let us say we want to learn the function f(x) = sin(2π x) and have a certain amount of training data 𝒟 = (x_n, y_n)_n=1^N available, where the label is given by y_n = f(x_n) + η_n, with noise η_n ∼𝒩(0, 0.01). Intuitively, the fitted neural network should be closer to the true function as well as more certain in its predictions the more data points are available. We consider a BNN trained with a mean field variational inference approach and a Gaussian prior on the parameters (see next section for details). In <Ref> we display the mean prediction function as defined in (<ref>) as well as a confidence set defined by two standard deviations on the sample predictions. In the left panel we display the evaluation before training, i.e. without relying on any available data points, and note that the average prediction function is rather arbitrary and the uncertainty rather high, as expected. The central panel repeats the same evaluation, where now the BNN is trained on N = 5 data points. We can see an improved prediction function and a decreased uncertainty. Finally, the right panel displays a BNN trained on N = 100 data points, where now the mean prediction function is quite close to the true function and the uncertainty almost vanishes close to the data points, yet remains large wherever no training data was available. The BNN is therefore able to output reasonable uncertainty scores, depending on which data was available during training.
§.§ Bayesian neural networks in practice
Even though simple at first glance, the Bayes formula (<ref>) is non-trivial from a computational point of view and cannot be computed analytically in almost all cases. The challenging term is p(𝒟), for which, given the nested structure of neural networks, the integral has to be approximated numerically. Classical numerical integration, however, is infeasible too, due to the high dimension of the parameter vector θ. We therefore have to resort to alternative approaches that aim to approximate the posterior distribution p(θ | 𝒟).
An asymptotically exact method for creating samples from any (suitable) probability distribution is called Hamiltonian Monte Carlo (also: Hybrid Monte Carlo), which is based on ideas from Statistical Physics and the observation that certain dynamical systems admit an equilibrium state that can be identified with the posterior probability that we seek to compute <cit.>. For our purposes this seems to be a method of choice when aiming for high approximation quality; however, it does not scale well to high dimensions and is therefore practically useless for state-of-the-art neural networks. A similar idea is to exploit the so-called Langevin dynamics in combination with subsampling of the data points <cit.>. This method scales much better, but it is biased since the data subsampling perturbs the stationary distribution. A quite different approach is called dropout, which builds the posterior approximation into the neural network architecture and implicitly trains multiple models at the same time <cit.>. Finally, another popular method is based on variational inference, where the true posterior is approximated within a family of simpler probability densities, e.g. multidimensional Gaussians with diagonal covariance matrices <cit.>. Depending on the approximation class, this method scales well, but approximation quality cannot be guaranteed.
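As an illustration of the dropout variant mentioned above, the following hedged sketch keeps dropout active at test time and averages K stochastic forward passes, which yields an approximate predictive mean together with an uncertainty estimate; the example architecture, the dropout rate and K are illustrative assumptions.

import torch

def mc_dropout_predict(net, x, K=100):
    net.train()                            # keep dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([net(x) for _ in range(K)])
    return preds.mean(dim=0), preds.std(dim=0)

# example architecture with dropout between layers (rates are illustrative)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 50), torch.nn.ReLU(), torch.nn.Dropout(p=0.1),
    torch.nn.Linear(50, 1),
)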
Each of the methods mentioned above has advantages and disadvantages and many questions are still open. As a general remark, there is indeed repeated evidence that, ignoring the approximation challenges for the moment, the Bayesian framework works well in principle for quantifying the prediction uncertainties of neural networks. Additionally, there are indications, based on empirical studies <cit.>, that the overall model performance might be improved when relying on predictions from BNNs in contrast to deterministic ANNs. On the other hand, many of the approximation steps that lead to a BNN are not well understood theoretically, and one can demonstrate empirically that they often lead to posterior approximations that are not accurate, e.g. <cit.>. Some of those failures seem to be due to systematic simplifications in the approximating family <cit.>. This phenomenon gets more severe, while at the same time harder to spot, when the neural networks are large, i.e. when the parameter vector is very high-dimensional. An accepted opinion seems to be that whenever BNNs do not work well, it is not the Bayesian paradigm that is to blame, but rather the inability to approximate it well <cit.>. At the same time, however, there are works such as <cit.> that claim that for certain neural network architectures simplified approximation structures get better the bigger (and in particular the deeper) the model is.
§.§ Challenges and prospects for Bayesian Neural Networks
The previous section sought to argue that there is great potential in using BNNs in practice; however, many questions, both from a theoretical and practical point of view, are still open.
A natural formulation of BNNs can be based on free energy as a loss function that has been discussed in connection with a formal account of curiosity and insight in terms of Bayesian inference (see <cit.>): while the expected loss or risk in deep learning can be thought of as an energy that describes the goodness-of-fit of a trained ANN to some given data (where minimum energy amounts to an optimal fit), the free energy contains an additional entropy term that accounts for the inherent parameter uncertainty and has the effect of smoothing the energy landscape. The result is a trade-off between an accurate fit, which bears the risk of overfitting, and reduced model complexity (i.e. Occam's razor). From the perspective of statistical inference, e.g. <cit.>, the free energy has the property that its unique minimizer in the space of probability measures is the sought Bayesian posterior <cit.>. Selecting a BNN by free energy minimization therefore generates a model that, on average, provides the best explanation for the data at hand, and thus it can be thought of as making an inference to the best explanation in the sense of <cit.>; cf. also <cit.>.
Evidently, the biggest challenge seems to be a computational one: how can we approximate posterior distributions of large neural networks both well and efficiently?
But even if the minimizer, i.e. the Bayesian posterior, can be approximated, the evaluation of posterior accuracy (e.g. from the shape of the free energy in the neighborhood of the minimizer) is still difficult and one usually does not have clear guarantees. Furthermore, neural networks keep getting larger, and more efficient methods that can cope with ever higher dimensionality are needed.
Regarding the benefits of BNNs, there is an open debate on how much performance gains they actually bring in practice; cf. <cit.>. Uncertainty quantification, on the other hand, is valuable enough to continue the Bayesian endeavor, eventually allowing for safety-critical applications or potentially improving active and continual learning.
§ CONCLUDING REMARKS
The recent progress in artificial intelligence is undeniable and the related improvements in various applications are impressive. This article, however, provides only a snapshot of the current state of deep learning, and we have demonstrated that many phenomena that are intimately connected are still not well understood from a theoretical point of view. We have further argued that this lack of understanding not only slows down further systematic developments of practical algorithms, but also bears risks that become in particular apparent in safety-critical applications. While inspecting deep learning from the mathematical angle, we have highlighted five perspectives that allow for a more systematic treatment, offering already some novel explanations of striking observations and bringing up valuable questions for future research (cf. <Ref>).
We have in particular emphasized the influence of the numerical methods on the performance of a trained neural network and touched upon the aspect of numerical stability, motivated by the observation that neural networks are often not robust (e.g. with respect to unexpected input data or adversarial attacks) and do not provide any reliable measure for uncertainty quantification. As a principled framework that might tackle those issues, we have presented the Bayesian paradigm and in particular Bayesian neural networks, which provide a natural way of quantifying epistemic uncertainties. In theory, BNNs promise to overcome certain robustness issues and many empirical observations are in line with this hope; however, they also bring additional computational challenges, connected mainly to the sampling of high dimensional probability distributions. The existing methods addressing this issue are neither sufficiently understood theoretically nor do they produce good enough (scalable) results in practice, such that persistent usage in applications is often infeasible. We believe that the theoretical properties of BNNs (or ANNs in general) cannot be fully understood without understanding the numerical algorithms used for training and optimization. Future research should therefore aim at improving these numerical methods in connection with rigorous approximation guarantees.
Moreover, this article argued that many of the engineering-style improvements and anecdotes related to deep learning need systematic mathematical analyses in order to foster a solid basis for artificial intelligence[This view addresses the skeptical challenge of Ali Rahimi, who gave a presentation at the NeurIPS Conference in 2017 with the title “Machine learning has become alchemy”. According to Rahimi, machine learning and alchemy both work to a certain degree, but the lack of theoretical understanding and interpretability of machine learning models is a major cause for concern.]. Rigorous mathematical inspection has already led to notable achievements in recent years, and in addition to an ever enhancing handcrafting of neural network architectures, the continuation of this theoretical research will be the basis for further substantial progress in machine learning. We therefore conclude with a quote from Vladimir <cit.>, one of the founding fathers of modern machine learning: “I heard reiteration of the following claim: Complex theories do not work, simple algorithms do. [...] I would like to demonstrate that in this area of science a good old principle is valid: Nothing is more practical than a good theory.”
Acknowledgements. This research has been partially funded by Deutsche Forschungsgemeinschaft (DFG) through the grant CRC 1114 ‘Scaling Cascades in Complex Systems’ (project A05, project number 235221301) as well as by the Energy Innovation Center (project numbers 85056897 and 03SF0693A) with funds from the Structural Development Act (Strukturstärkungsgesetz) for coal-mining regions.
§ TRAINING OF ARTIFICIAL NEURAL NETWORKS
Let ℱ be the set of neural networks f_θ=Φ_σ of a certain predefined topology (i.e. with a given number of concatenated activation functions, interconnection patterns, etc.) that we want to train. Suppose we have N data points (x_1, y_1),…,(x_N,y_N) where, for simplicity, we assume that y_n=f(x_n) is deterministic. For example, we may think of every x_n having a unique label y_n=± 1.
Training an ANN amounts to solving the regression problem
f_θ(x_n)≈ y_n
for all n=1,…,N. Specifically, we seek θ∈Θ that minimizes the empirical risk (also: loss landscape)
J_N(θ) = 1/N∑_n=1^N ℓ(f_θ(x_n),y_n)
over some potentially high-dimensional parameter set Θ.[Recall that we call the empirical risk J_N when considered as a function of parameters θ and ℒ_N when considered as a function of functions.] There are few cases in which the risk minimization problem has an explicit and unique solution if the number of independent data points is large enough. One such case in which an explicit solution is available is when f_θ(x)=θ^⊤ x is linear and ℓ(z,y)=|z-y|^2 is quadratic. This is the classical linear regression problem.
For ANNs, an explicit solution is neither available nor unique, and an approximation f_N≈ f^* must be computed by a suitable iterative numerical method. One such numerical method is called gradient descent
θ_k+1 = θ_k - η_k∇ J_N(θ_k) , k=0,1,2,3,… ,
where η_0,η_1,η_2,… is a sequence of step sizes, called learning rates, that tends to zero asymptotically. For a typical ANN and a typical loss function, the derivative (i.e. the gradient)
∇ J_N(θ) = 1/N∑_n=1^N ∇_θℓ(f_θ(x_n),y_n)
with respect to the parameter θ can be computed by what is called backpropagation, essentially relying on the chain rule of differential calculus; see, e.g. <cit.>. Since the number of training points, N, is typically very large, evaluating the gradient, which is a sum of N terms, is computationally demanding; therefore the sum over the training data is replaced by a sum over a random, usually small subsample of the training data. This means that, for fixed θ, the derivative ∇ J_N(θ) is replaced by a random approximation ∇Ĵ_N(θ); the approximation has no systematic error, i.e. it equals the true derivative on average, but it deviates from the true derivative by a random amount (that may not even be small, but that is zero on average). As a consequence, we can rewrite our gradient descent as follows:
θ_k+1 = θ_k - η_k∇Ĵ_N(θ_k) + ζ_k , k=0,1,2,3,… ,
where ζ_k is the random error invoked by substituting ∇ J_N(θ) with ∇Ĵ_N(θ). Since ζ_k is unknown as it depends on the true derivative ∇ J_N(θ_k) at stage k that cannot be easily computed, the noise term in (<ref>) is ignored in the training procedure, which leads to what is called stochastic gradient descent (SGD):
θ_k+1 = θ_k - η_k∇Ĵ_N(θ_k) , k=0,1,2,3,… .
Since the right hand side in (<ref>) is random by virtue of the randomly chosen subsample that is used to approximate the true gradient, the outcome of the SGD algorithm after, say, t iterations will always be random.
As a consequence, training an ANN for given data and for a fixed number of training steps, t, multiple times will never produce the same regression function f_θ, but instead a random collection of regression functions. This justifies the idea of a trained neural network as a probability distribution Q(f(t))=Q(f_θ(t)) rather than a unique function f(t)=f_θ(t) that represents its random state after t training steps.
We should stress that typically, SGD does not converge to the optimal solution (if it converges at all), but rather finds a suboptimal local optimum (if any). From the perspective of mathematical optimization, it is one of the big mysteries of deep learning that despite being only a random and suboptimal solution, the predictions made by the resulting trained network are often surprisingly good <cit.>.
In trying to reveal the origin of this phenomenon, SGD has been analyzed using asymptotic arguments, e.g. <cit.>. These methods rely on limit theorems, e.g. <cit.>, to approximate the random noise term in (<ref>), and they are suitable to understand the performance in the large data setting. However, they are unable to address the case of finite, not to mention sparse, training data. Recently, the finite data situation has been analyzed using backward error analysis, and there is empirical evidence that SGD incorporates an implicit regularization which favors shallow minimization paths that lead to broader minima and (hence) to more robust ANNs <cit.>.
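The key property used above, namely that the subsampled gradient ∇Ĵ_N has no systematic error while individual draws may deviate substantially, can be checked numerically; the following minimal numpy sketch does so for linear least squares, with all sizes and the seed being illustrative.

import numpy as np

rng = np.random.default_rng(0)
N, d, Nb = 1000, 5, 10
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)

def grad(theta, idx):                              # gradient of the empirical risk
    r = X[idx] @ theta - y[idx]                    # restricted to the subsample idx
    return 2 * X[idx].T @ r / len(idx)

theta = np.zeros(d)
full = grad(theta, np.arange(N))                   # the true gradient ∇J_N(θ)
estimates = np.stack([grad(theta, rng.choice(N, Nb, replace=False))
                      for _ in range(5000)])       # many draws of ∇Ĵ_N(θ)
print(np.linalg.norm(estimates.mean(axis=0) - full))   # ≈ 0: no systematic error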
§ OPTIMAL PREDICTION AND BAYES CLASSIFIER
For prediction tasks, when the ANN is supposed to predict a quantity y∈ℝ based on an input x∈ℝ^d, the generalization error is typically measured in the sense of the mean square error (MSE), with the quadratic loss
ℓ(f(x),y) = (f(x)-y)^2 .
Let
sgn(z) =
1 , z > 0
0 , z=0
-1 , z<0
be the sign function. Then, for binary classification tasks, with y∈{-1,1} and a classifier f(x)=sgn(h(x)) for some function h: ℝ^d→ℝ, the quadratic loss reduces to what is called the 0-1 loss (up to a multiplicative constant):
1/4ℓ(f(x),y) = 1_(-∞,0](yh(x)) =
0 , f(x)=y
1 , else .
In this case ℒ(f) = ℙ(Y≠ f(X)) is simply the probability of misclassification. We define the regression function
g(x) = 𝔼[Y|X=x]
to be the conditional expectation of Y given the observation X=x. Then, using the properties of the conditional expectation, the MSE can be decomposed in a Pythagorean type fashion as
𝔼[(f(X)-Y)^2] = 𝔼[(f(X)-g(X) + g(X) - Y)^2]
= 𝔼[(f(X)-g(X))^2] + 2𝔼[(f(X)-g(X))(g(X) - Y)] + 𝔼[(g(X) - Y)^2]
= 𝔼[(f(X)-g(X))^2] + 𝔼[(g(X) - Y)^2] .
The cross-term disappears since, by the tower property of the conditional expectation,
𝔼[(f(X)-g(X))(g(X) - Y)] = 𝔼[𝔼[(f(X)-g(X))(g(X) - Y)|X]]
= 𝔼[𝔼[(f(X)-g(X))g(X)|X]] - 𝔼[𝔼[(f(X)-g(X))Y|X]]
= 𝔼[(f(X)-g(X))g(X)] - 𝔼[(f(X)-g(X))𝔼[Y|X]]
= 0 .
As a consequence, we have for all functions f:
ℒ(f) = 𝔼[(f(X)-g(X))^2] + 𝔼[(g(X) - Y)^2] ≥𝔼[(g(X) - Y)^2]
where equality is attained if and only if f=g. The findings can be summarized in the following two statements that hold with probability one:[If a statement is said to hold with probability one or almost surely, this means that it is true upon ignoring events of probability zero.]
(1) The regression function is the minimizer of the MSE, i.e. we have g=f^B, with the unique minimizer
f^B ∈ argmin_f ∈ℳ(𝒳, 𝒴) 𝔼[(f(X)-Y)^2] .
(2) The MSE can be decomposed as
ℒ(f) = 𝔼[(f(X)-𝔼[Y|X])^2] + ℒ^* ,
where the Bayes risk ℒ^*=ℒ(f^B) measures the variance of Y for given X=x around its optimal prediction
f^B(x)=𝔼[Y|X=x] .
The reasoning carries over to the classification task with Y∈{-1,1}, in which case
g(x)=ℙ(Y=1|X=x)-ℙ(Y=-1|X=x)
and the optimal classifier or Bayes classifier can be shown to be
f^B(x) = sgn(g(x)) =
1 , ℙ(Y=1|X=x) > ℙ(Y=-1|X=x)
-1 , ℙ(Y=1|X=x) < ℙ(Y=-1|X=x) .
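These statements can be verified numerically in a toy model where the conditional class probabilities are known exactly; the logistic form of ℙ(Y=1|X=x) below is an arbitrary illustrative choice.

import numpy as np

rng = np.random.default_rng(1)

def p_plus(x):                                     # P(Y = 1 | X = x)
    return 1.0 / (1.0 + np.exp(-3.0 * x))

X = rng.normal(size=200_000)
Y = np.where(rng.random(X.shape) < p_plus(X), 1, -1)

f_bayes = np.sign(2 * p_plus(X) - 1)               # sgn(g(x)), g = P(+1|x) - P(-1|x)
f_naive = np.sign(X - 0.5)                         # an arbitrary competitor
# misclassification rates: the Bayes classifier attains the smaller (Bayes) risk
print((Y != f_bayes).mean(), (Y != f_naive).mean())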
|
http://arxiv.org/abs/2307.00883v1
|
20230703092927
|
Augmenting Deep Learning Adaptation for Wearable Sensor Data through Combined Temporal-Frequency Image Encoding
|
[
"Yidong Zhu",
"Md Mahmudur Rahman",
"Mohammad Arif Ul Alam"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Augmenting Deep Learning Adaptation for Wearable Sensor Data through Combined Temporal-Frequency Image Encoding
1 Yidong Zhu, 1 Md Mahmudur Rahman, 1, 2 Mohammad Arif Ul Alam
1Computer Science, University of Massachusetts Lowell
2Medicine, University of Massachusetts Chan Medical School
August 1, 2023
====================================================================================================================================================================================
Deep learning advancements have revolutionized scalable classification in many domains including computer vision. However, when it comes to wearable-based classification and domain adaptation, existing computer vision-based deep learning architectures and pretrained models, trained on thousands of labeled images for months, fall short. This is primarily because wearable sensor data necessitates sensor-specific preprocessing, architectural modification, and extensive data collection. To overcome these challenges, researchers have proposed encoding wearable temporal sensor data as images using recurrence plots. In this paper, we present a novel modified recurrence plot-based image representation that seamlessly integrates both temporal and frequency domain information. Our approach incorporates an efficient Fourier transform-based frequency domain angular difference estimation scheme in conjunction with the existing temporal recurrence plot image. Furthermore, we employ mixup image augmentation to enhance the representation. We evaluate the proposed method using accelerometer-based activity recognition data and a pretrained ResNet model, and demonstrate its superior performance compared to existing approaches.
recurrence plot, image representation, frequency and temporal domain, image augmentation, activity recognition.
§ INTRODUCTION
The recent advancements in deep learning techniques have revolutionized problem-solving across various domains, encompassing generative, multitask, reinforcement, active, and transfer learning <cit.>. These advancements have significantly improved the efficiency and scalability of classification problems <cit.>. However, these deep learning models heavily rely on extensive amounts of collected data and pretraining, typically conducted on powerful computers for months, predominantly in the fields of computer vision and natural language processing. To extend the applicability of these models to wearable sensor data, several challenges need to be addressed, including preprocessing, artifact removal, noise reduction, and careful modification of advanced deep learning techniques. Additionally, substantial efforts are required for data collection to facilitate the pretraining process. Consequently, existing pretrained computer vision-based models, such as ResNet and AlexNet, are rendered ineffective in the context of wearable sensors.
To bridge this gap, researchers have proposed converting wearable sensor data into image representations, predominantly using recurrence plots in the temporal domain <cit.>. In this paper, we introduce a novel modified recurrence plot-based image representation for wearable sensor data that incorporates both temporal and frequency domain information. Firstly, we design an efficient Fourier transform-based frequency domain angular difference estimation scheme for the recurrence plot of wearable sensor readings. Building upon this, we employ an image augmentation technique called mixup to combine the temporal and frequency domain images, resulting in a comprehensive representation. Our key contributions are as follows:
* In order to construct frequency-domain modified recurrence plots, we initially examine the phasic difference within a single channel of wearable sensor data. Next, we compute the angular difference between two data points, and subsequently represent the tri-channel signals as RGB images.
* Using temporal images derived from Modified Recurrence Plots <cit.>, we apply the MixUp augmentation technique to generate new images that encompass comprehensive information from both the time domain and the frequency domain.
* Lastly, we assess the effectiveness of our approach by conducting evaluations on publicly available wearable 3-axis accelerometer data for activity recognition. By utilizing a pretrained ResNet model, our method showcases superior performance, surpassing the capabilities of existing techniques.
§ RELATED WORKS
Activity recognition is vital in wearable technology, with applications in health monitoring, medicine, psychology, and security. Previous studies used various sensors (e.g., accelerometers, magnetometers, gyroscopes) in cameras, smartphones, and watches. Algorithms like Random Forest <cit.>, Support Vector Machines <cit.>, CNNs <cit.>, RNNs <cit.>, and LSTMs extract patterns from sensor data for accurate recognition. However, existing approaches require extensive preprocessing and hinder scalability. To address this, we propose a novel method: converting wearable sensor data to recurrence plot images and employing a pre-trained ResNet for improved scalability and performance.
Several studies have suggested transforming time series recognition problems into image classification tasks <cit.>. Gramian Angular Field (GAF) and Markov Transition Field (MTF) have been introduced for single-channel time series data <cit.>. GAF represents the time series data in a polar coordinate system, with values fluctuating among different angular points on surrounding circles as time progresses. MTF expands transition probabilities on the magnitude axis into a matrix, considering temporal positions. Activity recognition data from wearable sensors is commonly treated as time series datasets, and classification tasks often rely on time-domain methods. Two primary approaches exist: one involves heuristic handcrafted features, while the other focuses on the time-domain shape of signal instances. The former employs features such as statistics, frequency or wavelet transform, and energy, which are then used as inputs for models like support vector machines (SVM), random forests (RF), and hidden Markov models (HMMs). However, these methods heavily rely on feature extraction quality and domain-specific knowledge <cit.>. The latter approach often employs Dynamic Time Warping (DTW) to measure signal similarity, often combined with a k-nearest neighbor (k-NN) framework to improve performance <cit.>. However, DTW can be slow with large datasets, despite attempts to speed up the process.
All these studies solely consider time domain information and do not incorporate frequency domain information, and their accuracy plateaus around 93%. In contrast to these approaches, we propose the integration of time domain and frequency domain information using MixUp augmentation. This allows both kinds of domain information to be utilized in classification tasks.
§ FREQUENCY DOMAIN INFORMATION ENCODING IN IMAGE
§.§ Recurrence Plot
The Recurrence Plot (RP) is a visualization tool used to study complex dynamic systems <cit.>. It represents nonlinear data points on phase space trajectories, depicting small-scale features like dots and lines, as well as large-scale textures such as homogeneity, periodicity, drift, and disruption. The RP is expressed as a matrix 𝐑, calculated from a trajectory data sample 𝐱, where each element R_i,j represents the L2 norm of the difference between data points 𝐱_i and 𝐱_j. To exploit correlation information, the RP is used to encode 3-axis signals as RGB channels of images. States in phase space can be represented by s_j = (x_j, x_j+1), where s_j ∈ℝ^2. The recurrence plot (RP) can be formulated by a recurrence matrix R ∈ℝ^(N-1) × (N-1), whose elements are the L2 norms of state difference vectors. It has the following formulation:
R_m,n = ||s_m - s_n||
§.§ Encoding Accelerometer Signals as Images Using Modified Recurrence Plot on Frequency Domain
The recurrence matrix is symmetric with respect to the zero main diagonal. However, this symmetry obscures the trend (uphill or downhill) of the signal. To resolve this, researchers proposed a modified recurrence plot method for the temporal domain <cit.>, which calculates the angle between a base vector and the temporal state difference vector s_m - s_n to determine the sign of the recurrence plot entry in Equation <ref> as follows.
R_m,n = sign(m,n) ||s_m - s_n||
However, we further improve this method by incorporating frequency domain information in the recurrence plot. In this regard, we first hypothesize that, for frequency phases whose tendency is uphill, the state difference vector falls in the first quadrant of the Cartesian coordinate system, while for those with a downhill tendency, the state difference vector falls in the third quadrant.
Following this observation, we first calculate the Fourier transform of two temporal phases within their time window, resulting in complex-valued frequency spectra. Then, we compute the phase of each frequency component, denoted p_i and p_j, corresponding to temporal phases s_i and s_j, respectively. Now, we use the angle between a base vector v and the phase difference vector to distinguish different gradient directions. For example, if the angle between the base vector v = [1, 1] and the positive direction of the x-axis is π/4, then all vectors with an angle bigger than 3π/4 to v are in the third quadrant. Mathematically, a sign function is used, whose formulation is given by
sign(m, n) =
-1 , if (p_m - p_n)·v / (||p_m - p_n|| · ||v||) < cos(3π/4)
1 , otherwise
where v = [1, 1]. Thus the modified recurrence plot for the frequency domain is
R_m,n = sign(m,n) ||p_m - p_n||
We use Equation <ref> for each channel of our target sensor (3-axis accelerometer) to obtain three recurrence plot images. We combine these 3 images into a single matrix M (M ∈ℝ^(N-1) × (N-1) × 3). Then we normalize this matrix and encode it as an RGB image.
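The following numpy sketch shows one possible reading of the construction above for a single channel: phase states p_j = (φ_j, φ_{j+1}) are built from the FFT phases, and the sign is determined by the angle of the state difference with v = [1, 1]; the windowing and the normalization to RGB are our assumptions rather than the authors' exact implementation.

import numpy as np

def modified_rp_frequency(x):
    phi = np.angle(np.fft.fft(x))                 # phase of each frequency component
    p = np.stack([phi[:-1], phi[1:]], axis=1)     # states p_j = (φ_j, φ_{j+1})
    diff = p[:, None, :] - p[None, :, :]          # p_m - p_n for all pairs
    dist = np.linalg.norm(diff, axis=-1)
    v = np.array([1.0, 1.0])
    cos = diff @ v / (np.maximum(dist, 1e-12) * np.linalg.norm(v))
    sign = np.where(cos < np.cos(3 * np.pi / 4), -1.0, 1.0)   # sign rule above
    return sign * dist                            # R_{m,n} = sign(m,n) ||p_m - p_n||

def to_rgb(channels):                             # stack the 3-axis plots into RGB
    m = np.stack([modified_rp_frequency(c) for c in channels], axis=-1)
    m = (m - m.min()) / (m.max() - m.min() + 1e-12)
    return (255 * m).astype(np.uint8)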
§.§ Mixup Augmentation of Temporal and Frequency Domain RPs of Wearables
Mixup image augmentation is a technique that combines pairs of images to create new augmented images <cit.>. Mathematically, given two input images x_1 and x_2, mixup generates a new augmented image x_a as follows:
x_a = λ· x_1 + (1 - λ) · x_2
Note that λ lies in the [0, 1] range and is sampled from a Beta distribution. We utilize Equation <ref> and Equation <ref> to combine the temporal and frequency domain recurrence plot images generated from multi-channel wearable sensor data into a single image.
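A short sketch of this combination step; the Beta parameter α is an illustrative assumption.

import numpy as np

def mixup(img_temporal, img_frequency, alpha=0.5, rng=np.random.default_rng()):
    lam = rng.beta(alpha, alpha)                  # λ ∈ [0, 1]
    return lam * img_temporal + (1.0 - lam) * img_frequency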
§ EXPERIMENTAL EVALUATION
§.§ Dataset
Two distinct datasets were employed for conducting the experiments. The first dataset, known as Activities of Daily Living (ADL), was obtained from the UCI Machine Learning Repository <cit.> and is widely accessible. The second dataset, named ASTRI, was originally provided by the Hong Kong Applied Science and Technology Research Institute (ASTRI).
* ADL Dataset: This dataset comprises tagged wrist-worn accelerometer data collected from 16 volunteers. The data was recorded using a tri-axial accelerometer with a sampling rate of 32 Hz. It encompasses 14 different daily activities; however, for this experiment, only 7 activities were utilized. These 7 activities consist of a total of 689 samples, including climbing (102 samples), drinking water (96 samples), getting up from bed (101 samples), pouring water (100 samples), sitting down (96 samples), standing up (95 samples), and walking (99 samples).
* ASTRI Motion Dataset: This dataset involves activities such as walking, sitting, standing, squatting, and lying down performed by 11 participants, representing a diverse range of ages and genders. The data in this collection was captured using a single accelerometer integrated into a smart wristband, which could be worn on either the left or right hand. The accelerometer had a sampling rate of 52 Hz. The dataset consists of a total of 1080 samples, including walking (321 samples), standing (191 samples), squatting (189 samples), sitting (193 samples), and lying (187 samples).
§.§ Baselines Algorithms
We implemented various benchmarking activity recognition algorithms using wearable accelerometer sensor signals. These algorithms include Random Forest (RF) <cit.>, Support Vector Machines (SVM) <cit.>, Convolutional Neural Network (CNN) <cit.>, Dynamic Time Warping (DTW) + 1 Dimensional CNN <cit.>, DTW + Clustering <cit.>, Long Short Term Memory (LSTM) + Fully Connected Neural (FCN) network <cit.>, Temporal RP + ResNet (TRP+ResNet) <cit.>, and modified Temporal RP + ResNet (MTRP+ResNet) <cit.> algorithms.
To assess the individual contributions of our proposed method, we also implemented the frequency domain RP with ResNet architecture (FRP+ResNet), as well as the mixup augmentation of temporal and frequency RPs (our method). By including these additional variations, we aim to analyze the specific impact of different components in our approach and compare their performance against the baseline algorithms.
§.§ Results Analysis
We implemented both baseline algorithms and our proposed methods using various tools, including scikit-learn, libsvm, and TensorFlow in Python. To evaluate the performance, we used accuracy as the evaluation metric, calculated as the ratio of true positive and true negative predictions to the total number of predictions (accuracy = (TP + TN)/(TP + TN + FP + FN)). Additionally, we utilized the standard error, denoted by the plus-minus sign (±), as a measure of the distribution of errors. The standard error is calculated by dividing the standard deviation by the square root of the sample size. To calculate the accuracy, we followed a user-mixed approach that involved combining all episodes (data and labels), splitting the data into a 70:30 ratio for training and testing datasets, and performing training on the training data with 20% random data selected as validation data during each iteration of neural network training. Finally, we evaluated the performance on the test data.
Table <ref> and Table <ref> provide detailed accuracy comparisons between our proposed method and different baseline algorithms. These tables highlight the comparisons made among three significant versions of our proposed frameworks: modified temporal RP (MTRP), Frequency domain RP (FRP), and mixup augmentation of MTRP and FRP with ResNet.
When examining the ADL dataset (Table <ref>), it becomes evident that our proposed method achieves superior performance compared to all baseline algorithms overall. However, it is worth noting that the activity 'Get up bed' achieves the highest accuracy with the FRP method, while the 'Walk' activity achieves the highest accuracy with the MTRP algorithm.
On the other hand, for the ASTRI Motion Dataset (Table <ref>), our proposed mixup augmentation of temporal and frequency domain RP images surpasses all baseline algorithms in terms of overall accuracy. Furthermore, the MTRP method outperforms the others specifically for walking and sitting detection tasks.
§.§ Conclusion and Limitations
This paper presents a pioneering attempt to convert time-series data into recurrent plot images in the frequency domain. It demonstrates that while the frequency domain image representation alone may not always provide the most informative results, combining it with the temporal domain recurrent plot image representation surpasses existing methods with the help of advanced pre-trained image recognition models like ResNet. These novel findings open up new possibilities for time-series signal processing and its adaptation with scalable deep neural network models based on image processing. However, it is important to note that our proposed model was validated only on wearable accelerometer sensor signals. To establish the effectiveness of the temporal and frequency domain image representation technique, further validation is required on various time-series data such as Electroencephalogram (EEG), Electrodermal Activity (EDA), Photoplethysmograph (PPG), Gyroscope, Magnetometer, and others. Additionally, while our proposed method was evaluated solely on classification problems, it is necessary to validate it with other scalable machine learning techniques such as active learning, opportunistic learning, transfer learning, and reinforcement learning. Our long-term goal with this paper is to develop Automatic Scalable Machine Learning (AutoScaleML), which can represent any time-series signal using appropriate image representations that combine both temporal and frequency domain information, enabling the utilization of scalable computer vision models.
[reference1] Lu et al., "Robust Single Accelerometer-Based Activity Recognition Using Modified Recurrence Plot," IEEE Sensors Journal, vol. 19, no. 15, pp. 6317-6324, Aug. 2019, doi: 10.1109/JSEN.2019.2911204.
[reference2] Bruno et al., "A public domain dataset for ADL recognition using wrist-placed accelerometers," in Proc. IEEE International Symposium on Robot and Human Interactive Communication, pp. 738-743, 2014.
[reference3] Wang and Oates, "Encoding time series as images for visual inspection and classification using tiled convolutional neural networks," in Proc. AAAI Conference on Artificial Intelligence, pp. 40-46, 2015.
[reference4] Della Mea et al., "A feasibility study on smartphone accelerometer-based recognition of household activities and influence of smartphone position," Informatics for Health and Social Care, vol. 42, no. 4, pp. 321-334, 2017.
[reference5] Wannenburg and Malekian, "Physical activity recognition from smartphone accelerometer data for user context awareness sensing," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 12, pp. 3142-3149, 2017.
[reference6] Khalifa et al., "HARKE: Human activity recognition from kinetic energy harvesting data in wearable devices," IEEE Transactions on Mobile Computing, vol. 17, no. 6, pp. 1353-1368, 2018.
[reference7] Mueen and Keogh, "Extracting optimal performance from dynamic time warping," in Proc. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2129-2130, 2016.
[reference8] Bagnall et al., "The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances," Data Mining and Knowledge Discovery, vol. 31, no. 3, pp. 606-660, 2017.
[i1] Zhuang et al., "A comprehensive survey on transfer learning," Proceedings of the IEEE, vol. 109, no. 1, pp. 43-76, 2021.
[reference25] Eckmann et al., "Recurrence plots of dynamical systems," Europhysics Letters, vol. 4, no. 9, pp. 973-977, 1987.
[mixup] Zhang et al., "mixup: Beyond empirical risk minimization," in ICLR, 2018.
[rf] Mehrang et al., "An activity recognition framework deploying the random forest classifier and a single optical heart rate monitoring and triaxial accelerometer wrist-band," Sensors, vol. 18, no. 2, pp. 613-626, Feb. 2018.
[svm] Moschetti et al., "Toward an unsupervised approach for daily gesture recognition in assisted living applications," IEEE Sensors Journal, vol. 17, no. 24, pp. 8395-8403, Dec. 2017.
[cnn] Wang et al., "Time series classification from scratch with deep neural networks: A strong baseline," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), pp. 1578-1585, May 2017.
[dtw1cnn] Rakthanmanon et al., "Searching and mining trillions of time series subsequences under dynamic time warping," in Proc. ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, pp. 262-270, Aug. 2012.
[dtwcluster] Giannoula et al., "Identifying temporal patterns in patient disease trajectories using dynamic time warping: A population-based study," Scientific Reports, vol. 8, no. 1, pp. 4216-4230, 2018.
[lstmfcn] Karim et al., "LSTM fully convolutional networks for time series classification," IEEE Access, vol. 6, pp. 1662-1669, 2018.
|
http://arxiv.org/abs/2307.02747v1
|
20230706031558
|
Computing Offloading and Semantic Compression for Intelligent Computing Tasks in MEC Systems
|
[
"Yuanpeng Zheng",
"Tiankui Zhang",
"Rong Huang",
"Yapeng Wang"
] |
cs.NI
|
[
"cs.NI",
"eess.SP"
] |
Computing Offloading and Semantic Compression for Intelligent Computing Tasks in MEC Systems
Yuanpeng Zheng1, Tiankui Zhang1, Rong Huang2 and Yapeng Wang3
1School of Information and Communication Engineering,
Beijing University of Posts and Telecommunications, Beijing 100876, China
2China Unicom Research Institute, Beijing, China
3Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR, China
{zhengyuanpeng, zhangtiankui}@bupt.edu.cn, huangr27@chinaunicom.cn, yapengwang@mpu.edu.mo.
This work is supported by Beijing Natural Science Foundation (No.4222010).
August 1, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
This paper investigates the intelligent computing task-oriented computing offloading and semantic compression in mobile edge computing (MEC) systems.
With the popularity of intelligent applications in various industries, terminals increasingly need to offload intelligent computing tasks with complex demands to MEC servers for computing, which poses a great challenge for bandwidth and computing capacity allocation in MEC systems.
Considering the accuracy requirement of intelligent computing tasks, we formulate an optimization problem of computing offloading and semantic compression.
We jointly optimize computing accuracy and task delay, which together determine the system utility, in order to maximize that utility.
To solve the proposed optimization problem, we decompose it into a computing capacity allocation subproblem and a compression offloading subproblem, and obtain solutions through convex optimization and successive convex approximation.
After that, the offloading decisions, computing capacity and compression ratio are obtained in closed form.
We then design the computing offloading and semantic compression algorithm for intelligent computing tasks in MEC systems.
Simulation results show that our algorithm converges quickly and achieves better performance and resource utilization efficiency than the benchmarks as the total number of users and the computing capacity vary.
Intelligent computing task, semantic compression, MEC, successive convex approximation.
§ INTRODUCTION
With the rapid development of mobile edge computing (MEC), more and more new applications such as computer vision, natural language processing, semantic communication, etc., are emerging constantly in MEC systems<cit.>.
As the number of intelligent computing tasks increases, MEC needs to tackle many problems with specific characteristics <cit.>, which differ from traditional resource allocation problems.
However, few existing works consider the various requirements of those characteristics such as compression and computing accuracy <cit.>.
Therefore, how to efficiently offload to support the specific demands of intelligent computing tasks is still an unaddressed problem.
Hence, in the context of massive Internet of Things (IoT) device deployment and limited terminal computing capacity, computing tasks are increasingly complex and highly coupled with communication, and existing work on computing offloading and semantic compression in MEC systems has become specific and multidimensional.
C. Wang et al. <cit.> considered computation offloading and content caching strategies in wireless cellular networks with MEC and formulated the total revenue of the network.
Considering edge users and large data volumes, G. Faraci et al. <cit.> formulated a power consumption and delay optimization problem in unmanned aerial vehicle (UAV) assisted MEC systems.
Nevertheless, in actual application scenarios, intelligent computing tasks have complicated characteristics and demands which need to be considered in the computing offloading and semantic compression in MEC systems.
With the rise of artificial intelligence, intelligent computing tasks have brought more requirements to wireless networks, especially in the MEC field, and research on intelligent computing tasks is receiving growing attention.
B. Gu et al. <cit.> investigated models for fitting classification accuracy, verified on large data sets of intelligent computing tasks, and found that the power law fits best among all models.
Considering the scenario of intelligent computing tasks, H. Xie et al.<cit.> proposed a brand new framework of semantic communication where a deep learning based system for text transmission combined with natural language processing and semantic layer communication was constructed.
Apparently, the conditions for research on computing offloading and semantic compression for intelligent computing tasks are gradually maturing, both in application scenarios and in task modelling.
There are still some studies considering the demands and characteristics of intelligent computing tasks with semantic compression in MEC systems.
H. Xie et al. <cit.> investigated the deployment of a semantic communication system based on edge and IoT devices, where MEC servers compute the semantic model and IoT devices collect and transmit data based on the semantic task model, with semantic compression on the transmission side.
Y. Wang et al.<cit.> proposed a semantic communication framework for textual data transmission and formulated an optimization problem whose goal is to maximize the total semantic similarity by jointly optimizing the resource allocation policy and determining the partial semantic information to be transmitted.
Obviously, the various demands and characteristics of intelligent computing tasks have brought many changes on computing offloading and semantic compression in MEC systems but researches on modelling intelligent computing tasks and applying it to offloading have not been considered yet according to above works.
Obviously, the key challenges of existing works mainly focus on intelligent computing, i.e., intelligent task processing in MEC systems.
Particularly, the combination of computing offloading and semantic compression in MEC systems considering the demands of intelligent computing tasks is still an unaddressed research area.
Based on above works, we focus on resource allocation when task offloading and semantic compression coexist in MEC system.
The main contributions of this paper are as follows.
* We formulate an optimization problem for computing offloading and semantic compression that accounts for the accuracy requirements of intelligent computing tasks in MEC systems. We define a system utility consisting of the system revenue, which depends on computing accuracy, and the system cost, which depends on task delay.
* The highly coupled computing offloading and semantic compression problem is decoupled into two subproblems, a computing capacity allocation subproblem and a compression offloading subproblem, which are solved by convex optimization and successive convex approximation.
* Simulation results verify that our algorithm converges quickly and achieves better performance and resource utilization efficiency than benchmark schemes as the total number of users and the computing capacity vary.
§ SYSTEM MODEL
In order to solve the resource allocation problem of computing offloading and semantic compression in MEC systems, we consider a fog radio access network (F-RAN) scenario and equip small base stations (SBSs) with MEC servers to form the MEC systems. The total number of users is U. The set of MEC systems is denoted by K^S = {1,...,k,...,K}, and SBS k is assumed to be associated with U_k mobile users. We let U^S_k = {1,...,u_k,...,U_k} denote the set of users associated with SBS k, where u_k refers to the u-th user associated with the k-th SBS. The set of computing tasks is denoted by M^S = {1,...,m,...,M}, and we consider two types of computing, local computing and offloading to the MEC server, as shown in Fig. 1.
In our model, step 1 in Fig. 1 corresponds to feature extraction in semantic tasks, i.e., semantic compression. Let the total bandwidth be B, the local computing capacity of user u_k be F^L_u_k, the computing capacity of MEC server k be F_k, and the delay limit of computing task m be t̃_m.
§.§ Communication Model
In our system, every SBS in the network is equipped with an MEC server, so each user can offload its computing task to the MEC server through the SBS to which it is connected. We denote x_u_k∈{0,1}, ∀ u,k as the computing offloading indicator variable of user u_k. Specifically, x_u_k = 1 if user u_k offloads its computing task to the MEC server via the wireless network, and x_u_k = 0 if user u_k computes its task locally on the mobile device. We denote x = {x_u_k}_u_k ∈ U^S_k, k ∈ K^S as the offloading indicator vector.
In this paper, we consider that the spectrum used by the SBSs is overlaid and that the spectrum within one SBS is orthogonally assigned to its users. We only analyze uplink transmission and divide the total spectrum into N subcarriers, denoted as N^S = {1,...,n,...,N}. We denote ρ_u_kn∈{0,1},∀ u,k,n as subcarrier variables, where ρ_u_kn = 1 means subcarrier n is allocated to user u_k associated with SBS k, and ρ_u_kn = 0 otherwise. One subcarrier of an SBS can only be allocated to one user at a time. The uplink transmission rate of user u_k on subcarrier n is then given as
r_u_kn = B/Nlog_2( 1 + P_u_kng_u_kn/I_u_kn + σ^2), ∀ u,k,n,
where P_u_kn represents the transmit power from user u_k to SBS k, g_u_kn represents the wireless channel gain between user u_k and SBS k on subcarrier n, and I_u_kn = ∑_c∈ K^S,c≠ k∑^U_c_u'_c=1ρ_u'_cng_u'_cnP_u'_cn,∀ u,k,n represents the co-channel interference from users associated with other SBSs on the same frequency as user u_k. σ^2 denotes the power of the additive white Gaussian noise. The total uplink transmission rate of user u_k is r_u_k = ∑^N_n=1ρ_u_knr_u_kn,∀ u,k.
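As a quick numerical sanity check, the following Python sketch evaluates the per-subcarrier rate above and sums it over the subcarriers assigned to one user; all numerical values are hypothetical placeholders, not parameters from our simulation.

import math

def uplink_rate(B, N, P, g, I, sigma2, assigned):
    """Total uplink rate: sum over assigned subcarriers of (B/N) log2(1 + P g / (I + sigma^2))."""
    return sum((B / N) * math.log2(1 + P[n] * g[n] / (I[n] + sigma2))
               for n in assigned)

# Illustrative values only: 10 MHz bandwidth, 64 subcarriers, 8 of them assigned.
r = uplink_rate(B=10e6, N=64, P=[0.1] * 64, g=[1e-7] * 64,
                I=[1e-10] * 64, sigma2=1e-13, assigned=range(8))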
§.§ Computing Model
For the computing model, we consider that each user u_k has a computing task m, and denote z_u_km∈{0,1}, ∀ u,k,m as the indicator variable of computing task m of user u_k. Specifically, z_u_km = 1 if the computing task of user u_k is m, and z_u_km = 0 otherwise.
In our model, we assume that z_u_km is given by the user request and that ∑^M_m=1 z_u_km = 1,∀ u,k.
We consider two types of computing approaches, i.e., local computing and task offloading.
1) Local Computing: For the local computing approach, the raw data volume of user u_k is a_u_k, and the local computing delay follows directly from a_u_k as
T^L_u_k = ∑ ^M_m=1z_u_kmF_u_km(a_u_k)/F^L_u_k, ∀ u,k,
where F_u_km(· ) represents the computing resource overhead as a function of the data volume; we assume a linear relationship, i.e., F_u_km(a_u_k ) = β a_u_k + γ, where β and γ are linear parameters.
2) Task Offloading: For the task offloading approach, user u_k will compress the raw data a_u_k to
b_u_k = a_u_k/ε _u_k,∀ u,k,
where ε _u_k denotes the compression ratio of user u_k with ε _u_k≥ 1, ∀ u,k. We denote ε = {ε_u_k}_u_k∈ U^S_k, k∈ K^S as the compression ratio vector. Accordingly, the computed data volume is α_u_k = (1-x_u_k)a_u_k + x_u_kb_u_k. The compressed data b_u_k is then transmitted to SBS k for processing, and the transmission delay of the compressed data from user u_k over the wireless link is given as
t^comm_u_k = b_u_k/r_u_k,∀ u,k.
Let f^O_u_k be computing capacity allocated to user u_k from SBS k and f^O = {f^O_u_k}_u_k∈ U^S_k, k∈ K^S be the computing capacity allocation vector, so that the computing delay of user u_k processing its computing task on SBS k is denoted as
T_u_km = z_u_kmF_u_km(b_u_k)/f^O_u_k,∀ u,k,m.
In this paper, we adopt the same computing resource overhead formula for raw and compressed data, since the same task is processed either on the local device or on the MEC server. Therefore, the computing delay of user u_k on SBS k is
t^comp_u_k = ∑^M_m=1 z_u_kmT_u_km, ∀ u,k.
Since the downlink data volume of the computing outcome is much smaller than the uplink data volume, we neglect downlink transmission in this work.
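A minimal sketch of the computing model follows, assuming the linear overhead F(a) = βa + γ from above; the helper names and numbers are illustrative, not from the paper.

def overhead(a, beta, gamma):
    """Linear computing resource overhead F(a) = beta*a + gamma."""
    return beta * a + gamma

def local_delay(a, F_local, beta, gamma):
    """Local computing delay T^L = F(a) / F^L."""
    return overhead(a, beta, gamma) / F_local

def offload_delay(a, eps, rate, f_mec, beta, gamma):
    """Offloading delay: compress to b = a/eps, transmit at the uplink rate, compute at the MEC server."""
    b = a / eps                               # compressed data volume
    t_comm = b / rate                         # transmission delay
    t_comp = overhead(b, beta, gamma) / f_mec # MEC computing delay
    return t_comm + t_comp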
§.§ Utility Function
To capture the characteristics of intelligent computing tasks in our model, we adopt the three-parameter power-law formula from <cit.> relating data volume to computing accuracy, which is currently the most widely used accuracy fitting formula for intelligent classification tasks, including semantic compression <cit.>. For convenience of subsequent modelling, we adopt a simplified form, and the computing accuracy of user u_k is
y(α_u_k) = p - q α_u_k^-r,∀ u,k,
where α_u_k = (1-x_u_k)a_u_k + x_u_kb_u_k represents the data volume to be computed for user u_k, and p, q, r are fitting parameters. The minimum required computing accuracy of task m is denoted ỹ_m. The total task delay of user u_k is
t_u_k = (1-x_u_k)T^L_u_k + x_u_k(t^comm_u_k+t^comp_u_k), ∀ u,k.
In this paper, we focus on maximizing the system utility under computing accuracy and task delay constraints. For each user u_k, we consider the marginal utility of the ratio of system revenue, i.e., computing accuracy, to system cost, i.e., task delay.
We model the system utility as a logarithmic function, capturing diminishing marginal utility in the trade-off between system revenue and cost; the system utility is thus given as
R = ∑_k∈ K^S∑_u_k∈ U^S_k ln( L y(α_u_k)/t_u_k), ∀ u,k,
where L is a weight parameter balancing system revenue and cost.
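Putting the pieces together, a small sketch (all parameter values hypothetical) evaluates the power-law accuracy and the log-utility:

import math

def accuracy(alpha, p, q, r):
    """Power-law computing accuracy y(alpha) = p - q * alpha^(-r)."""
    return p - q * alpha ** (-r)

def system_utility(users, L, p, q, r):
    """R = sum over users of ln(L * y(alpha) / t), where each user is a pair (alpha, t)."""
    return sum(math.log(L * accuracy(alpha, p, q, r) / t) for alpha, t in users)

# Two users with (computed data volume, total task delay); numbers illustrative.
R = system_utility([(1e6, 0.05), (5e5, 0.08)], L=1.0, p=0.95, q=2.0, r=0.5)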
§ PROBLEM FORMULATION AND ALGORITHM DESIGN
In order to maximize the system utility, we formulate it as an optimization problem and decompose it into several convex optimization subproblems via successive convex approximation (SCA).
Then we design the corresponding iterative algorithm to solve the optimization problem.
§.§ Problem Formulation and Decomposition Solution
We adopt the system utility proposed in (<ref>) as the objective function of our optimization problem, and we formulate it as
max_x,f^O,ε R
s.t. (C1): x_u_k∈{0,1}, ∀ u,k,
(C2): ∑^K_k=1 x_u_k≤ 1, ∀ u,
(C3): ε_u_k≥ 1, ∀ u,k,
(C4):t_u_k≤∑^M_m=1 z_u_kmt̃_m, ∀ u,k,
(C5):y(α_u_k)≥∑^M_m=1 z_u_kmỹ_m, ∀ u,k,
(C6):∑^U_k_u_k =1f^O_u_k≤ F_k, ∀ u,k.
In (<ref>), constraint (C1) guarantees that the computing offloading indicator variables are restricted to 0 and 1,
constraints (C2) and (C3) mean that each user can choose only one computing approach and that the compressed data volume does not exceed the raw data volume,
constraints (C4) and (C5) ensure that the task delay and computing accuracy limits hold,
and constraint (C6) guarantees that the total allocated computing capacity does not exceed the computing capacity of the MEC server.
Problem (<ref>) is a non-linear mixed-integer, non-convex optimization problem; such problems are usually NP-hard. Therefore, we decompose it into several subproblems and apply transformations and simplifications to solve it iteratively.
For convenience, we decompose (<ref>) into two subproblems by fixing subsets of the variables.
1) Computing Capacity Allocation Subproblem: With all variables other than f^O fixed, (<ref>) simplifies to
max_f^O∑_k∈ K^S∑_u_k∈ U^S_kln(LA^δ _u_k) - ln (A^β _u_k+t^comp_u_k)
s.t. (C4'): A^β _u_k+t^comp_u_k≤∑^M_m=1 z_u_kmt̃_m,∀ u,k,
(C6),
where the constant terms are A^δ_u_k = p - q ((1-x_u_k)a_u_k + x_u_kb_u_k)^-r and A^β _u_k = (1-x_u_k)T^L_u_k + x_u_kt^comm_u_k, with t^comp_u_k = ∑^M_m=1z_u_kmF_u_km(b_u_k)/f^O_u_k; in this way, (C4) in (<ref>) is converted to (C4'). Therefore, (<ref>) is a convex optimization problem and can be solved directly by convex optimization methods.
2) Compression Offloading Subproblem: We solve for the computing offloading indicator variable x and the compression ratio variable ε under given f^O. For tractability, we relax the binary variable x into a real variable x_u_k∈ [0,1].
The original problem (<ref>) is simplified to
max_x,ε∑_k∈ K^S∑_u_k∈ U^S_k ln( L y(α_u_k)/(1-x_u_k)B^δ_u_k+x_u_k/ε_u_kB^β_u_k)
s.t. (C1), (C2),(C3),
(C4”): (1-x_u_k)B^δ_u_k+x_u_k/ε_u_kB^β_u_k≤∑^M_m=1 z_u_kmt̃_m, ∀ u,k,
(C5),
where the constant terms B^δ_u_k = T^L_u_k and B^β_u_k = a_u_k/r_u_k + ∑^M_m=1z_u_kmF_u_km(a_u_k)/f^O_u_k, and y(α_u_k) = p - q ((1-x_u_k)a_u_k + x_u_k/ε_u_ka_u_k)^-r and in this way, (C4) in (<ref>) is converted to (C4”) here.
Normally, p, q and r satisfy p>0, q>0 and 0≤ r ≤ 1. We adopt the variable substitution η_u_k = 1-x_u_k+x_u_k/ε_u_k. Clearly, η_u_k satisfies 1-x_u_k≤η_u_k≤1, which becomes constraint (C3') below, and we can transform problem (<ref>) into
max_x,η∑_k∈ K^S∑_u_k∈ U^S_k ln( L (p-q (a_u_kη_u_k)^-r)/(1-x_u_k)(B^δ_u_k-B^β_u_k)+B^β_u_kη_u_k)
s.t. (C1),(C2),
(C3'): 1-x_u_k≤η_u_k≤ 1, ∀ u,k,
(C4”): (1-x_u_k)(B^δ_u_k-B^β_u_k)+B^β_u_kη_u_k≤
∑^M_m=1 z_u_kmt̃_m, ∀ u,k,
(C5): p-q (a_u_kη_u_k)^-r≥∑^M_m=1 z_u_kmỹ_m,∀ u,k.
Due to the non-convexity of (<ref>), we adopt the SCA method and let
v_u_k≥ ln( (1-x_u_k)(B^δ_u_k-B^β_u_k) + B^β_u_kη_u_k).
We perform a first-order Taylor expansion of the right-hand side at the point (x^j_u_k,η^j_u_k), which yields
v_u_k≥ ln( (1-x^j_u_k)(B^δ_u_k-B^β_u_k) + B^β_u_kη^j_u_k)+
(B^β_u_k-B^δ_u_k)(x_u_k-x^j_u_k) + B^β_u_k(η_u_k - η^j_u_k)/(1-x^j_u_k)(B^δ_u_k-B^β_u_k) + B^β_u_kη^j_u_k,
which becomes constraint (C7) of the problem. Therefore, (<ref>) is converted to
max_x,η∑_k∈ K^S∑_u_k∈ U^S_k ln( L(p-q (a_u_kη_u_k)^-r)/v_u_k)
s.t. (C1),(C2), (C3'),(C4”),(C5),
(C7): (<ref>).
We can then apply convex optimization within each SCA iteration to solve (<ref>) using standard CVX tools <cit.>.
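The exact log-utility objective above requires the SCA transformation; as a minimal, hedged illustration of the capacity allocation step alone, the sketch below allocates the MEC capacity budget (C6) by minimizing the total MEC computing delay ∑_u c_u/f_u, a convex surrogate whose optimum has the closed form f_u ∝ √(c_u). It uses the Python package CVXPY rather than MATLAB CVX, and all numbers are hypothetical.

import cvxpy as cp
import numpy as np

c = np.array([2.0, 5.0, 1.0, 8.0])   # workloads F(b_u) of offloading users [Gigacycles]
F_k = 20.0                            # MEC server capacity (C6) [Gigacycle/s]

f = cp.Variable(len(c), pos=True)     # allocated capacities f^O_u
# Convex surrogate objective: total MEC computing delay sum_u c_u / f_u.
problem = cp.Problem(cp.Minimize(cp.sum(cp.multiply(c, cp.inv_pos(f)))),
                     [cp.sum(f) <= F_k])
problem.solve()

# The optimum matches the closed form f_u* = F_k * sqrt(c_u) / sum_v sqrt(c_v).
print(f.value, F_k * np.sqrt(c) / np.sqrt(c).sum())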
§.§ Algorithm Design and Analysis
As mentioned above, we decompose the original NP-hard problem (<ref>) into two subproblems. We then use a greedy strategy to iterate between the solutions of the two subproblems and arrive at a suboptimal solution of (<ref>), as summarized in Algorithm 1.
In Algorithm 1, we alternately iterate between the two subproblems and obtain the solutions in closed form by convex optimization. According to greedy algorithm and convex optimization theory, the alternating iteration of the two subproblems ensures | N^q-N^q-1|≤θ, i.e., fast convergence, although only sub-optimality can be guaranteed <cit.>.
As shown above, the complexity of Algorithm 1 depends on the two subproblems. In subproblem 1, since (<ref>) is a convex optimization problem, the complexity is O(U). In subproblem 2, (<ref>) needs to be converted to (<ref>) through SCA and solved iteratively; assuming the number of SCA iterations is L_sub2, the complexity is O(UL_sub2). Assuming the total number of outer iterations is L_it, the overall complexity of Algorithm 1 is O((U+UL_sub2)L_it).
In this way, the NP-hard optimization problem (<ref>) is decomposed into low-complexity subproblems and solved iteratively.
§ SIMULATION RESULT
In this section, we first set the simulation parameters and then present simulation results to evaluate the performance of our proposed algorithm.
We consider system level simulation of uplink transmission in a small cell F-RAN according to the 3GPP normative document of small cell network, i.e., urban micro (UMi) model <cit.>.
In our model, four SBSs are deployed in a small cell area with a total coverage of 200m× 200m. The SBSs provide offloading association and resource allocation for users; note that the path loss depends on the LoS/NLoS link state <cit.>.
The computing accuracy parameters are set according to the best fitting parameters reported in <cit.>.
The main simulation parameters are summarized in Table II.
According to the computing delay and accuracy requirements of several ultra-reliable low-latency communication services <cit.>, we assume three task types with different requirements in our simulation. The delay and accuracy limits of the tasks are shown in Table III.
In order to verify the performance of the proposed algorithm, we add the following schemes for comparison:
* Average Computing (AC): The computing capacity of the MEC servers is allocated equally among users.
* Without Compression Ratio (WCR)<cit.>: According to the scheme in <cit.>, the compression ratio is not considered and computing offloading is processed directly.
Fig. 2 demonstrates the convergence of all schemes. Our proposed algorithm converges quickly within L_it iterations, and the curve remains essentially flat after convergence, which means our SCA-based iterative algorithm is stable and convergent. Compared with the benchmark algorithms, our proposed algorithm achieves a higher system utility and better optimization behavior in our system model, which jointly allocates communication resources and computing capacity.
Fig. 3 shows the system utility versus the total number of users U under different bandwidths, i.e., 10 MHz and 50 MHz.
The system utility increases with the total number of users, and the growth slows down when the total number of users exceeds 35 in our proposed algorithm.
When the total number of users is relatively small, resources are sufficient and resource allocation is efficient, so the system utility increases quickly. As the total number of users grows, system resources become limited and resource allocation becomes inefficient, so the growth of the system utility slows down.
The comparison schemes exhibit the same property, but the trend is less pronounced and differs between algorithms.
The figure also shows that a higher bandwidth has a larger impact on our proposed algorithm than on the comparison schemes, which means our scheme uses bandwidth more efficiently.
We compare the system utility versus the computing capacity F_k of the MEC servers under different bandwidths in Fig. 4. The system utility of our proposed algorithm attains a maximum at F_k = 200 Gigacycle/s.
This is because we consider the computing accuracy limit in our system model: the system utility depends on both computing accuracy and task delay, and our proposed algorithm must trade them off. A good trade-off is obtained when F_k is relatively small and reaches a certain value. However, as F_k continues to rise, the communication resources become the bottleneck, which affects the compression ratio and limits the computing accuracy; users then choose local computing, which decreases the system utility.
This property also appears in the comparison algorithm AC with a different maximum point, but in WCR, where the compression ratio is not considered, this trade-off does not arise as F_k increases.
Also, a higher bandwidth does not have a significant impact on this trend of the system utility.
Our method can be applied to practical systems in which specific intelligent tasks, i.e., semantic compression and computing offloading, coexist in MEC systems with requirements on computing accuracy and task delay, and it solves the corresponding offloading and compression decision problems. Our algorithm obtains a higher revenue than traditional methods in this scenario. However, we do not consider communication decisions or more general intelligent task computing in the model, which limits the generality of the model; this will be studied in future work.
§ CONCLUSION
In this paper, we investigated computing offloading and semantic compression for intelligent computing tasks in MEC systems.
Specifically, considering the accuracy requirements of intelligent computing tasks, we formulated an optimization problem of computing offloading and semantic compression and decomposed it into two subproblems, which were solved iteratively through convex optimization and successive convex approximation.
Simulation results have demonstrated that our algorithm converges quickly and achieves better performance and resource utilization efficiency than benchmark schemes as the total number of users and the computing capacity vary.
ref3
M. Chen, D. Gündüz, K. Huang, W. Saad, M. Bennis, A. V. Feljan, and H. V. Poor, "Distributed Learning in Wireless Networks: Recent Progress and Future Challenges", IEEE J. Sel. Areas Commun., vol. 39, no. 12, pp. 3579 - 3605, Dec. 2021.
ref1
H. Xie and Z. Qin, “A lite distributed semantic communication system for internet of things," IEEE J. Sel. Areas Commun., vol. 39, no. 1, pp. 142-153, Nov. 2020.
ref6
Y. Yang, C. Guo, F. Liu, C. Liu, L. Sun, Q. Sun, and J. Chen, “Semantic Communications With AI Tasks", arXiv preprint arXiv:2109.14170, Sep. 2021.
ref2
B. Gu, F. Hu and H. Liu, “Modelling classification performance for large data sets," International Conf. Web-Age Information Management, pp. 317-328, Springer, Berlin, Heidelberg, 2001.
ref4
C. Wang, C. Liang, F.R. Yu, and Q. Chen and L. Tang, “Computation offloading and compression in wireless cellular networks with mobile edge computing," IEEE Trans. Wireless Commun., vol. 16, no. 8, pp. 4924-4938, May. 2017.
ref5
G. Faraci, C. Grasso, and G. Schembra, “Design of a 5G Network Slice Extension With MEC UAVs Managed With Reinforcement Learning," IEEE J. Sel. Areas Commun., vol. 16, no. 7, pp. 2356-2371, Oct. 2020.
ref11
H. Xie, Z. Qin, G. Y. Li, and B. H. Juang. “Deep learning enabled semantic communication systems," IEEE Trans. Signal Process., vol. 69, pp. 2663-2675, Apr. 2021.
ref17
Y. Wang, M. Chen, T. Luo, W. Saad, D. Niyato, H. V. Poor, and S. Cui, “Performance Optimization for Semantic Communications: An Attention-based Reinforcement Learning Approach”, IEEE J. Sel. Areas Commun., vol. 40, no. 9, pp. 2598-2613, Sept. 2022.
ref30
W. Fan, Z. Chen, Z. Hao, Y. Su, F. Wu, B. Tang and Y.A. Liu. “DNN Deployment, Task Offloading, and Resource Allocation for Joint Task Inference in IIoT," IEEE Trans. Industr. Inform., Jul. 2022.
ref29
M. Grant, S. Boyd, and Y. Ye, “CVX: MATLAB software for disciplined convex programming,” 2014. [Online]. Available: http://cvxr.com/cvx/.
ref23
S. Ying, P. Babu, and D. P. Palomar, “Majorization-Minimization Algorithms in Signal Processing, Communications, and Machine Learning," IEEE Trans. Signal Process., vol. 65, no. 3, Feb. 2017.
ref25
3GPP, “Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA); Further advancements for E-UTRA physical layer aspects,” TR 36.814, Release 9, pp. 94-96, Mar. 2017.
ref27
S. Zarandi and H. Tabassum, “Delay minimization in sliced multi-cell mobile edge computing (MEC) systems,” IEEE Commun. Lett., vol. 25, no. 6, pp. 1964-1968, Jan. 2021.
ref10
J. Feng, Q. Pei, F. R. Yu, X. Chu, J. Du, and L. Zhu, “Dynamic Network Slicing and Resource Allocation in Mobile Edge Computing Systems," IEEE Trans. Veh. Technol., vol. 69, no. 7, pp. 7863-7878, Jul. 2020.
entry_id: http://arxiv.org/abs/2307.02285v1
published: 20230705133909
title: Monolithic atom interferometry
authors: Johannes Fiedler, Kim Lefmann, Wolf von Klitzing, Bodil Holst
primary_category: quant-ph
categories: quant-ph, physics.optics
Monolithic atom interferometry
Johannes Fiedler, Kim Lefmann, Wolf von Klitzing, Bodil Holst
==============================
Atom and, more recently, molecule interferometers are used in fundamental research and industrial applications. Most atom interferometers rely on gratings made from laser beams, which can provide high precision but cannot reach very short wavelengths and require complex laser systems to function. Contrary to this, simple monolithic interferometers cut from single crystals offer (sub-)nanometre wavelengths with an extreme level of stability and robustness. Such devices were conceived and demonstrated several decades ago for neutrons and electrons. Here, we propose a monolithic design for a thermal-beam molecule interferometer based on (quantum) reflection. We show, as an example, how a reflective, monolithic interferometer (Mach-Zehnder type) can be realised for a helium beam using Si(111)-H(1×1) surfaces, which have previously been demonstrated to act as very robust and stable diffractive mirrors for neutral helium atoms.
§ INTRODUCTION
The field of atom interferometry has expanded enormously over the last few decades. Atom interferometers are used in various applications, from magnetic and gravity sensing <cit.>, quantum metrology <cit.> to atomic clocks <cit.>. They may even be used as dark matter and gravitational wave detectors <cit.> also in space <cit.>. Compact, portable atom gravimeters for prospecting, oil survey and geophysical investigations have recently become commercially available <cit.>. Atom interferometers will also be useful as accelerometers for sub-sea navigation in submarines and, more recently, underwater drones <cit.>. This, however, will require very compact solutions, which are not presently available.
Atom interferometers use either cold atoms (including Bose-Einstein condensates) <cit.> or thermal atom beams <cit.>, and more recently hot thermal vapours <cit.>. Most optical interferometers have, by now, been realised as atom interferometers, including Young's double slit, Mach-Zehnder, Talbot-Lau, Ramsey-Bordé and Sagnac interferometers.
Historically, Young's double slit makes the simplest atom interferometer. The beam is split into two paths by passing through a double slit, and the interference pattern is observed on a screen further down the beam path. It was realised for atoms for the first time in 1991 using metastable helium atoms passing through a thin gold foil <cit.>.
The simplest split-path atom interferometer is arguably the Mach-Zehnder interferometer. It exploits the de Broglie wavelength of the atoms in a diffraction grating configuration with split beam paths. The first Mach-Zehnder atom interferometer was realised in 1991 <cit.> using a sodium beam and solid transmission diffraction gratings. Later in 1995, it was developed further by using metastable neon and argon and transmission diffraction gratings made of standing light waves <cit.>, in 2002 using ground-state lithium also with light-wave gratings <cit.> and later again using neutral helium with solid gratings. Results from the last mentioned instrument were never published, but it is mentioned in a review paper from 2009 <cit.>.
In the Talbot-Lau interferometer, the self-imaging property of a grating is exploited in near-field diffraction. The atom paths are not truly separated; therefore, this type of interferometer has been used extensively for experiments with heavy molecules where the de-Broglie wavelength is very small. The first Talbot-Lau atom interferometer was realised in 1994 <cit.>.
Where the Mach-Zehnder and Talbot-Lau interferometers are adapted from light optics, the Ramsey-Bordé interferometer, first realised in 1949 by Norman Ramsey <cit.>, can only be used for atoms: the principle is diffraction by absorption of a single photon on a weakly allowed transition to split the wave packet. In the 1980s, this interferometer was further developed by Christian Bordé, who used atomic recoil to create a beam splitter <cit.>. This interferometer type is currently the standard for high-precision measurements, such as atomic clocks.
In light optics, the Sagnac interferometer, also called ring interferometer, relies on a beamsplitter mirror to create two beams that travel equidistant paths in opposite directions through a ring structure guided by reflective mirrors. The two beams meet at the starting point, where they interfere and are made to exit the ring. The first atom interferometer using the Sagnac effect was realised in 1991 using a Ramsey–Bordé configuration of a state-labelled atom interferometer based on single-photon transitions, with a beam of atoms traversing two pairs of travelling wave fields. The laser fields within each pair are separated by a distance D, while the two pairs are separated by d and are counter-propagating with respect to each other <cit.>. By rotating the interferometer, the counter-propagating beams collect different phases along their optical paths leading to an interference pattern on the screen. Such a configuration provides an absolute measurement of the rotational speed.
The atomic structure of a single crystal offers a simple periodic diffraction grating. Thus, it could produce many different types of interferometers, where the monolithic construction guarantees extreme stability. Interferometers based on transmission through solid slabs of material have been demonstrated for X-rays <cit.>, neutrons <cit.> and electrons <cit.>. Unfortunately, these techniques are inapplicable to atoms, which interact too strongly with any solid material they travel through. Monolithic interferometers have been used widely in neutron scattering experiments observing gravitationally induced interference (in transmission) <cit.> and the quantised states of neutrons in the presence of gravitational fields with perfectly reflecting mirrors <cit.>. Neutrons are sensitive to external forces and, thus, suitable candidates for quantum sensing. However, such experiments require an extensive, costly infrastructure to create, control and detect the neutron beam. This also applies to cold atom interferometers. Thermal atom beams are easier to create and couple more robustly to external fields due to the higher mass of the atoms. A further advantage of thermal atom interferometers is that they can operate continuously, dramatically improving the temporal resolution.
Here, we propose a novel interferometer based on the reflection of atoms on monolithic single-crystal structures. The basic operation principle is depicted in Fig. <ref>: an incident beam of atoms is reflected by the crystal lattice (A) into two components, which impinge onto a second mirror and recombine on the third reflection.
In the past, atoms had been neglected, largely because the atoms most commonly used in interferometry (Rubidium, Rb <cit.>; Caesium, Cs <cit.>; Argon, Ar <cit.>; Sodium, Na <cit.>; Potassium, K <cit.>) will stick to surfaces under most conditions. Similarly, metastable atoms, which have also been used for interferometry (Argon, Ar <cit.>; Helium, He <cit.>), will decay upon impingement. A further practical challenge for a reflection-based interferometer is the contamination of the reflecting surface, which distorts the diffraction. For example, all metal surfaces will be covered in physisorbed molecules within hours, even in ultra-high vacuum <cit.>.
Noble gasses, including ground-state helium, H_2, HCl and other molecules, are known to scatter from various surfaces over a broad temperature range without sticking to them <cit.>. Over the last years, focusing mirrors for neutral, ground-state helium have been developed for neutral helium microscopes <cit.>. An important requirement for these mirrors is that they must remain stable in a vacuum for months. One of the solutions implemented was Si(111)-H(1 × 1) <cit.>. Detailed experiments on He and H_2 scattering were performed <cit.> and the interaction potential between Helium and Si(111)-H(1 × 1) calculated <cit.>. This interaction potential was then used to obtain the intensity of the different diffraction peaks for a range of conditions <cit.>.
The advantage of the Si(111)-H(1 × 1) surface from an experimental point of view is that it can be prepared chemically by dipping the Si(111) crystal in an HF solution <cit.>. This means a monolithic configuration with two reflecting surfaces facing each other can be fabricated at any spacing. An additional advantage of the Si(111)-H(1 × 1) surface is its small lattice constant of a_ S=3.383 Å <cit.>, which, together with the wavelength of, for example, helium atoms in a room-temperature beam, λ_ dB =0.55 Å, ensures a very large wave-packet separation. Recent matter-wave interferometers typically split the wave packet over a few milliradians <cit.>. In contrast, using the room-temperature helium beam described above, the proposed new interferometer splits the matter wave over 0.5 radians.
The atom interferometer we introduce here uses reflective atom-surface diffraction as a beam splitter. Further reflections from a parallel surface yield the recombination of the wave and thus the interference; see Fig. <ref>. We present a theoretical model for the expected interference patterns and apply it to the interference of helium atoms on Si(111)-H(1 × 1) surfaces, concentrating on the general principles: an ideal system with a perfectly coherent, monochromatic beam and an experimentally based model for the diffraction probabilities. We have chosen an experimentally realisable parameter set that exhibits all possible superpositions occurring in such an interferometer: single-path transmission, double-path superposition with vanishing phase, and multipath interference. Finally, we discuss how a reflective interferometer based on quantum reflection can be realised. The paper finishes with a conclusion and an outlook on future work.
§ THE REFLECTIVE INTERFEROMETER
§.§ Geometric arrangement
The general arrangement of a monolithic reflection interferometer is depicted in Fig. <ref>. A slab is cut into a U-shaped monolith to form two parallel planar surfaces separated by a distance s that is sufficiently large to support propagating waves inside the interferometer. A particle beam is diffracted at least three times, at points A, B and C. The beam is split at point A, each part is reflected at points B and B', and the parts recombine at point C, where they interfere.
In detail: a particle beam is sent at an incidence angle α towards one surface. It is reflectively split at point A into a range of diffraction orders determined by the incidence angle α, the periodic surface structure described by the lattice spacing a_ S, and the beam wavelength λ through the well-known reciprocal lattice equation <cit.>. We pick two orders, the first one with the reflection angle β
sinβ = sinα +n_1λ/a_ S ,
with an integer n_i∈ℤ (numerating the diffraction order), and the second one with reflection angle γ
sinγ = sinα +n_1'λ/a_ S .
At point A, the two selected diffraction orders propagate towards points B and B', where they are reflected towards point C and recombine. Point B denotes the reflection point one diffraction order from point A; thus, the corresponding incidence angle is β. To satisfy the recombination of the beam, the reflection angle δ has to be of a non-zeroth diffraction order expressed as
sinδ= sinβ + n_2λ/a_ S = sinα +(n_1+n_2)λ/a_ S .
Analogously, the reflection at point B' can be determined by
sinε = sinγ + n_2'λ/a_ S=cosα + (n_1'+n_2')λ/a_ S .
To ensure recombination of the beam at point C, the diffraction of the two incoming beams needs to occur at the same diffraction angle, which is described mathematically by the relation
sinζ = sinδ + n_3λ/a_ S = sinα + (n_1+n_2+n_3)λ/a_ S ,
and
sinζ = sinε + n_3'λ/a_ S= sinα + (n_1'+n_2'+n_3')λ/a_ S .
These equations yield a constrain for the diffraction orders
n_3' = n_1+n_2+n_3-n_1'-n_2' .
In addition to this angular dependence, the distance between points A and C needs to be the same for both paths to satisfy the recombination of the beams. Figure <ref> illustrates this condition: the blue and red beamlines need to recombine in the same point C. Otherwise, they would be reflected without any spatial overlap to interfere directly. If they are reflected into parallel beams from different spots, they will interfere in the far field with a phase shift proportional to the spatial difference between both points. To achieve interference also in the optical near-field regime for the entire interferometer, the condition reads
tanβ+tanδ = tanγ+tanε .
Finally, we collect six parameters characterising a reflective atom interferometer, which have to satisfy conditions (<ref>) and (<ref>). The latter can either be used to determine the incidence angle α or, by rewriting the equation
tan(c+N_1) +tan(c+N_1+N_2)- tan(c+N_1')-tan(c+N_1'+N_2') =0 ,
with c=cosα and N_i = n_iλ/a_ S, one finds the following conditions leading to an α-independent solution:
n_1 = n_1' + n_2'∧ n_1' = n_1 +n_2 .
The interference pattern is due to the phase shift along the different optical paths ABC and AB'C. The path lengths can be determined via these angles for the path along point B
b=s(1/cosβ+1/cosδ) ,
and along the point B'
b'=s(1/cosγ+1/cosε) .
The interference occurs via the superposition of two waves with the same wave vector k that are phase-shifted according to the path-length difference b-b'. Hence, the phase shift between the two paths is given by
φ = k(b-b') .
It can be observed in Eqs. (<ref>) and (<ref>) that the path lengths are proportional to the slab separation s and, thus, s should be tuned with respect to the wave vector to maximise the phase shift between both interfering beams.
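To make the geometry concrete, the following Python sketch evaluates the grating equation, the recombination condition and the path lengths for one α-independent order combination satisfying the conditions above; the wavelength and lattice constant anticipate the He/Si(111)-H(1×1) example below, and the chosen orders are for illustration only.

import numpy as np

lam, a_S = 0.55, 3.383          # de Broglie wavelength and lattice constant [Angstrom]
s = 5e7                          # slab separation: 5 mm in Angstrom
alpha = np.radians(83.0)

def diffract(theta_in, n):
    """Grating equation: sin(theta_out) = sin(theta_in) + n * lam / a_S."""
    return np.arcsin(np.sin(theta_in) + n * lam / a_S)

# Orders (n1, n2) = (-1, -1) and (n1', n2') = (-2, +1) satisfy the
# alpha-independent conditions n1 = n1' + n2' and n1' = n1 + n2.
beta = diffract(alpha, -1); delta = diffract(beta, -1)    # path via B
gamma = diffract(alpha, -2); eps = diffract(gamma, +1)    # path via B'

b = s * (1 / np.cos(beta) + 1 / np.cos(delta))            # path length via B
b_p = s * (1 / np.cos(gamma) + 1 / np.cos(eps))           # path length via B'
recombines = np.isclose(np.tan(beta) + np.tan(delta),
                        np.tan(gamma) + np.tan(eps))      # recombination condition
phi = (2 * np.pi / lam) * (b - b_p)                       # phase shift k(b - b')

With this particular order choice the two paths have equal lengths, so phi = 0, reproducing the fringe-free spot discussed in the example below; order combinations with unequal path lengths yield a non-zero phase and hence fringes.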
Figure <ref> illustrates the positions of the different diffraction orders for different incidence angles α for a particular interferometer configuration. It can be seen that the diffraction orders are strongly separated. All lines are discontinuous due to the finite length of the interferometer, which leads to some beams escaping the interferometer. These particles will likely hit the surface and fall into the interferometer; thus, they will not affect the interference patterns.
§.§ Reflection coefficients for the different beam paths inside the interferometer
In the last section, the conditions for interference were obtained. We now consider the intensity distribution in the interference signal, described via a reflection function.
This reflection function depends on the incidence and diffraction angle ϑ_1 and ϑ_2, respectively. We model the reflected beam via a Gaussian intensity distribution. Consequently, each diffraction order has a Gaussian profile which we normalise to the real-valued probability of each diffraction order ρ_n
r(ϑ_1,ϑ_2) = ∑_n ρ_n e^-(ϑ_2 - θ_n)^2/2σ_n^2 ,
with the width of the diffracted signal σ_n and the position of the diffracted beam θ_n determined by Eq. (<ref>). The widths depend on the incidence angle and wavelength, σ_n=σ_n (λ,ϑ_1). These effects are negligible for the surface diffraction considered in this manuscript due to the overall weak reflection signal <cit.>. The reflection coefficient (<ref>) only includes elastic scattering, i.e., the wavelength of the outgoing wave is the same as that of the incoming wave, λ_ inc = λ_ out. Thus, the total reflected signal is smaller than one, ∫dϑ_2 r(ϑ_1,ϑ_2) <1.
In general, there are five lengths involved in such an interferometer: the wavelength (λ_ dB), the dimensions of the interferometer (length d and slab separation s) and the free-space propagation lengths (source to interferometer L_1 and interferometer to detector L_2). Typically, these dimensions are on different length scales, λ_ dB≪ d,s < L_1, L_2. This consideration allows for a separation of length scales. Consequently, each particle will only interfere with itself inside the same optical path in the interferometer. Thus, we can treat each path inside the interferometer separately, and the collected diffraction image will follow from the Gaussian beam envelope. The partial waves will experience different phase shifts due to the optical paths (<ref>). Thus, we can describe the reflection properties of the entire interferometer with a single modified reflection coefficient
r_ inter(ϑ_1,ϑ_2) = ∑_n_1n_2n_3ρ_n_1ρ_n_2ρ_n_3 f_n_1n_2n_3 e^i k b_n_1n_2 e^-[ϑ_2 - θ_n_1n_2n_3(ϑ_1)]^2/2σ^2 ,
with the wave vector of the matter wave, k = 2π/λ, and the indicator function f_n_1n_2n_3 encoding the interferometer's geometry (which determines whether the beam can pass through the interferometer or not). The beam spread of all diffraction orders will usually be the same for a monochromatic wave, σ=σ_n for all diffraction orders n. Due to the tilted reflective surfaces with respect to the beam incidence, the detected spots will be slightly asymmetric, which we neglect in this manuscript. The position of each diffraction order is given by θ_n_1n_2n_3(ϑ_1), the three-fold composition of Eq. (<ref>), simplifying to
θ_n_1n_2n_3(ϑ_1) = arcsin[sinϑ_1 +(n_1+n_2+n_3)λ/a_ S] .
§.§ The monolithic interferometer for He and Si(111)-H(1 × 1)
Let us consider a helium beam with de-Broglie wavelength λ_ dB = 0.55Å and a beam spread of 1 mrad at a distance of 1 m from the interferometer (propagation length L_1=1 m). This corresponds to a 1 mm beam waist (w= 1 mm) as the beam enters the interferometer. The lattice spacing of Si(111)-H(1×1) is a_ S = 3.383 Å <cit.>. We consider an incidence angle of 83 deg and the reflection coefficient (<ref>) with the amplitudes ρ_0=0.06, ρ_±1 = 0.03 and ρ_±2 =0.015. These scattering values correspond to the experimentally obtained data for a beam with a wavelength of 0.6 Å and an incidence angle of 52 deg reported in Ref. <cit.>. We changed the incidence angle because the result would otherwise be restricted to the zeroth order; see Fig. <ref>. Table <ref> shows the fraction of atoms transmitted into each diffraction channel. The reflection coefficients influence only the amplitude of the interference patterns, not the position of the peaks; thus, the impact of the exact scattering amplitudes is negligible. We have chosen the parameters to demonstrate several effects: single-beam transmission, two-path superposition and multi-path interference. Here we restrict our considerations to the zeroth, first and second diffraction orders. Furthermore, we consider the reflecting plates to be 50 mm long and separated by 5 mm. The optical paths for this scenario are depicted in Fig. <ref>. It can be seen that the third-order diffraction beam (at 30.32 deg, blue line) consists of a single path and thus will not show any interference; the second-order beam (at 41.87 deg, orange line) is the superposition of two paths, as described in Sec. <ref>, but with equal optical path lengths, which again will not produce interference fringes; and the first- and zeroth-order beams each consist of two separate paths, which will lead to interference in the far field.
Due to the separation of the length scales and the fact that the atoms interfere with themselves and not with each other, we describe each interference pattern via a phase-shifted Gaussian wave in analogy to the Michelson interferometer. Thus, the interference pattern is described by the superposition of phase-shifted Gaussian waves
I(φ) ∝|∑_n a_n e^i k sin(φ/2) b_n|^2 e^-2 L_2^2sin^2φ/ w^2 ,
with the amplitudes a_n=ρ_n_1ρ_n_2ρ_n_3 and the optical path lengths b_n, which are given in Table <ref>. The widths of the diffraction orders σ are small compared to the width of the Gaussian envelope, L_2sinσ≪ w, and hence can be neglected. It can be seen in Eq. (<ref>) that the interference fringes are determined by the wave vector, k=2π/λ_ dB. Thus, decreasing the wavelength, either by increasing the particle's mass or its velocity, will reduce the spacing between the interference fringes. The resulting interference patterns are plotted in Fig. <ref>. The diffraction at 30.32 deg and 41.87 deg shows no interference features, due to single-path transmission and equal optical path lengths, respectively. The remaining two spots show interference with a contrast of 48.5% for the spot at 56.10 deg and 84.1% for the spot at 83.00 deg. The transmission rates of all channels can be found in Table <ref>: 0.027% of the atoms are diffracted at an angle of 30.32 deg, 0.0675% at 41.87 deg, 0.1688% at 56.10 deg, and 0.216% at 83 deg. The remaining particles do not leave the interferometer. The intensity of a typical helium beam is so high <cit.> that a signal fraction of 10^-4 can easily be detected. The velocity spread will be the limiting quantity for measuring the interference patterns in the helium atom interferometry configuration depicted in Fig. <ref>. The velocity spread of a supersonic helium beam depends on the beam temperature, the nozzle diameter and the reservoir pressure; this has been treated extensively in the literature, see, for example, Ref. <cit.>. A velocity spread causes two effects: (i) a broadening of the interference fringes and (ii) a spatial shift of the entire interference pattern, as illustrated in Fig. <ref>. To observe interference, the velocity spread therefore has to be sufficiently small not to wash out the interference fringes. Figure <ref> illustrates the positions of the diffraction orders for different wavelengths of the incoming beam at a fixed incidence angle of 83 deg. The zeroth order stays constant, while the remaining orders spread out strongly with increasing wavelength. As in Fig. <ref>, the lines are not continuous due to the finite size of the interferometer. Table <ref> shows that the interferometer splits the wave packet at the first diffraction point over ≈ 0.71 rad.
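A hedged sketch of the interference pattern follows; since the exponent in Eq. (<ref>) is partially garbled in this copy, the code assumes the phase factor exp(i k sin(φ/2) b_n), and the amplitudes and path lengths are placeholder values, not the entries of Table <ref>.

import numpy as np

def intensity(phi, amps, paths, k, w, L2):
    """I(phi) ~ |sum_n a_n exp(i k sin(phi/2) b_n)|^2 * exp(-2 (L2 sin(phi) / w)^2)."""
    field = sum(a * np.exp(1j * k * np.sin(phi / 2) * b)
                for a, b in zip(amps, paths))
    return np.abs(field) ** 2 * np.exp(-2 * (L2 * np.sin(phi) / w) ** 2)

# Hypothetical two-path spot; lengths in Angstrom (1 mm waist, 1 m to the detector).
phi = np.linspace(-2e-3, 2e-3, 2001)
I = intensity(phi, amps=[0.06 * 0.03 * 0.03, 0.03 * 0.03 * 0.06],
              paths=[1.02e8, 1.05e8], k=2 * np.pi / 0.55, w=1e7, L2=1e10)
contrast = (I.max() - I.min()) / (I.max() + I.min())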
§.§ Quantum reflection interferometer
Quantum reflection occurs on the attractive (outer) part of the atom-surface interaction potential <cit.>, in contrast to surface scattering, where the reflection occurs on the repulsive (inner) part of the interaction potential <cit.>. It is called quantum reflection because, classically, reflection cannot occur in a purely attractive interaction potential. Quantum reflection has the major advantage that a wide range of atoms and small, few-atom molecules that would stick under surface diffraction conditions display quantum reflection. The disadvantage is that quantum reflection requires small perpendicular wave vectors. This means that, for a given wavelength, the spatial extension of a reflective interferometer must be larger than in the surface scattering configuration for the separated beams to recombine.
Quantum reflection is less sensitive than surface scattering to defects and surface contamination because it occurs at larger distances from the surface <cit.>. Very large specular reflection coefficients of the order of 50% <cit.> up to 90% <cit.> have been measured. Diffraction via quantum reflection was recently demonstrated experimentally <cit.> using helium dimers and trimers with periodically striped surfaces with micron-sized structures. The paper includes a comparison of the experimental result with scattering theory based on the diffraction angle distribution (<ref>), reported in Ref. <cit.>. There is reasonable agreement between theory and experiment.
§ CONCLUSIONS AND FUTURE WORK
This paper presents the first proposal for a reflective interferometer for atoms and molecules. We present calculations for a monolithic configuration based on experimental scattering results for a room-temperature helium beam on Si(111)-H(1× 1), showing that a beam splitting of more than 0.5 radians is achievable. Furthermore, we argue that diffraction via quantum reflection is a viable option for extending the range of beams and surfaces that can be used and for potentially increasing the signal intensity. The interference of larger and more complex molecules can be achieved by using different interaction potentials, such as evanescent fields <cit.>. A reflective atom or molecule interferometer, particularly in a monolithic configuration, opens several possibilities for applications, for instance as an accelerometer, for investigating the coherence of matter waves near dielectric surfaces, or as a continuous velocity selector.
The obvious next step is a demonstration experiment of the new interferometer with a helium beam, together with detailed designs of quantum reflection setups. The latter will require the calculation of quantum (diffraction) reflection coefficients for a range of realistic system configurations.
§ ACKNOWLEDGMENTS
J.F. gratefully acknowledges support from the European Union (H2020-MSCA-IF-2020, grant number: 101031712).
entry_id: http://arxiv.org/abs/2307.02873v1
published: 20230706091903
title: Computing Motion Plans for Assembling Particles with Global Control
authors: Patrick Blumenberg, Arne Schmidt, Aaron T. Becker
primary_category: cs.RO
categories: cs.RO
Computing Motion Plans for Assembling Particles with Global Control
Patrick Blumenberg, Arne Schmidt, Aaron T. Becker
===================================================================
We investigate motion planning algorithms for the assembly of shapes in the tilt model, in which unit-square tiles move in a grid world under the influence of uniform external forces and self-assemble according to certain rules.
We provide several heuristics and experimental evaluation of their success rate, solution length, runtime, and memory consumption.
§ INTRODUCTION
In the tilt model micro particles move under a global control such as gravity or a magnetic force.
On actuation, particles move to the designated direction unless blocked by walls or other particles. Compatible particles bond on contact. As an example, see Fig. <ref>.
Previous work <cit.> considered the assembly problem that asks whether a given shape can be assembled in the tilt model.
However, these works are mostly theoretical and provide hardness proofs and runtime complexity.
In this paper, we search for efficient motion plans to assemble shapes in a given environment.
In particular, we provide several heuristics and an evaluation based on randomly-generated instances.
§.§ Related Work
Motion planning is a fundamental and well-studied research topic in the field of robotics.
The general problem, known as the reconfiguration problem, is to find a sequence of moves of one or multiple robots that transforms an initial configuration, i.e., the set of all positions of robots and obstacles, into a specific target configuration while avoiding collisions.
Even when all robots are rectangular, this problem is PSPACE-hard <cit.>.
To solve problems of this type, several heuristics have been developed:
search-based planning algorithms <cit.> such as A*, which build a graph structure over the configuration space and then use graph-search algorithms to find a path;
potential field method, in which an artificial potential field represents attractive and repulsive forces that guide the path of the robots <cit.>;
sampling-based methods like probabilistic roadmaps (PRM) <cit.> and rapidly exploring random trees (RRT) <cit.> that can be used to discover a path in high-dimensional configuration spaces (e.g. humanoid robots or multi-robot systems).
In this paper, we consider search-based algorithms as well as RRTs to solve motion planning problems in the tilt model.
The tilt model of motion planning, introduced in <cit.>, presents a possible solution for scenarios in which a collection of particles or robots that cannot be moved individually needs to be reconfigured.
When an external force is switched on, all particles move maximally (full-tilt motion model, as used in <cit.>) or by one unit step (single-step tilt model, as used in <cit.>) in the given direction of the force.
In this model, various types of problems have been investigated:
Assembly problem, i.e., can a given shape be constructed <cit.>;
Gathering problem, i.e., how fast can particles be collected in a specific area <cit.>;
Occupancy Problem, i.e., can a specific location be reached by at least one particle <cit.>;
Relocation Problem, i.e., can a specific particle be moved to its designated position <cit.>; and Reconfiguration Problem, i.e., can a configuration be reached where every particle is in its goal position <cit.>.
Many of the tilt problems are at least NP-hard. In particular, the reconfiguration problem is PSPACE-complete <cit.>.
If the workspace can be designed in advance, it can be shown that rearranging shapes can be performed efficiently <cit.>.
To the best of our knowledge, no other work on the design and experimental evaluation of tilt-related motion planning algorithms in the context of shape assembly has been published.
Our simulation code and test instance generation are shared at GitHub[ https://github.com/RoboticSwarmControl/2023TiltMotionPlan/https://github.com/RoboticSwarmControl/2023TiltMotionPlan/].
§ PRELIMINARIES
A board B is a rectangular region of ℤ^2 with each position either marked as open or blocked. Blocked positions represent obstacles and cannot contain tiles. Furthermore, the induced grid graph G_B is defined as the graph in which the open positions of B are nodes and two nodes are connected if and only if their distance is 1. B is called connected if and only if G_B is connected.
A tile t= (p, g) is a unit square centered on an open position p with edge glues g := (g_N(t), g_E(t), g_S(t), g_W(t)), where g_d(t) denotes the glue on the edge of t facing in the cardinal direction d ∈{N,E,S,W}. Each glue is an element of a finite alphabet Σ, which also contains a special null glue ∅. A glue function G: Σ×Σ→{0, 1} defines which glues stick to each other. Two glues g_1, g_2∈Σ stick to each other with respect to G if and only if G(g_1, g_2) = 1. Furthermore, G has the properties G(g_1, g_2) = G(g_2, g_1) and G(g_1, ∅) = 0 for all g_1, g_2∈Σ. Tiles bond if and only if they are located on adjacent positions and the glues on the shared edge stick together. The bond is considered permanent, and the tiles hereafter move as a unit.
A polyomino P is a finite set of tiles that forms a connected component in the graph in which bonded tiles are adjacent.
The position of P is defined as the position of the top-leftmost tile in P.
A configuration is a tuple C = (B, T), consisting of a board B and a set of tiles T on open positions.
A step is a transformation between two configurations moving every polyomino (including single tiles) by one unit in one of the cardinal directions unless blocked by an obstacle.
Polyominoes that do not move are called blocked.
For simplicity, a step can be denoted by the direction d ∈{N, E, S, W} in which the polyominoes are moved.
A step sequence is a series of steps.
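To fix the semantics, the following Python sketch applies a single step to a set of polyominoes; blocking is propagated to a fixpoint, since a polyomino can be blocked indirectly by another blocked polyomino. Bonding of compatible tiles after the step is omitted for brevity; this is an illustrative sketch, not the implementation from our repository.

from typing import List, Set, Tuple

Pos = Tuple[int, int]
DIRS = {"N": (0, -1), "E": (1, 0), "S": (0, 1), "W": (-1, 0)}

def step(blocked_cells: Set[Pos], polys: List[Set[Pos]], d: str) -> List[Set[Pos]]:
    """Move every polyomino one unit in direction d unless blocked."""
    dx, dy = DIRS[d]
    moving = set(range(len(polys)))
    changed = True
    while changed:                          # propagate blocking until stable
        changed = False
        stuck = set(blocked_cells)
        for i in range(len(polys)):
            if i not in moving:
                stuck |= polys[i]           # cells of blocked polyominoes also block
        for i in list(moving):
            if {(x + dx, y + dy) for x, y in polys[i]} & stuck:
                moving.discard(i)
                changed = True
    return [{(x + dx, y + dy) for x, y in P} if i in moving else P
            for i, P in enumerate(polys)]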
Given a configuration C= (B, T) and a connected set of open positions X ⊆ B, the Polyomino Assembly Problem (PAP) asks for either a step sequence S such that consecutively applying the steps of S to C results in a configuration containing a polyomino P whose tile positions are exactly X, or a proof that no such configuration is reachable.
If there is a fixed tile t_fixed∈ T that does not move under step transformations, and tiles can only bond if the resulting polyomino contains t_fixed, then we call this problem the Fixed Seed Tile Polyomino Assembly Problem (FPAP).
Based on a reduction used in <cit.>, we can show that FPAP is PSPACE-complete, even if there are exactly as many tiles on the board as required for the target shape.
The Fixed Seed Tile Polyomino Assembly Problem is in PSPACE
Repeated, non-deterministic selection of control inputs until the target shape is assembled solves the problem and only requires us to store the current configuration. Therefore, the problem is in NPSPACE, which is equal to PSPACE.
The Fixed Seed Tile Polyomino Assembly Problem is PSPACE-hard
To show PSPACE-hardness, we adapt a proof from <cit.> and reduce from the k-region relocation problem of the full-tilt model <cit.>. In this problem, k disjoint regions each contain a single tile. The goal is to move all k tiles to the 1 × 3 region at the bottom of their respective component using tilt transformations. Analogous to <cit.>, we start the reduction by connecting the 1 × 3 regions to a single bottom row and invert the tile placement by placing tiles on all open spaces that did not contain a tile in the original instance of the k-region relocation problem. Additionally, we place a fixed seed tile k positions to the right and 1 down from the leftmost tile in the bottom row t_reloc, as shown in Figure <ref>. We assign glues as follows: The fixed seed tile has glue A on all four edges. t_reloc has glue B on all edges. Every other tile has glue C on all edges. Furthermore, we define the glue function G such that G(A,B) = G(B,C) = G(C,C) = 1 and G(X, Y) = 0 for all other pairs of glues X, Y. Finally, we define the target shape as all open spaces inside the connected region, except for the k leftmost positions in the bottom row. Now a solution to the constructed instance of the Fixed Seed Tile Polyomino Assembly Problem corresponds to a solution to the original instance of the k-region relocation problem. The key idea is that tiles can only bond if they are connected to the seed tile and that only t_reloc can directly bond with the seed tile according to the glue function. Therefore, t_reloc must be relocated k positions to the right, which was shown to solve the original k-region relocation problem (in <cit.>). Conversely, once t_reloc has been successfully moved k spaces to the right, the target shape is immediately assembled, as all open spaces, except the k leftmost positions in the bottom row, are filled with tiles that can bond according to the glue function and are connected to the fixed seed tile via t_reloc.
The Fixed Seed Tile Polyomino Assembly Problem with extra tiles is PSPACE-hard.
As a basis for the definition of our heuristic functions, we use the following definitions of distance between positions, tiles and the target shape.
Consider a configuration C= (B, T) and let p = (p_1, p_2), q = (q_1, q_2) be open positions in B.
Then d_M(p) denotes the length of a shortest path in G_B from p to any position in M ⊆ B, and
d_1(p, q) := |p_1 - q_1| + |p_2 - q_2| is called the taxicab distance.
For the sake of brevity, we define d(t) := d(p_t) and analogously d_1(t, q) := d_1(p_t, q) for t = (p_t, g_t) ∈ T.
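The following Python sketch illustrates both distances; the function names and representation are our own, assuming the board's open positions are given as a set with 4-connected adjacency (the graph G_B).

```python
from collections import deque
from typing import Dict, Iterable, Set, Tuple

Pos = Tuple[int, int]

def d_1(p: Pos, q: Pos) -> int:
    """Taxicab distance between two positions."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_M(open_pos: Set[Pos], M: Iterable[Pos]) -> Dict[Pos, int]:
    """Multi-source BFS on the graph of open positions; the returned map
    gives, for every reachable position p, the length of a shortest path
    from p to the set M."""
    dist = {m: 0 for m in M}
    queue = deque(dist)
    while queue:
        x, y = queue.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n in open_pos and n not in dist:
                dist[n] = dist[(x, y)] + 1
                queue.append(n)
    return dist
```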
§ ALGORITHMIC APPROACHES
In this section, we investigate algorithmic approaches for solving the Polyomino Assembly Problem with and without fixed seed tiles. We focus on best-first search algorithms with different heuristics, some of which are admissible, and therefore lead to an A* search that finds solutions of optimal length.
These heuristics are then further divided into simultaneous construction, allowing multiple subassemblies, and incremental construction, adding single tiles to the target shape while maintaining separation of the remaining tiles.
We also investigate pruning techniques to avoid unnecessary computation. In addition to the best-first search algorithms, we propose an approach using RRTs.
§.§ Simultaneous Construction
Simultaneous construction uses a best-first search with a heuristic based on the distance of the available tiles.
§.§.§ Greatest Distance heuristic (GD)
GD is an admissible heuristic that can be used with an A* search: the value of the heuristic function h_GD(C, X) is added to the distance of the current configuration from the initial configuration, i.e., the number of moves required to reach the current configuration.
In particular, h_GD(C, X) is defined as follows.
Let C = (B, T) be a configuration and X a target shape with |X| = n.
Then h_GD(C, X) is the distance of the n-th nearest tile in T_available to X, where T_available is the set of tiles which are part of some polyomino P that can be moved to a position where it is contained in X.
In order to decide if a given tile is in T_available, our implementation remembers for each polyomino if it is able to reach the target shape and fit into the target shape. These properties are only reevaluated whenever the polyomino changes.
GD is a consistent heuristic because a single step, which has a cost of 1, applied to a configuration reduces the heuristic value of GD by a maximum of 1.
h_GD can be used in combination with a greedy best-first search approach by not adding the distance from the initial configuration to the heuristic value. If the heuristic is used this way, we refer to the resulting algorithms as Greedy Greatest Distance (GGD). The idea of this approach is to give up the optimality of the solution in order to potentially improve the execution speed.
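A minimal sketch of evaluating h_GD, assuming the positions of the tiles in T_available and the BFS distances to X (as in the sketch above) are given; the names are illustrative.

```python
def h_gd(available_positions, dist_to_X, n):
    """GD: distance of the n-th nearest available tile to the target shape X.
    `dist_to_X` maps positions to d_X (e.g., from a multi-source BFS) and
    `available_positions` holds the positions of the tiles in T_available."""
    dists = sorted(dist_to_X[p] for p in available_positions)
    if len(dists) < n:
        return float("inf")  # fewer than n available tiles: no solution possible
    return dists[n - 1]
```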
§.§.§ Pruning methods
There are multiple pruning methods that can be used together with the simultaneous construction approach. Firstly, if |T_available| < n, the configuration can never lead to a solution.
Furthermore, when it can be shown that there is no subset of existing polyominoes which can exactly cover the target shape, the branch can be pruned.
In general, however, it may not be computationally feasible to decide whether such a tiling of the target shape exists whenever a polyomino changes.
In case there are exactly as many tiles on the board as required for the target shape, we determine if the k largest polyominoes can be packed into the target shape. If this is not possible then the branch can be pruned. In our implementation, k is set to 3.
Alternatively, when only a single polyomino exists on the board, we check if this polyomino fits into the target area and if it can be moved to the target area.
Although this may not yield optimal solutions (even if an A* search with a constant heuristic is used), the solution is at most d_max steps longer, where d_max is the maximum distance between any two open positions.
§.§ Incremental Construction
In contrast to simultaneous construction, incremental construction uses multiple consecutive best-first searches to add tiles to a polyomino one after another. For that reason, the heuristic function used in each of the best-first searches depends on the positions of the subassembly and the tile that we are currently trying to add. To make this possible, an incremental construction method needs to determine a suitable set of tiles and a building order in which these tiles can be added to the target polyomino. Each best-first search attempts to keep all tiles that are not involved in the current construction step separated from each other by pruning configurations with undesired subassemblies. Importantly, this approach is not a complete solution, since it is not always possible to avoid the creation of multiple subassemblies. Furthermore, an obtained solution is generally not of optimal length, even if each construction step consists of an A* search with a consistent heuristic.
§.§.§ Minimum Moves to Polyomino heuristic (MMP)
Given a configuration C = (B, T), a target shape X, and a Polyomino P, the MMP heuristic provides a lower bound on the number of moves required to move a selected tile t= (p_t, g_t) to a
position with coordinates x adjacent to P.
Although x may not be an open position, we consider x to be one for the heuristic function, which is defined as follows.
h_MMP(C, X, t, x) := ⌈ (d(t, x) - d_1(t, x)) / 2 ⌉ + d_1(t, x),
i.e., h_MMP is a lower bound on the number of moves required to move t to the target position relative to P.
A disadvantage of this heuristic is that it requires knowledge of the pairwise distance between all pairs of the m open positions on the board. These distances can be computed in advance by breadth-first searches starting on each open position in time O(m^2).
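The formula translates directly to code; a sketch, assuming the shortest-path distance d and the taxicab distance d_1 from the tile to the target position are precomputed.

```python
import math

def h_mmp(d: int, d1: int) -> int:
    """MMP lower bound on the moves needed to bring the tile to the target
    position: implements the formula above from the two precomputed
    distances (d >= d1 always holds)."""
    return math.ceil((d - d1) / 2) + d1
```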
§.§.§ Minimum Moves to Polyomino or Target heuristic (MMPT)
The MMPT heuristic aims to reduce the need to calculate pairwise distances. To achieve this, MMPT only uses the estimated number of required moves for the heuristic value calculation if the current subassembly and the selected tile are within a specific distance (threshold distance d_t) from the target area. Define the target area A as the set of all open positions that are reachable (i.e., can be covered) by the final polyomino. The target area can be computed with a breadth-first search.
If t or x is not near the target area, the maximum distance of the two components to the target area is used as a basis for the heuristic value instead. To prioritize configurations where both components are near the target area, we apply a weighting factor w = |A| in the latter case.
h_MMPT(C, X, t, x) :=
  h_MMP(C, X, t, x)            if d_A(p_t) ≤ d_t and d_A(x) ≤ d_t,
  w · max(d_A(p_t), d_A(x))    otherwise.
This approach has two main advantages over MMP. Firstly, it only requires the computation of pairwise distances within the proximity of the target area. Secondly, it encourages the formation of subassemblies within the reachable area of the target shape.
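A sketch of the case distinction, with the MMP estimate, the distances to the target area A, the threshold d_t, and the weight w assumed given.

```python
def h_mmpt(h_mmp_val, d_A_t, d_A_x, d_threshold, w):
    """MMPT: use the MMP estimate only when both the tile and the attachment
    position x are within d_threshold of the target area A; otherwise steer
    the search toward A via the weighted maximum of the two distances."""
    if d_A_t <= d_threshold and d_A_x <= d_threshold:
        return h_mmp_val
    return w * max(d_A_t, d_A_x)
```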
§.§.§ Distance to Fixed Position heuristic (DFP)
In the case of a fixed seed tile, a much simpler heuristic can be used.
Because the target position of the tile is fixed, each move can reduce the length of a shortest path from the tile to the target position by at most 1.
Therefore:
h_DFP(C, X, t, x) := d(t, x).
Additionally, during the computation of the shortest paths to the target position the positions of tiles contained in the fixed polyomino, as well as neighboring positions that are impassable for the selected tile because of glues on the edges of a fixed tile, are marked as blocked.
§.§.§ Pruning methods
All of the incremental construction approaches use the same two pruning methods. Firstly, a branch is pruned if its configuration contains more than one subassembly. Secondly, a branch is pruned if the existing subassembly cannot be moved to a position where it is contained in the target shape.
§.§.§ Computation of the building order
A brute-force algorithm is used to determine a tiling of the target shape such that all tiles bond. A recursive backtracking algorithm is then utilized to remove tiles from the polyomino one at a time, in order to determine a potential building order.
In each iteration, the algorithm checks if the remaining tiles are still connected by glues and if a path exists along which the selected tile can be moved outside of the enclosing rectangle without being blocked by other tiles or glues.
If all tiles can be removed from the polyomino in this way the reversed deconstruction order is a potential building order.
This approach is directly inspired by research on the constructibility of polyominoes under tilt transformations <cit.>.
If no building order is found, a different tiling of the polyomino is attempted until a potential building order is found or all possible tilings were considered.
A building order found in this way is not guaranteed to be realizable on the board of the problem instance. If any of the best-first searches during the motion planning process terminate without finding a solution we proceed with the next candidate building order.
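The following sketch outlines the backtracking search for a removal order; glue_connected and can_escape are hypothetical placeholders for the glue-connectivity and escape-path checks described above.

```python
def building_order(tiles, glue_connected, can_escape):
    """Backtracking over removal orders: if a tile t can slide out of the
    enclosing rectangle and the remaining tiles stay glue-connected,
    recurse on the rest; the reversed removal order is a candidate
    building order. Returns None if no order exists for this tiling."""
    if not tiles:
        return []
    for t in list(tiles):
        rest = [s for s in tiles if s is not t]
        if glue_connected(rest) and can_escape(t, rest):
            prefix = building_order(rest, glue_connected, can_escape)
            if prefix is not None:
                return prefix + [t]  # t is removed first, hence built last
    return None
```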
§.§ Rapidly-expanding Random Tree (RRT)
The basic idea of an RRT is to select a random configuration from the configuration space and expand the configuration in the current tree that is closest according to some cost-to-go function towards the selected configuration.
When designing an RRT-based algorithm for the Polyomino Assembly Problem, the main challenge is to find a cost-to-go function that is fast to compute and provides a reasonable estimate of the distance between two configurations.
Let C_1 = (B, T_1) and C_2 = (B, T_2) be two configurations on the same board.
For two tiles t_1 = (p_1, g_1) ∈ T_1 and t_2 = (p_2, g_2) ∈ T_2 we define
d_H(t_1, t_2) :=
  d(p_1, p_2)   if g_1 ≡ g_2,
  ∞             otherwise.
Then we define the distance between a tile t and a set of tiles S as D_H(t, S) := min_{s ∈ S} d_H(s, t).
Note that D_H(t_1, T_2) < ∞ for any t_1 ∈ T_1, and vice versa, because both sets of tiles have the same sets of glues.
Now the cost-to-go function between C_1 = (B, T_1) and C_2 = (B, T_2) is defined as the Hausdorff distance <cit.> between the two sets of tiles:
D_H(C_1, C_2) := max{ max_{t_1 ∈ T_1} D_H(t_1, T_2), max_{t_2 ∈ T_2} D_H(t_2, T_1) }.
Additionally, if a polyomino exists in C_1 that does not fit into any polyomino of C_2 we set the cost-to-go to ∞.
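A sketch of this cost-to-go under the definitions above, assuming tiles are (position, glues) pairs and dist is the precomputed shortest-path metric on open positions.

```python
def tile_distance(t1, t2, dist):
    """d_H between two tiles: shortest-path distance if the glues match,
    infinite otherwise."""
    (p1, g1), (p2, g2) = t1, t2
    return dist(p1, p2) if g1 == g2 else float("inf")

def cost_to_go(T1, T2, dist):
    """Hausdorff distance between the two tile sets, as defined above."""
    d12 = max(min(tile_distance(t1, t2, dist) for t2 in T2) for t1 in T1)
    d21 = max(min(tile_distance(t2, t1, dist) for t1 in T1) for t2 in T2)
    return max(d12, d21)
```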
Since the number of times the cost-to-go function is evaluated in each expansion step grows with the number of nodes in the RRT, we try to further reduce the computational cost of repeatedly evaluating the cost-to-go function by keeping a sparser tree. This is achieved through a more expensive expansion step, which expands the closest node by multiple steps in the direction of the randomly selected configuration. For that purpose, a greedy best-first search with a limited number of iterations is used to minimize the cost-to-go.
A downside of this cost-to-go function is that it requires knowledge about the pairwise distances between positions of the board. A potential solution is to use the taxicab distance instead of the length of a shortest path as the underlying distance metric. However, this would further decrease the accuracy of the estimated distance.
To estimate the distance to the goal, i.e., any configuration containing the target shape, a different metric must be used because the order of tiles within the target shape and the position of possible left-over tiles are not known. Therefore we choose the greatest distance among the n nearest available tiles, as defined by the GD approach, as an estimate for the distance to the goal. We bias the RRT search to expand the closest viable node directly towards the goal in 5% of the expansion steps. For these expansion steps, the GGD approach with a limited number of iterations is used. If the distance from the selected node to the goal cannot be reduced in this way, the node is marked and the next expansion towards the goal will use the next closest node.
§ SIMULATION SETUPS
The algorithmic approaches are evaluated on sets of procedurally generated instances with different features, including instances with and without a fixed seed tile.
We investigate the impact of the number of tiles, size of the board, obstacle placement, etc., on the problem difficulty.
The analysis focuses on the success rate, runtime, and length of the solutions.
§.§ Method for Instance Creation
We implemented a Python function that procedurally generates instances based on six input parameters.
Board type Two board types are used: Maze starts on a board filled with blocked positions. A randomized recursive backtracking algorithm creates a tree of open spaces. Next, a number of rectangular open regions proportional to the size of the board are added uniformly at random.
Cave boards are created by cellular automata. In the next generation a dead cell becomes alive if it has five or more living neighbors; a living cell becomes dead if it has less than four living neighbors. Each cell starts alive with a probability of 0.45 and two generations are applied.
Living cells are blocked positions.
To ensure connectedness, the largest connected component of open spaces is determined and all other open spaces are filled.
Target shape size
The number n of tiles required to assemble the target shape.
Board size The edge length of the board.
Only square-shaped boards were used.
Number of extra tiles
Tiles not needed to create the target shape. The total number of tiles is the sum of the target shape size and the number of extra tiles.
Number of different glues
The glues on the edges of each tile are selected uniformly at random from the set of available glues. To make it likely for instances to have a solution, a subset of the tiles is arranged into the target shape and a set of rules is added such that they bond. Finally, additional rules are selected at random, until at least half of all possible combinations of glues stick together.
Problem type Whether a fixed seed tile exists or not. If a fixed seed tile exists, tiles can only bond if the resulting polyomino contains the fixed tile.
On all generated boards, the open positions form one connected region.
To determine initial tile positions, tiles are successively placed on legal positions uniformly at random. A legal position is open, does not contain a tile, and is not adjacent to a position containing a tile.
If a fixed seed tile is required, it is placed first on a suitable position within the target shape. Although some measures were taken to increase the probability that the created instances are solvable, instances are not guaranteed to have a solution.
§.§ Simulation Environment
All motion planning algorithms were implemented in a self-developed simulator based on TumbleTiles[TumbleTiles: <https://github.com/asarg/TumbleTiles>]. The original TumbleTiles software was upgraded from Python 2 to Python 3 and heavily modified.
It can simulate step transformations on configurations with and without fixed seed tile.
Experiments were conducted on multiple computers, each with the same specifications (Intel® CoreTM i7-6700K CPU @ 4x4.00GHz, 64GB RAM) running Ubuntu 20.04 LTS.
§.§ Evaluated Approaches
* Breadth-first-search (BFS)
* Greatest Distance heuristic (GD)
* Greedy Greatest Distance heuristic (GGD)
* Minimum Moves to Polyomino heuristic (MMP)
* Minimum Moves to Polyomino or Target heuristic (MMPT): d_t = 4.
* Distance to Fixed Position heuristic (DFP)
* Rapidly-exploring Random Tree (RRT): Each expansion step consists of 40 iterations of a greedy best-first search.
The bias towards the goal is 5%. If within 7 distance to the goal, a best-first search limited to 500 iterations attempts to find a solution.
For each solver, all applicable pruning methods discussed in Sec. <ref> were used.
In combination with the simultaneous construction heuristic approaches GD and GGD, the alternative stop condition is used if the configuration contains no extra tiles.
§.§ Experimental Procedure
Two instance sets were created in advance and contain the same instances for every experiment. Both contain five procedurally generated instances for each combination of parameters (2160 instances each) from the following table:
Parameter           First set                  Second set
Board               cave, maze                 cave, maze
Target shape size   3, 4, 5, 6                 5, 10, 13, 15
Board size          20, 30, 40                 40, 80, 120
Extra tiles         0, 1, 3                    0, 3, 5
Number of glues     1, 2, 3                    1, 3, 5
Problem type        seed tile, no seed tile    seed tile, no seed tile
The first instance set consists of relatively easy-to-solve instances that allow a comparison of all solvers, including the breadth-first search solver and other (near-)optimal solvers. Six instances are shown in Fig. <ref>.
The second instance set consists of instances with a wider range of difficulties. See Fig. <ref> for an example.
Only GGD, MMPT, and DFP were used for the second instance set. DFP was only used on instances with a fixed seed tile. All experiments were conducted with a timeout of 600 seconds per instance.
The problem instance, solution sequence (if solved), runtime, peak memory usage, and number of nodes in the search tree (sum of nodes in all search trees for incremental construction) were recorded.
§ RESULTS OF SIMULATION
Our algorithmic approaches are evaluated using the experiment results from both sets of instances.
§.§ Evaluation of the first instance set
The experiment results from the first instance set are used to measure the impact of various parameters on the performance of different motion planning algorithms.
The relatively small size of these instances allows every solver to solve a fraction of the instances within the timeout of 10 minutes.
This data is used to evaluate the impact of configuration parameters on the efficiency of the different motion planning algorithms and compare memory requirements.
Breadth-first search is used as the baseline for performance comparison.
§.§.§ Comparison of simultaneous construction algorithms
The performance of the best-first search motion planning algorithms GD and GGD is compared with breadth-first search.
As expected, all heuristic searches show a better performance in terms of runtime and success rate than BFS (see Fig. <ref>).
The A* search with the consistent heuristic GD takes longer to find a solution and solves fewer instances within the time limit than the greedy best-first search approach GGD.
All solvers show a decrease in successfully solved instances with an increasing number of tiles.
Compared to other parameters such as board size, the number of tiles has the greatest performance impact.
Conversely, we observe that the board size has a greater impact on the length of the found solution than the number of tiles.
The near-optimal motion planning algorithm GD produces solutions of approximately the same length as BFS (see Fig. <ref>) whereas the solution length of the greedy algorithm was greater by a factor of 3.76 on average when compared only on instances for which both GD and GGD found a solution.
However, this factor decreases with an increase in board size as Table I indicates.
Board size    GGD / GD length
20 × 20       3.92 ± 3.97
30 × 30       3.67 ± 3.57
40 × 40       3.46 ± 2.82

Table I: Mean ratio (± SD) of the lengths of solutions found by GGD and GD on the same instance, for different board sizes.
§.§.§ Comparison with other motion planning algorithms
This section compares the incremental construction motion planning algorithms (MMP, MMPT, DFP), and RRT.
For comparison, GGD is included in the plots.
As shown in Figs. <ref> and <ref>, all incremental construction algorithms solve larger fractions of instances successfully than GGD on instances with a number of tiles greater than 5. The runtime on instances with many tiles is clearly better too.
However, the runtime of MMP is particularly sensitive to an increase in board size.
MMPT does not have this downside and shows superior performance to MMP.
In general, the performance of the incremental approach degrades slower than the performance of other approaches when the number of tiles is increased.
The DFP motion planner can only be evaluated on instances of the Fixed Seed Tile Polyomino Assembly Problem. It has a high success rate of around 90% on these instances regardless of the number of tiles and the size of the board. Furthermore, a large majority of instances that are solved by DFP are solved in less than 1 second.
RRT displays a similar behavior as GGD but the runtime is more sensitive to increased board size.
§.§.§ Comparison of the difficulty of the two problem types
contains instances of both the Polyomino Assembly Problem and the Fixed Seed Tile Polyomino Assembly Problem.
We compared the performance of three fundamentally different motion planning approaches (GGD, MMPT, RRT) for both problems in order to evaluate which problem is harder to solve in practice.
Figure <ref> shows the fraction solved by GGD; the results for RRT are similar.
All three solvers demonstrate decisively faster runtime for the problem with a fixed seed tile (see Figure <ref>).
Furthermore, the specialized motion planning algorithm DFP for instances with a fixed seed tile solved a large fraction of the suitable instances in a short time, as Figures <ref> and <ref> indicate.
§.§.§ Memory usage
The peak memory requirement of three motion planning algorithms depending on the time needed to solve an instance are shown in Fig. <ref>.
For GGD and MMPT, the maximum of the peak memory usage grows linearly over time, because the speed at which nodes are expanded remains constant over time and the memory requirements per node remain constant. The memory usages of both algorithms cover a similar range with a maximum of around 8 GB and 9 GB respectively.
In contrast, the peak memory usage of RRT is two orders of magnitude smaller, because only a sparse tree of configurations is stored. Furthermore, it does not increase linearly over time. Instead, it increases sharply in the first few seconds and then slows down significantly. An explanation for this behavior is that every iteration requires the computation of the Hausdorff distance from a random configuration to all configurations in the RRT. As the number of nodes grows over time, each iteration takes more time than the previous one. Furthermore, up to a certain number of iterations the memory requirements for a single expansion step, which includes a heuristic search, outweigh the memory requirements of the RRT.
§.§ Evaluation of the second instance set
Results on the second instance set show that the near-optimal motion planning algorithms are usually unable to solve larger instances. Regarding the RRT solver, the calculation of the cost-to-go function based on the length of a shortest path is computationally infeasible for the larger boards in this set. Therefore, we choose to compare GGD, MMPT, and DFP.
§.§.§ Comparison of the motion planning algorithms
The time needed by the motion planning algorithms and their success rate can be seen in Figs. <ref> and <ref>. These plots are grouped by the target shape size, instead of the total number of tiles on the board. Instances can have up to 5 additional tiles that are not needed to build the target shape. Generally speaking, the runtime of all algorithms increases with an increased size of the target shape. Again, all algorithms finish a majority of the solved instances long before the timeout, which indicates that there are certain instances that the solvers struggle with, whereas other instances can be solved easily.
The performance of GGD in terms of the fraction of solved instances continues to decrease sharply with an increase in the size of the target shape. MMPT shows an overall higher success rate but the performance also falls off when the target shape gets bigger. DFP performs best and shows a high success rate that only declines slowly with an increase in the target shape size and does not fall under 60%.
Furthermore, DFP finds a solution much faster than all other algorithms. Since instances are not guaranteed to have a solution, it is possible that randomly generated instances with a larger target shape are less likely to be solvable, which could also be a factor for the fraction of solved instances.
The greater range of board sizes in the second instance set confirms that the runtime increases much faster with an increased number of tiles than with an increased board size. In particular, the success rate of DFP does not decrease when the board size is increased from 40 × 40 to 120 × 120.
§ CONCLUSION AND FUTURE WORK
We designed motion planning algorithms for the assembly of shapes in the tilt model based on multiple different approaches, including search-based and sampling-based motion planning algorithms.
We evaluated the effectiveness of these approaches experimentally on procedurally created boards and investigated the parameters that are most significant for the performance of motion planning algorithms in general, as well as the specific strength and weaknesses of the different approaches.
The best complete algorithm analyzed was GGD.
This was outperformed by incomplete algorithms, the best of which was MMPT. All of these were outperformed by DFP, an incomplete motion planner that uses a fixed seed tile.
The evaluation of computational complexity for the Polyomino Assembly Problem without a fixed seed tile and without extra tiles is left to future research.
RRT is a promising method to solve tilt motion planning problems. Combined with a computationally more expensive expansion step, it has the added benefit of requiring less memory than other approaches. A major challenge in this context is to find a cost-to-go function that is fast to compute and gives a good approximation of the actual distance between two configurations.
Better best-first search algorithms could potentially be achieved with heuristics that not only depend on the distance of tiles to the target shape but instead consider, for example, the involved glue types and possible positions of tiles within the target polyomino.
On the instance side, it seems that the number of moving tiles drastically increases the time to solve.
This result seems analogous to Schwartz and Sharir's result on moving disks through a polygon <cit.>.
Thus, we expect our problem without a fixed seed tile to be PSPACE-complete. This, however, is left for future work.
---
entry_id: http://arxiv.org/abs/2307.00689v1
published: 20230702235527
title: Camera Calibration from a Single Imaged Ellipsoid: A Moon Calibration Algorithm
authors: Kalani R. Danas Rivera, Mason A. Peck
primary_category: cs.CV
categories: cs.CV
---
§ INTRODUCTION
§.§ Camera Calibration
Optical Navigation (OPNAV) establishes estimates of a spacecraft's relative state using a calibrated onboard camera. Inaccuracies in the camera's intrinsic parameters propagate to inaccurate OPNAV estimates. Accurate knowledge of the camera's parameters requires accurate and precise camera calibration procedures. Full camera calibration estimates a variety of camera parameters <cit.>; however, we restrict the discussion to a camera's intrinsic, geometric calibration parameters. Geometric calibration solves for a camera's geometric distortion due to optics, effective focal length, and alignment. The effective focal length establishes the distance from the camera center to the focal or image plane <cit.>. The alignment addresses the location of the boresight's intersection with the focal plane (i.e., principal point). For spacecraft cameras, ground-based calibration provides preliminary values for a camera's intrinsic parameters. A second calibration occurs in space and images a star cluster such as the Pleiades for more precise and accurate estimates of the intrinsic camera parameters <cit.>.
§.§ Calibration of a Camera's Intrinsic Parameters from a Planet's Horizon
The majority of the solar system planets and moons resemble ellipsoids. As ellipsoids, OPNAV work in horizon-based navigation <cit.> estimates a spacecraft's relative state using a calibrated camera. Given horizon-based navigation is a solved problem, we express interest in the inverse problem of estimating a camera's parameters given a spacecraft's relative state. In other words, is camera calibration possible using a planet's horizon?
Camera calibration methods using ellipsoids and ellipses as imaging targets exist in the calibration literature <cit.>. However, for the application to planet horizons, the minimum imaging-target requirement and the model assumptions in the existing literature are problematic. The calibration algorithms in Refs. <cit.> require a minimum of N ≥ 2 imaged ellipsoids or ellipses. Though imaging two bodies in the same image is possible, it requires favorable geometry and considerable planning.
Imaging a single planet represents the general case experienced by most spacecraft. Additionally, camera calibration algorithms in Refs. <cit.> specifically address images of spherical or circular imaging targets. As ellipsoidal bodies, planets are generally not spheres, and their apparent horizon projects to a conic section which is generally an ellipse and not a circle. As is, the existing methods only apply to the specific case of a nadir-pointed (i.e. pointing at planet center) spacecraft imaging a spherical planet, and a model mismatch exists for all other cases.
Camera calibration from a planet's horizon requires an algorithm that takes the single image of any arbitrary ellipsoid as input. We propose an algorithm that requires N=1 ellipsoidal targets for intrinsic camera calibration from a single image.
§ PROPOSED METHOD
Onboard spacecraft cameras are an immense sensing asset for onboard optical navigation. OPNAV methods enable spacecraft to fulfill their stringent navigation requirements and all rely on a calibrated camera for successful integration. For the case of spacecraft implementing OPNAV near an ellipsoidal planetary body, this work considers using the planetary body's horizon conic to calibrate the spacecraft camera's intrinsic parameters.
§ CONIC-TO-CONIC MAPPING
§.§ Reference Conic 𝒞
Extended solar system bodies such as planets and moons resemble ellipsoids. For an ellipsoid, points on its surface p_P ≜ [ p_x p_y p_z ]^T satisfy the quadric surface equation
p_P^T 𝒜_P p_P = [ p_x; p_y; p_z ]^T [ 1/a^2 0 0; 0 1/b^2 0; 0 0 1/c^2 ] [ p_x; p_y; p_z ] = 1.
The matrix
𝒜_P ≜[ 1/a^2 0 0; 0 1/b^2 0; 0 0 1/c^2 ]
defines the shape matrix of an ellipsoid in terms of its semi-major radii a,b, and c about its principal axes as shown in Fig. <ref>. Subscript P denotes the vector/matrix in the planet's principal coordinate system.
National Aeronautics and Space Administration's (NASA) Navigation and Ancillary Information Facility (NAIF) maintains SPICE kernels that provide a host of parameters for planetary bodies including the a,b, and c of their respective best-fit ellipsoid as well as their position and orientation at a given epoch <cit.>. When building 𝒜_P we refer to SPICE kernels for the appropriate a,b, and c values.
Shape matrix 𝒜_P influences the ellipsoid's perspective projection as imaged by an observer at planet-relative position r_P. From the observer, surface point p_P exists along the line-of-sight (LOS) vector ê_P at some distance λ. From vector addition,
p_P = r_P + λê_P
provides p_P in terms of r_P, ê_P, and λ. Substituting Eq. (<ref>) into Eq. (<ref>) and re-arranging provides
ê_P^T 𝒜_P ê_P λ^2 + 2 r_P^T 𝒜_P ê_P λ + (r_P^T 𝒜_P r_P - 1) = 0,
a quadratic expression in terms of unknown λ. The familiar quadratic formula solves for unknown λ
λ_1, λ_2 = [ -2 r_P^T 𝒜_P ê_P ± √( 4 ê_P^T (𝒜_P r_P r_P^T 𝒜_P) ê_P - 4 ê_P^T ( (r_P^T 𝒜_P r_P - 1) 𝒜_P ) ê_P ) ] / ( 2 ê_P^T 𝒜_P ê_P ).
As a quadratic expression, two roots exist for λ. However as Refs. <cit.> point out, p_P lying tangent to the ellipsoid have repeated roots such that the discriminant becomes zero. The locus of p_P tangent to the ellipsoid describes the apparent outline or horizon of the ellipsoid as viewed by the observer at r_P. For horizon points, setting the discriminant to zero reduces to
ê_P^T ( 𝒜_P r_P r_P^T 𝒜_P - (r_P^T 𝒜_P r_P - 1) 𝒜_P ) ê_P = 0,
for which the inner matrix
𝒞_P ≜ 𝒜_P r_P r_P^T 𝒜_P - (r_P^T 𝒜_P r_P - 1) 𝒜_P
defines the conic formed by the horizon under perspective projection. The conic may be a circle, hyperbola, or parabola, but the horizon's projection is generally an ellipse. In this work we prefer expressing 𝒞_P in camera coordinates as given by
𝒞_C ≜ T_P^C 𝒞_P T_C^P
where T_P^C is the coordinate transformation matrix from planet-fixed principal coordinates (P) to spacecraft camera coordinates (C).
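As an illustration, a NumPy sketch of assembling the reference conic, assuming the radii (a, b, c), the planet-relative observer position, and the rotation T_P^C are available (e.g., from SPICE); the function name and argument layout are our own.

```python
import numpy as np

def horizon_conic(a, b, c, r_P, T_P2C):
    """Reference horizon conic: build C_P from the ellipsoid shape matrix
    and the planet-fixed observer position, then rotate to camera
    coordinates. For a rotation matrix, T_C^P = T_P2C.T."""
    A = np.diag([1.0 / a**2, 1.0 / b**2, 1.0 / c**2])
    r = np.asarray(r_P, dtype=float).reshape(3, 1)
    C_P = A @ r @ r.T @ A - (float(r.T @ A @ r) - 1.0) * A
    return T_P2C @ C_P @ T_P2C.T
```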
§.§ Imaged Conic 𝒞'
From Eq. (<ref>), horizon points project to the observer as a conic. An imager observes the horizon projection on its image sensor in pixel coordinates (u,v). Recovering the imaged horizon conic requires fitting all imaged horizon points to a conic section. Conic-fitting based on the algebraic distance metric as in
Au^2 + Buv + Cv^2 + Du + Ev + F = 0
provides coefficients A,B,C,D,E, and F of the best conic-fit. A plethora of conic-fitting algorithms exist, but we apply Ref. <cit.>'s method in this work for a direct, unbiased fit. Expressing Eq. (<ref>) into compact notation through homogeneous pixel coordinates u=[ u v 1 ]^T reduces to
u^T 𝒞' u = [ u; v; 1 ]^T [ A B/2 D/2; B/2 C E/2; D/2 E/2 F ] [ u; v; 1 ] = 0,
where
𝒞'≜[ A B/2 D/2; B/2 C E/2; D/2 E/2 F; ]
denotes the imaged horizon conic in terms of the fitted coefficients.
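Assembling 𝒞' from the fitted coefficients is then immediate; a sketch:

```python
import numpy as np

def imaged_conic(A, B, C, D, E, F):
    """Assemble the imaged horizon conic C' from the fitted coefficients of
    A u^2 + B uv + C v^2 + D u + E v + F = 0."""
    return np.array([[A,     B / 2, D / 2],
                     [B / 2, C,     E / 2],
                     [D / 2, E / 2, F    ]])
```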
§.§ Action of Calibrated Camera
An observer's relative position r_P and attitude T_P^C dictate the apparent horizon conic. The image sensor captures the imaged apparent horizon conic in (u,v) coordinates. The reference and imaged conics relate to each other by the camera's calibration matrix K. The proportionality
K^T 𝒞' K ∝𝒞
describes the conic-to-conic mapping between reference and imaged conics. Both sides of Eq. (<ref>) are equivalent up to a scale factor. The camera calibration matrix relates an image from focal plane normalized coordinates to pixel coordinates as in
[ u; v; 1 ] =
[ f/μ_x γ u_o; 0 f/μ_y v_o; 0 0 1; ][ x'; y'; 1 ]
where u=[ u v 1 ]^T and x'=[ x' y' 1 ]^T denote homogeneous pixel and homogeneous focal plane normalized coordinates, respectively. The inner matrix
K ≜[ f/μ_x γ u_o; 0 f/μ_y v_o; 0 0 1; ]
defines the camera calibration matrix.
Matrix K consists of image sensor parameters and intrinsic camera parameters. The image sensor parameters in K are μ_x, μ_y, and γ. Pixel pitches μ_x and μ_y represent the center-to-center distance between adjacent pixels in the image sensor's local x and y directions, respectively <cit.>. Skew angle γ expresses the angle between the local x and y directions on the image sensors. Generally the local x and y directions are orthogonal such that γ=0 for most image sensors, but we include γ here for completeness. The intrinsic camera parameters consist of u_o,v_o, and f. The principal point (u_o,v_o) provides the location where the camera's boresight intersects the focal plane in pixel coordinates. The effective focal length f defines the focal plane's distance from the origin of the camera coordinate system.
§ CAMERA CALIBRATION ALGORITHM
Camera manufacturers produce cameras with nominal intrinsic parameters, however these intrinsic parameters are subject to defects and require verification. Camera calibration verifies the intrinsic parameters and replaces the nominal parameter values with their calibrated values. Eq. (<ref>) provides an opportunity for estimating K (i.e., calibrating the camera) given a planet image. The spacecraft's state defines 𝒞, and image processing supplies 𝒞' from the imaged planet. Eq. (<ref>) is a proportionality, but introducing s as the unknown scale coefficient gives
sK^T 𝒞' K = 𝒞
which converts the proportionality relationship to an equality. Eq. (<ref>) is nonlinear with respect to K, and for this reason other works require N≥2 targets/images for estimating K <cit.>. In the next section, we detail an algorithm that first solves for unknown s and then K for intrinsic camera calibration from the image of a single planet.
§.§ Matrix Block-Partitioning
For the camera calibration algorithm, we first partition matrices K, 𝒞, and 𝒞' into sub-blocks. Block-partitioning of K results in
K ≜[ K_11 K_12; 0_1×2 1; ]
where
K_11≜[ f/μ_x γ; 0 f / μ_y ] and
K_12≜[ u_o; v_o ].
Similarly, we'll also block partition 𝒞 and 𝒞' as follows
𝒞≜[ 𝒞_11 𝒞_12; 𝒞_12^T 𝒞_22; ], 𝒞'≜[ 𝒞'_11 𝒞'_12; 𝒞'_12^T 𝒞'_22; ]
where 𝒞_11, 𝒞'_11∈ℝ^2×2, 𝒞_12, 𝒞'_12∈ℝ^2×1, and 𝒞_22, 𝒞'_22∈ℝ^1×1. Substituting the block-partitioned matrices results in
s [ K_11^T 𝒞'_11 K_11   K_11^T (𝒞'_11 K_12 + 𝒞'_12) ; (K_12^T 𝒞'_11 + 𝒞'_12^T) K_11   K_12^T 𝒞'_11 K_12 + 2 K_12^T 𝒞'_12 + 𝒞'_22 ] = [ 𝒞_11 𝒞_12; 𝒞_12^T 𝒞_22 ].
We use the system of equations in Eq. (<ref>) to solve for s directly and then solve for K_11 and K_12 separately.
§.§ Solving for Scale Factor s
When solving for unknown s in systems of equations, Refs. <cit.> apply the trace(∙) or det(∙) operator to isolate s. Applying det(∙) to the overall system of equations in Eq. (<ref>) results in
det(sK^T 𝒞' K) = s^3 det(K)^2 det(𝒞') = det(𝒞)
where det(𝒞) and det(𝒞') are known but det(K) is not. Interestingly, due to K's upper triangular structure
det(K) = det(K_11) det(1) = det(K_11).
such that Eq. (<ref>) is also
det(sK^T 𝒞' K) = s^3 det(K_11)^2 det(𝒞') = det(𝒞).
Since K_11 also appears in the sub-block equality
sK_11^T 𝒞'_11 K_11 = 𝒞_11,
we also apply the det(∙) operator to yield
s^2 det(K_11)^2 det(𝒞'_11) = det(𝒞_11)
which is also quadratic in det(K_11). Since both Eq. (<ref>) and Eq. (<ref>) possess det(K_11)^2 terms, dividing one by the other eliminates the unknown, and re-arranging provides
s = det(𝒞) det(𝒞'_11)/det(𝒞') det(𝒞_11),
a direct expression for s in terms of known 𝒞, 𝒞', 𝒞_11, and 𝒞_11'
§.§ Solving for K_11
With s known, we can now solve for K_11 and K_12 to ultimately provide K. First we focus on K_11 in Eq. (<ref>) which we point out is a symmetric expression quadratic in K_11. If we can also prove matrices s𝒞'_11 and 𝒞_11 are positive definite, then their Cholesky decompositions
𝒞_11 = L_𝒞 L_𝒞^T
and
s 𝒞'_11 = L_𝒞' L_𝒞'^T,
where L_∙ are lower triangular matrices, reduce Eq. (<ref>) to
L_𝒞'^T K_11 = L_𝒞^T
an expression linear in K_11. This desired linear form provides a direct solution for K_11.
When assessing the definiteness of s𝒞'_11 and 𝒞_11, their det(∙) as given in Eq. (<ref>) is useful. Applying the sign(∙) operator, defined by
sign(∙) := +1 if ∙ > 0 and sign(∙) := -1 if ∙ < 0,
to Eq. (<ref>) results in
sign(s^2 det(K_11)^2 det(𝒞'_11)) = sign(det(𝒞_11))
and provides a means of assessing each term's sign convention. Scale factor s is a real-valued scalar such that s^2>0, and therefore sign(s^2) = +1. The explicit definition of det(K_11)
det(K_11) = f^2/μ_x μ_y
consists of f, μ_x, and μ_y terms that are strictly positive terms. Hence, det(K_11) >0 by convention, and sign(det(K_11)) = +1. Now substituting sign(s^2)=+1 and sign(det(K_11)) = +1 into Eq. (<ref>) simplifies to
sign(det(𝒞'_11)) = sign(det(𝒞_11))
in terms of conics 𝒞'_11 and 𝒞_11.
It is well known that for ellipses det(𝒞'_11) > 0. Given that the ellipsoid's horizon generally projects to an ellipse under perspective projection,
sign(det(𝒞'_11)) = sign(det(𝒞_11)) = +1
holds for nearly all viewing configurations of the ellipsoid. Considering 𝒞'_11, 𝒞_11 ∈ ℝ^2×2, both 𝒞'_11 and 𝒞_11 have strictly positive or strictly negative eigenvalues, which result in positive det(∙). In other words, 𝒞'_11 and 𝒞_11 are either positive- or negative-definite. To ensure 𝒞'_11 and 𝒞_11 are strictly positive-definite, we modify 𝒞' and 𝒞 to
𝒞' ← α 𝒞' and 𝒞 ← β 𝒞,
where α = sign(trace(𝒞'_11)) and β = sign(trace(𝒞_11)). As 2×2 matrices, a positive trace(∙) implies both eigenvalues are positive (i.e., positive definite), and vice versa for a negative trace(∙). With this modification, s𝒞'_11 and 𝒞_11 are symmetric positive definite, which enables their Cholesky decompositions L_𝒞' and L_𝒞 given by Eq. (<ref>) and Eq. (<ref>), respectively.
Substituting L_𝒞' and L_𝒞 into Eq. (<ref>) produces
K_11^T L_𝒞' L_𝒞'^T K_11 = L_𝒞 L_𝒞^T
from which we obtain the desired form in Eq. (<ref>). Re-arranging and solving for K_11 provides
K_11 = L_𝒞'^-T L_𝒞^T
in exact terms.
§.§ Solving for K_12
With K_11 known, we observe the following sub-block equality
sK_11^T (𝒞'_11 K_12 + 𝒞'_12) = 𝒞_12
is linear in K_12 and contains known terms K_11,s,𝒞'_11,𝒞'_12, and 𝒞_12. Re-arranging Eq. (<ref>) to
K_12 = 𝒞'_11^-1( (sK_11^T)^-1𝒞_12 - 𝒞'_12)
provides a direct solution for K_12. Substituting Eq. (<ref>) and Eq. (<ref>) simplifies Eq. (<ref>) to
K_12 = (L_𝒞 L_𝒞'^T )^-1𝒞_12 - 𝒞'_11^-1𝒞'_12
consisting solely of terms involving s, 𝒞, and 𝒞'. For ease of notation, we define
𝒥≜ (L_𝒞 L_𝒞'^T )^-1𝒞_12 - 𝒞'_11^-1𝒞'_12
as the right-hand side of Eq. (<ref>). With K_11 and K_12 known, our algorithm estimates the camera calibration matrix K from a single imaged ellipsoid.
§.§ Camera Calibration Algorithm Summary
The camera calibration algorithm from a single imaged ellipsoid is quite simple. Algorithm <ref> details the entire algorithm in 10 lines of pseudo-code, where 𝒞 and 𝒞' are inputs. For each line of pseudo-code, we analytically compute the required floating-point operation (FLOP) count and report it in Table <ref>. We refer to Refs. <cit.> for the appropriate FLOP count approximations for the algorithm lines involving Cholesky factorization and triangular matrix inversions. In summary, the camera calibration algorithm provides an estimate for K from a single imaged ellipsoid in ∼135 FLOPs, which for context is slightly more than the FLOPs required to invert a 5×5 matrix (i.e., ∼125 FLOPs) <cit.>.
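For illustration, a compact NumPy sketch that follows the derivation above step by step (sign normalization, scale factor s, Cholesky factors, then K_11 and K_12); it is a sketch of the algorithm, not the reference implementation.

```python
import numpy as np

def calibrate(C, Cp):
    """Estimate K from the reference conic C (camera coordinates) and the
    imaged conic Cp, following the derivation above."""
    C = np.sign(np.trace(C[:2, :2])) * C       # make C_11 positive definite
    Cp = np.sign(np.trace(Cp[:2, :2])) * Cp    # make C'_11 positive definite
    s = (np.linalg.det(C) * np.linalg.det(Cp[:2, :2])
         / (np.linalg.det(Cp) * np.linalg.det(C[:2, :2])))
    L_C = np.linalg.cholesky(C[:2, :2])        # C_11   = L_C  L_C^T
    L_Cp = np.linalg.cholesky(s * Cp[:2, :2])  # s C'_11 = L_C' L_C'^T
    K11 = np.linalg.solve(L_Cp.T, L_C.T)       # K_11 = L_C'^{-T} L_C^T
    K12 = (np.linalg.solve(L_C @ L_Cp.T, C[:2, 2])       # (L_C L_C'^T)^{-1} C_12
           - np.linalg.solve(Cp[:2, :2], Cp[:2, 2]))     # - C'_11^{-1} C'_12
    K = np.eye(3)
    K[:2, :2] = K11
    K[:2, 2] = K12
    return K
```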
§ ALGORITHM EXTENSIONS
The proposed camera calibration algorithm estimates K from a single imaged ellipsoid which applies to planetary images. A camera's focal length f is an important intrinsic parameter that is embedded within K but requires knowledge of the camera's image sensor. In this section we provide f estimation from K. Additionally, our camera calibration algorithm extends to multiple images for estimating K and the intrinsic parameters in a least-squares or batch-filter approach. In this section we also extend the algorithm for batch-filter estimates from multiple images.
§.§ Focal Length Estimation
Within K, the camera's focal length f appears in the f/μ_x and f/μ_y terms. Introducing standard basis vectors e_1≜[ 1 0 ]^T and e_2≜[ 0 1 ]^T, we extract the f terms from estimated K_11 through
d_x ≜ e_1^T K_11 e_1 = f/μ_x
and
d_y ≜ e_2^T K_11 e_2 = f/μ_y
where we introduce d_x and d_y for convenience.
Through directly estimating K, it is impossible to estimate f without first knowing the image sensor's pixel pitches μ_x and μ_y <cit.>. Therefore, using the image sensor's μ_x and μ_y, the following linear system
[ 1; 1 ] f = [ μ_x d_x; μ_y d_y ]
estimates f in a least-squares sense from estimated K. Even from a single imaged planet in a single image, Eq. (<ref>) provides an over-determined system of equations for solving f.
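Since the stacked system has a single unknown, its least-squares solution is simply the mean of the two products; a sketch:

```python
def focal_length(K, mu_x, mu_y):
    """Least-squares f from [1; 1] f = [mu_x d_x; mu_y d_y]: the normal
    equations give f as the average of the two products."""
    return (mu_x * K[0, 0] + mu_y * K[1, 1]) / 2.0
```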
§.§ Extension to Multi-Image Calibration
Though we prove camera calibration is possible from a single image, extending the calibration to multiple images enables more accurate and precise estimates for the camera's intrinsic parameters. Eq. (<ref>) is an over-determined system from a single image. We introduce d_x,i and d_y,i to denote the d_x and d_y values from the i^th image so that
[ 1; 1; ⋮; 1; 1 ] f = [ μ_x d_x,1; μ_y d_y,1; ⋮; μ_x d_x,N; μ_y d_y,N ]
augments Eq. (<ref>) to f estimation from multiple images. Since each image provides 2 entries to Eq. (<ref>), the uncertainty in f estimates scales with ∼ 1 / √(2N) where N is the total number of images.
When estimating K, sub-block K_12 provides an estimate for the principal point's coordinates (u_o,v_o) directly. Augmenting the principal point estimate using multiple images leads to
[ I_2×2; ⋮; I_2×2 ] [ u_o; v_o ] = [ 𝒥_1; ⋮; 𝒥_N ],
a least squares problem where 𝒥_i is the 𝒥 matrix of the i^th image. Matrix I_2×2 has dimensions 2×2 such that uncertainty in (u_o,v_o) scales with ∼ 1 / √(N).
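Both stacked least-squares systems reduce to sample means over the per-image estimates; a sketch, with the per-image K matrices assumed given.

```python
import numpy as np

def multi_image_calibration(Ks, mu_x, mu_y):
    """Batch estimates from per-image K matrices: f is the mean over all
    2N focal-length rows, and the principal point is the mean of the N
    blocks J_i = K_12."""
    rows = [v for K in Ks for v in (mu_x * K[0, 0], mu_y * K[1, 1])]
    f = float(np.mean(rows))
    u0, v0 = np.mean([K[:2, 2] for K in Ks], axis=0)
    return f, (u0, v0)
```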
§ NUMERICAL SIMULATIONS
We simulate planetary bodies of varying ellipsoid shapes and pointing geometries to assess our method's camera calibration performance. Table <ref> details the semi-axes values used to model each ellipsoid in terms of the planet's polar radius R_p.
We examine the method's sensitivity to ellipse-fit error by perturbing the semi-major/minor axes and center coordinates of the imaged ellipse 𝒞' with Gaussian noise ∼𝒩(0,σ^2). Here the proportionality relationship
𝒞' ∝ K^-T 𝒞 K^-1
gives the imaged ellipse 𝒞' in pixel coordinates <cit.>, similar to what an observer computes. The Gaussian noise perturbation of 1 σ = 1 pixels models effects of edge localization error typical of off-the-shelf edge detection algorithms <cit.>. The 1σ =1 pixel perturbation serves as a large, conservative perturbation given that semi-major/minor axes and center coordinates are usually known to sub-pixel precision for a fitted ellipse. After perturbation, we compare the estimated intrinsic parameters with the ground truth and obtain the residual.
The simulation performs a pose-varying Monte Carlo (MC) simulation for a nadir-pointing (i.e., pointing at the planet center) spacecraft. We sample all possible viewing latitudes (i.e., -90^∘ to 90^∘) and longitudes (i.e., -180^∘ to 180^∘) at a distance of 10 R_p with a 10 × 10 grid and obtain the root-mean-square (RMS) of 1000 MC runs per gridpoint. To generalize the findings, we divide the RMS value by the ground-truth value to obtain the normalized root-mean-square (NRMS). Figure <ref> and Fig. <ref> report the NRMS of f and (u_o, v_o), respectively, for varying ellipsoid shapes and viewing poses of a nadir-pointing spacecraft.
From Fig. <ref> and Fig. <ref>, the proposed method estimates f with greater precision than (u_o,v_o), as witnessed by the lighter contours. Since K_12 is computed from K_11, the estimation error of K_11 propagates to the (u_o,v_o) estimates. Additionally, Fig. <ref> and Fig. <ref> also illustrate the effect the shape of the apparent horizon has on estimated f and (u_o,v_o). Under the nadir-pointing assumption, the apparent horizon of a sphere is a circle for all poses, and thus all poses yield similar camera calibration performance. However, when semi-axes b ≠ a and c ≠ a, as in the oblate spheroid and triaxial ellipsoid cases, the observer's pose influences the shape of the apparent horizon. The apparent horizon is no longer a circle but an ellipse with arbitrary eccentricity. As witnessed by the oblate spheroid and triaxial ellipsoid cases in Fig. <ref> and Fig. <ref>, the pose-dependent apparent horizon results in varying degrees of camera calibration performance. The NRMS contours of f and (u_o,v_o) provide insight on what an observer's pose needs to be for a desired level of camera calibration performance from an imaged ellipsoid.
§ APPLICATION TO PLANETARY IMAGES
§.§ Image Dataset
To verify our simulated findings, we subject the camera calibration algorithm to planetary images. As Ref. <cit.> points out, horizon-based navigation works best with ellipsoidal bodies without an atmosphere given that the ellipsoid does not model the atmosphere. For this reason, we select the Cassini Imaging Science Subsystem (ISS) dataset (available through NASA Planetary Data System) that contains numerous images of Saturn's atmosphere-less, ellipsoidal moons as imaged by the spacecraft Cassini. Cassini was a spacecraft that explored and studied the Saturn system. The Cassini ISS consists of a Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) with 0.35^∘× 0.35^∘ and 3.5^∘× 3.5^∘ field of view, respectively <cit.>. Out of the two cameras, we subject our algorithm to NAC images due to its higher angular resolution. Within the ISS NAC dataset, the Saturnian ellipsoidal moons listed in Table <ref> serve as the imaging targets for camera calibration. Table <ref> also lists the semi-major axes for each moon's best-fit ellipsoid. In total, our ISS NAC dataset consists of 50 planetary images.
§.§ Image Processing
For camera calibration, our algorithm requires extracting the imaged horizon conic from the imaged ellipsoid. We obtain the imaged conic by first extracting the apparent horizon points in the image and then fitting the extracted points to a conic section. For horizon point extraction, we employ the partial area effect algorithm in Ref. <cit.> for subpixel edge estimates. Typically, limb scanning precedes subpixel edge estimation and provides a pixel-level guess of the planetary body's limb prior to refinement via subpixel edge estimation. However in our application, subpixel estimation without limb scanning results in accurate horizon extraction for the vast majority of images. Figure <ref> provides an example of the extracted horizon conic at the subpixel level for 10 images from the ISS NAC dataset used in this work.
With accurate subpixel estimates of the extracted horizon, we apply Ref. <cit.>'s Semi-Hyper Least-Squares algorithm for an unbiased, direct conic-fit. The conic-fit's coefficients assemble to form the imaged conic 𝒞'.
§.§ Results
From each of the 50 images in the Cassini ISS NAC dataset, our algorithm estimates K for Cassini's NAC. We compare our estimated focal length f and principal point (u_o,v_o) intrinsic parameters to their documented calibrated values to assess our algorithm's performance. Camera calibration for the Cassini ISS NAC consisted of a ground calibration segment and an in-orbit calibration segment. Ground-based calibration provides nominal values for the intrinsic parameters, and the in-orbit calibration segment corrects these nominal values with higher accuracy/precision through imaging star clusters <cit.>.
§.§.§ Focal Length
From each image's K estimate, applying Eq. (<ref>) provides f. The f estimates across all images form a sample population, whose central tendency and statistical dispersion we report in Table <ref>. For context, Table <ref> presents the calibrated f values for the Cassini ISS NAC, where the in-orbit f serves as the ground-truth value for comparison. Since we did not employ limb scanning, our image processing also extracts additional edges that do not belong to the planet's horizon and bias the fitted conic. The biased conic-fit then leads to biased K estimates and produces a handful of outliers. Robust statistical measures such as the median(∙) and the median absolute deviation MAD(∙) are resilient to outliers and provide insight into what to expect had we included limb scanning in the image processing. We illustrate the different distributions of f estimates in Fig. <ref>.
Fig. <ref> plots the distributions of Cassini's f estimates according to their respective normal distribution parameters reported in Ref. <cit.>. Fig. <ref> plots the f estimates for each image in the dataset using our algorithm and then applies kernel density estimation with a Gaussian kernel to visualize the sampled distribution. Treating the on-orbit calibrated f value as the truth, our method provides a more accurate and precise estimate of f when compared to the ground calibrated f value as seen by the spread and peak of our method's distribution. The mean(∙) and median(∙) metrics confirm our method's central tendency agrees with the true f, and the MAD(∙) confirms the lower statistical spread compared to the ground-calibrated value of f. Our method is essentially as accurate as the on-orbit calibration method for f, but not as precise as evidenced by the larger statistical spread.
§.§.§ Principal Point
We continue the comparison with estimates for Cassini ISS NAC's principal point (u_o,v_o). Once again, Eq. (<ref>) provides our algorithm's estimate for (u_o,v_o). Table <ref> and Table <ref> report the values and statistics of Cassini ISS NAC's in-orbit calibrated values of (u_o,v_o) <cit.> and those from our method, respectively. Holding the in-orbit calibrated values for (u_o,v_o) as truth, Fig. <ref> plots the residuals of our method's estimates compared to the truth as well as the 3σ ellipse from in-orbit calibration. The sides of Fig. <ref> illustrate the distribution sampled from the residuals through kernel density estimation with a Gaussian kernel.
As evidenced in Fig. <ref>, our method's residuals are centered about the in-orbit calibration's 3σ ellipse, with all residual samples lying within it. Our method provides (u_o,v_o) estimates with higher certainty than the in-orbit calibration method by at least ∼10 pixels, as demonstrated by the 1σ value in Table <ref>.
§.§ Extension to Multiple Images
Figure <ref> and Fig. <ref> compare our camera calibration algorithm with the star cluster-based calibration. However, this is not a fair comparison between the two due to the amount of images each method requires. For instance, our method requires a single image for a K estimate, while the star-cluster calibration employs multiple images. Ref. <cit.> details the calibration test procedures for in-orbit Cassini ISS NAC calibration and requires 450 total images of the star cluster targets for NAC calibration. For proper comparison we now apply the multiple image extension of our algorithm given by Eq. (<ref>) and Eq. (<ref>) for multiple images.
Our Cassini ISS NAC dataset contains 50 processed images, but to simulate multi-image calibration we'll sample multiple combinations of q images from the total n=50 images. For each q,
_nC_q = n! / ( q! (n - q)! )
provides the total unique image combinations possible in the existing Cassini ISS NAC dataset. We sample 2000 combinations for each q and report the resultant 1σ and 1 MAD for the f and (u_o,v_o) estimates in Fig. <ref> and Fig. <ref>, respectively. As expected, the uncertainty in both estimates, denoted by 1σ and 1 MAD, decreases with an increasing number of images. As a robust statistic, 1 MAD serves as the lower bound of the statistical dispersion of the estimates, and 1σ serves as the upper bound. From our simulations, at q = 45 images we expect the f uncertainty to lie between 0.30 and 0.43 mm and the (u_o,v_o) uncertainty to be bounded by 1.1-3.1 pixels. Recalling the ground truths in Table <ref> and Table <ref>, multi-image calibration improves the f uncertainty such that it approaches the star cluster-calibration uncertainty but with far fewer images (about an order of magnitude fewer). Additionally, at q = 45 the precision of our (u_o,v_o) estimates surpasses the star cluster-calibrated precision by about one order of magnitude. Both focal length and principal point estimate uncertainties scale as ∼ 1/√(N) with increasing N.
Fig. <ref> illustrates the trends linear in 1 / √(N) for 1σ uncertainties in f and (u_o,v_o).
§ DISCUSSION
Our camera calibration algorithm is the first calibration algorithm that estimates K from a single imaged ellipsoid. Since ellipsoids are good shape models for planets and moons, our algorithm enables calibration from the nearest ellipsoidal planetary body using a single image. Using the Cassini ISS NAC as a case study, our algorithm estimates f with higher accuracy and precision than the ground calibration. Though equally accurate, our algorithm is not nearly as precise as the in-orbit calibration of f using star clusters. Conversely, for principal point (u_o,v_o) estimation, our algorithm is more precise than the star-cluster calibration method and just as accurate for u_o estimates. A slight ∼7 pixel bias exists for the v_o estimate. The multi-image extension of our algorithm improves the precision of both f and (u_o,v_o) estimates. At q = 45 images, the uncertainties of our algorithm's outputs approach and surpass those of the star-cluster calibration method for f and (u_o,v_o) estimates, respectively, despite using about one order of magnitude fewer images. With increased q, we expect our algorithm's precision to surpass that of the star cluster-calibration method for both f and (u_o,v_o) estimates.
Though formulated with planets in mind, our algorithm applies to existing camera calibration setups documented in Refs. <cit.> where a single image captures numerous ellipsoids. The multi-image extensions in Eq. (<ref>) and Eq. (<ref>) also apply to multiple ellipsoids captured within the same image. Each imaged ellipsoid provides estimates for f and (u_o,v_o) from which Eq. (<ref>) and Eq. (<ref>) provide optimal estimates in a least-squares sense.
§ CONCLUSION
This work provides a novel camera calibration method requiring only one imaged ellipsoid and enables camera calibration using the nearest ellipsoidal planetary bodies. For spacecraft in Earth orbit, the spacecraft need not look further than the moon for calibrating its onboard camera. Our algorithm estimates the camera calibration matrix K from a single image but extends to multiple images for higher precision estimates. The algorithm also applies to ground-based calibration where only a single ellipsoidal imaging target is required.
§ IMAGES USED IN CASSINI ISS NAC DATASET
The NASA Planetary Data System is an online repository that archives the data collected of NASA planetary missions such as images from Cassini's NAC. The Cassini ISS NAC dataset used in this work consists of the 50 images listed in Table <ref>. Each of these images are 1024 pixels × 1024 pixels in size and corrected for radial distortions.
§ WORKING WITH SPICE KERNELS
Each Cassini ISS NAC image is accompanied by a file that houses the respective image's metadata. Two metadata entries are necessary for building the apparent horizon conic 𝒞: one identifies the imaged planetary body, and the other establishes the epoch. With the target body known, SPICE <cit.> provides its shape matrix 𝒜_P in planet-fixed coordinates P. With the epoch known, SPICE <cit.> provides the spacecraft's planet-relative position r_P (in P coordinates) and planet-relative attitude T_P^B (i.e., the coordinate transformation) from P to the spacecraft body-fixed coordinate system B. Combined, 𝒜_P, r_P, and T_P^B provide 𝒞_B in B coordinates.
Since image processing provides the imaged conic in camera coordinates C, we require the transformation T_B^C to obtain 𝒞_C from 𝒞_B and then apply our camera calibration algorithm. The transformation
T_B^C = T_F^C T_B^F
defines the coordinate transformation from B to C with the focal plane coordinate system F as an intermediate coordinate system. As a charge-coupled device, the Cassini ISS NAC records measurements in F. SPICE reports the T_B^F transformation as a 3-2-1 sequence of Euler rotations with Euler angles θ_3 ≈ 89.93^∘, θ_2 ≈ -0.04^∘, and θ_1 ≈ -89.99^∘. Once in F, a 180^∘ rotation about the camera boresight (i.e., f̂_3 || ĉ_3) defines the transformation T_F^C. Together, T_B^F and T_F^C produce T_B^C, which ultimately provides 𝒞_C in the desired C coordinates.
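For readers wiring this up, the following is a minimal sketch using the spiceypy bindings. The meta-kernel path, epoch, and frame names are placeholder assumptions (they must match the kernels actually loaded); the Euler angles are the values quoted above.

import numpy as np
import spiceypy as spice

def rot(axis, angle_deg):
    """Passive (frame) rotation matrix about axis 1, 2, or 3."""
    c, s = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    if axis == 1:
        return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
    if axis == 2:
        return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

spice.furnsh("cassini_meta.tm")            # placeholder meta-kernel path
et = spice.str2et("2007-05-05T12:00:00")   # placeholder epoch from the label file

# Planet-relative spacecraft position in planet-fixed coordinates P.
r_P, _ = spice.spkpos("CASSINI", et, "IAU_SATURN", "NONE", "SATURN")

# Attitude from P to the spacecraft body frame B (frame names assumed).
T_P_B = spice.pxform("IAU_SATURN", "CASSINI_SC_COORD", et)

# Triaxial radii give the shape matrix A_P = diag(1/a^2, 1/b^2, 1/c^2).
_, radii = spice.bodvrd("SATURN", "RADII", 3)
A_P = np.diag(1.0 / np.asarray(radii) ** 2)

# B -> F as the reported 3-2-1 Euler sequence, then F -> C as a 180-degree
# rotation about the boresight; together they give T_B^C = T_F^C T_B^F.
T_B_F = rot(1, -89.99) @ rot(2, -0.04) @ rot(3, 89.93)
T_F_C = rot(3, 180.0)
T_B_C = T_F_C @ T_B_F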
|
http://arxiv.org/abs/2307.01737v1
|
20230704141230
|
Partial mean-field model for neurotransmission dynamics
|
[
"Alberto Montefusco",
"Luzie Helfmann",
"Toluwani Okunola",
"Stefanie Winkelmann",
"Christof Schütte"
] |
physics.bio-ph
|
[
"physics.bio-ph",
"math.PR"
] |
Partial mean-field model for neurotransmission dynamics
Alberto Montefusco, Luzie Helfmann, Toluwani Okunola, Stefanie Winkelmann, Christof Schütte
========================================================
This article addresses reaction networks in which spatial and stochastic effects are of crucial importance. For such systems, particle-based models allow us to describe all microscopic details with high accuracy. However, they suffer from computational inefficiency if particle numbers and density get too large.
Alternative coarse-grained-resolution models reduce computational effort tremendously,
e.g., by replacing the particle distribution by a continuous concentration field governed by reaction-diffusion PDEs.
We demonstrate how models on the different resolution levels can be combined into hybrid models that seamlessly combine the best of both worlds, describing molecular species with large copy numbers by macroscopic equations with spatial resolution while keeping the stochastic-spatial particle-based resolution level for the species with low copy numbers. To this end, we introduce a simple particle-based model for the binding dynamics of ions and vesicles at the heart of the neurotransmission process. Within this framework, we derive a novel hybrid model and present results from numerical experiments which demonstrate that the hybrid model allows for an accurate approximation of the full particle-based model in realistic scenarios.
§ INTRODUCTION
Models of spatially well-mixed chemical reaction networks
have provided a solid foundation for studying molecular and cellular systems; however, the importance of spatial organization in such systems has increasingly been recognized <cit.>. Interacting molecules commonly occur at low copy numbers and move in crowded and diverse environments, so that both stochasticity and spatial resolution play an essential role when modeling biochemical reaction networks.
Spatial-stochastic simulations have become a prominent tool for understanding how stochasticity at the microscopic level influences the macroscopic behavior of such systems. Recent years have seen increasing interest in particle-based reaction-diffusion models in which all interacting molecules (from ions to entire macromolecules) are single particles diffusing in space, and reactions happen solely if two or more reacting species are in close proximity. Different models and associated simulation environments have been developed: cf. <cit.> for examples and <cit.> for an overview. Moreover, there is an extensive literature on using these particle-based models to describe the interplay between spatial organization and stochasticity <cit.>.
While particle-based models guarantee the level of detail necessary to accurately describe the microscopic dynamics, their simulation typically becomes inefficient (or even practically infeasible) for systems with large copy numbers.
Likewise, so-called agent-based simulations, which are becoming more and more popular for investigating and understanding cellular systems, require cost-effective simulation tools due to the natural complexity of these systems.
In general, this leads to a conflict of interest between computational efficiency and biochemical accuracy.
An alternative to developing high-performance computation methods is to study the systems on a theoretical level by finding macroscopic models which approximate the underlying particle-based dynamics. Such a macroscopic approximation not only allows for more efficient simulations, but also gives us a better understanding of the qualitative and quantitative global features of the system. One approach is to study mean-field approximations which approximate the particle-based dynamics in the limit of large numbers of interacting particles.
Typically, it is shown that the empirical distribution of the particles converges (for an increasing population size) to a concentration field, and the equations governing the particle-based system give rise to a macroscopic equation for this concentration field, e.g., in terms of reaction-diffusion partial differential equations (PDEs) (cf. <cit.>) or stochastic PDEs (see the extensive literature on fluctuating hydrodynamics <cit.>). However, these approaches replace the microscopic, discrete resolution of the particle-based model completely by a continuous field. Additionally, recently, methods have been proposed for seamlessly coupling reaction-diffusion PDEs in one spatial compartment (the “reservoir”) to particle-based simulations in the compartment of interest <cit.>. These coupled approaches, however, also do not solve the conflict of interest if the reaction network under consideration contains molecular species with large copy numbers as well as other species with only a few molecules whose specific spatial positions in the cell play an important role in the reaction process. In this case, one would like to construct hybrid models that seamlessly combine the best of both worlds, describing the high-abundant species by a macroscopic equation for its concentration field while keeping the stochastic-spatial particle-based resolution level for the low-copy-number species, without spatially separating the two descriptions.
An important biochemical reaction network containing both low-abundant and high-abundant species is given by the process of neurotransmission which is summarized in the following.
Background of neurotransmission dynamics.
Neurotransmission is the process of information transfer from one neuron to another ( <ref>). Within the axon terminal of the presynaptic neuron, the signalling molecules, called neurotransmitters, are stored in synaptic vesicles which transport the neurotransmitters to release sites within the so called active zone <cit.>. Upon stimulation by calcium influx, the vesicles fuse with the membrane to release their content of neurotransmitters into the synaptic cleft where they bind to and activate the receptors of the postsynaptic neuron.
The calcium influx is induced by action potentials which trigger the opening of voltage gated calcium channels <cit.>. Calcium ions enter through these channels, diffuse through the axon terminal and bind to the calcium sensors of the vesicles <cit.>. The binding of ions to a vesicle increases the probability for the vesicle's fusion to the membrane. It is assumed that there is a maximum number of ions that can attach to a single vesicle (e.g., five ions per vesicle in <cit.>). After a fusion event, both the vesicle and the release site undergo a recycling procedure before getting available for reuse <cit.>.
Modeling neurotransmission dynamics. Several studies have shown that the process of vesicle fusion and neurotransmitter release is “stochastic” in the sense that an arriving action potential does not always elicit fusion <cit.>, while, on the other hand, spontaneous release in the absence of stimuli is also possible <cit.>. This motivates the consideration of stochastic modeling approaches to describe neurotransmission dynamics. In <cit.>, Kobbersmed et al. introduce a stochastic vesicle fusion model which describes the dynamics of a set of release sites by a Markovian reaction jump process. The model consists of a set of first-order reactions representing the docking/undocking of a vesicle to the release site, the binding/unbinding of calcium ions, and the fusion event. For some of these reactions, the rates depend on the local calcium concentration, which is given as a solution of a PDE taking into account the external calcium concentration and the time point of a stimulus <cit.>. Positions and movement of vesicles and their recycling after fusion, however, are not taken into account;
instead, it is assumed that there is an infinite supply of vesicles available to all release sites independently of their physical position. This model has been analysed from a mathematical perspective in <cit.> with a derivation of the characteristic equations for first- and second-order moments of the output current. In <cit.>, the linear reaction network has been modified by introducing a second-order reaction for the docking of a vesicle to a release site and by adding explicit recovery steps, thereby taking account of the bounded supply of vesicles as well as their recycling.
In this article, we step beyond the available models and consider a spatially resolved particle-based model for the movement and interactions of vesicles and calcium ions in the axon terminal of the presynaptic neuron. Based on this particle-based model, we construct a hybrid model via a limit process for the high-population species of ions, leading to a partial mean-field model coupled to the particle-based model for the low-copy-number species of vesicles. The approximation of the (fully stochastic) dynamics by the hybrid model is well justified by the insight that, in the cases of interest, there are many more ions present in the axon terminal than release sites or vesicles. The derivation of the model will also show the difficulties and possible pitfalls of hybrid model construction. For the sake of simplicity and transparency, we will restrict both the particle-based and the hybrid model to the core of the neurotransmission process, given by the spatial interaction between the ion field and the stochastic dynamics of the vesicles. Many other processes, like transport through and opening/closing of ion channels, the docking of vesicles to release sites, the vesicle recycling process, and the neurotransmitter release itself, are ignored (but can be built in later).
Outline. At first, we introduce the stochastic particle-based reaction-diffusion model in <ref>. The formal derivation of the hybrid model is given in <ref>. The two models are compared in <ref> by means of numerical experiments. Finally, in <ref>, we discuss how to expand the models by integrating further aspects of biological detail.
§ PARTICLE-BASED REACTION-DIFFUSION MODEL
The particle-based reaction-diffusion model for the spatio-temporal dynamics of vesicles and calcium ions is sketched in <ref>. It will be introduced in the following.
We will use capital letters, like X_i, for random variables, and small letters, like x_i, for their possible realizations.
§.§ The configuration space
The spatial domain 𝕏 is a region within the Euclidean space (ℝ^d, ‖·‖), where the position of each of m vesicles is denoted by y_k ∈ 𝕏, k ∈{1, …, m}. In the same domain, the position of each of n calcium ions is denoted by x_i ∈ 𝕏, i ∈{1, …, n}, and each ion carries a further internal variable s_i ∈{0, …, m} with the following meaning: if s_i = k, the i-th ion is bound to the k-th vesicle, while s_i=0 means that the i-th ion is free/unbound. Each vesicle can bind at most n_v ≔ ⌊ a n ⌋ ions with a ratio a ∈ [0, 1].
The configuration space is thus characterized by the triple of vectors (x, s, y) ∈ 𝕏^n × 𝕊_n, m × 𝕏^m, where the space
𝕊_n, m ≔ { s ∈{0,...,m}^n | ∑_i=1^n δ_s_i, k ≤ n_v ∀ k ∈{1, ..., m}}
ensures that each vesicle binds no more than n_v ions.
The particle-based dynamics is given by the stochastic process
(X(t), S(t), Y(t))_t≥ 0 ∈ 𝕏^n × 𝕊_n, m × 𝕏^m,
where X(t)=(X_i(t))_i=1,...,n refers to the ions' positions, S(t)=(S_i(t))_i=1,...,n gives their binding states, and Y(t)=(Y_k(t))_k=1,...,m are the vesicles' positions.
The dynamics is a superposition of two types of stochastic processes: a diffusive component for the positions of both the vesicles and the unbound ions, and a reaction component for the binding of the ions to the vesicles as well as their unbinding.
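As a concrete reference for this notation, a minimal Python container for a realization (x, s, y) might look as follows; the array shapes and the unit-square domain are illustrative assumptions, not part of the model.

import numpy as np

class PBState:
    """Minimal container for the particle-based state (X, S, Y)."""
    def __init__(self, n, m, n_v, dim=2, rng=None):
        rng = rng or np.random.default_rng()
        self.X = rng.uniform(0.0, 1.0, size=(n, dim))  # ion positions
        self.S = np.zeros(n, dtype=int)                # 0 = unbound, k = bound to vesicle k
        self.Y = rng.uniform(0.0, 1.0, size=(m, dim))  # vesicle positions
        self.n_v = n_v

    def occupancy(self, k):
        """Number of ions currently bound to vesicle k (1-indexed)."""
        return int(np.sum(self.S == k))

    def admissible(self):
        """Check the constraint defining S_{n,m}: at most n_v ions per vesicle."""
        return all(self.occupancy(k) <= self.n_v
                   for k in range(1, self.Y.shape[0] + 1))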
§.§ Position dynamics
The k-th vesicle moves according to the overdamped Langevin equation
dY_k(t) = -(∇ V(Y_k(t)) + ∑_ℓ≠ k∇ U(Y_k(t) - Y_ℓ(t)) ) dt + σ^Y dW_k^Y(t) ,
where σ^Y > 0 is the noise intensity, (W_k^Y(t))_t≥ 0, k ∈{1, ..., m}, are d-dimensional independent Wiener processes, V : 𝕏 →ℝ is a potential field, and U : 𝕏 →ℝ generates a short-range repulsion – for instance given by an exclusion force.
In an analogous but simpler fashion, the position of each unbound ion i (with _i(t)=0) evolves according to a stochastic process X_i given by the Brownian motion
dX_i(t) = δ_S_i(t), 0 σ^X dW_i^X(t) , i ∈{1, …, n} ,
with noise intensity σ^X > 0 and independent Wiener processes W_i^X.
A bound ion i is assumed to move with the vesicle k it is attached to (i.e., X_i(t)=Y_k(t) for all times where S_i(t)=k) and only starts moving independently again when it unbinds from it. The spatial trajectories of the ions are thus piecewise continuous with discontinuities restricted to the time points where binding or unbinding occurs.
Both vesicles and ions are restricted to stay in the domain , which is implemented by reflecting boundary conditions.
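A possible Euler–Maruyama discretization of these position dynamics, reusing the PBState container sketched above, is given below; the one-bounce reflection is an assumption that is only valid for small time steps.

import numpy as np

def reflect(x, lo=0.0, hi=1.0):
    """Reflect coordinates back into [lo, hi]^d (one bounce, small-dt regime)."""
    x = np.where(x < lo, 2 * lo - x, x)
    return np.where(x > hi, 2 * hi - x, x)

def em_step(state, dt, sigma_X, sigma_Y, grad_V, grad_U, rng):
    """One Euler-Maruyama step for ion and vesicle positions."""
    m = state.Y.shape[0]
    # Vesicles: drift from the potential V and pairwise repulsion U, plus noise.
    drift = -np.array([
        grad_V(state.Y[k]) + sum(grad_U(state.Y[k] - state.Y[l])
                                 for l in range(m) if l != k)
        for k in range(m)
    ])
    state.Y = reflect(state.Y + drift * dt
                      + sigma_Y * np.sqrt(dt) * rng.standard_normal(state.Y.shape))
    # Unbound ions: pure Brownian motion.
    free = state.S == 0
    state.X[free] = reflect(state.X[free]
                            + sigma_X * np.sqrt(dt)
                            * rng.standard_normal(state.X[free].shape))
    # Bound ions ride along with their vesicle.
    bound = ~free
    state.X[bound] = state.Y[state.S[bound] - 1]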
§.§ Binding and unbinding
When the i-th ion is unbound (S_i(t)=0) and ϵ-close to the k-th vesicle, i.e., X_i(t) ∈ B_ϵ(Y_k(t)) for
B_ϵ(y) ≔ {x ∈ 𝕏 | ‖x-y‖ ≤ ϵ} ,
then it has a certain probability to bind to that vesicle.
We assume that the binding rate only depends on the relative occupancy of the particular vesicle, which we define as
w_k ≔ 1/n_v ∑_i=1^n δ_s_i, k ∈ [0, 1]
given the binding state S(t) = s ∈ 𝕊_n, m.
The binding rate is thus of the form r^+(w_k) with some function r^+ : [0, 1] → ℝ_+ that we specify later. From the moment t where the ion becomes bound, it assumes the position Y_k(t) of the vesicle, such that X_i(t') = Y_k(t') for t' ≥ t, until it unbinds again.
Analogously, an ion that is bound to the k-th vesicle can unbind from it at rate r^-(w_k) with r^- : [0, 1] → ℝ_+. After unbinding, it starts from a new position extracted randomly, according to some distribution μ_k, inside the ball B_ϵ(Y_k(t)). A concrete choice is the uniform distribution on B_ϵ(Y_k(t)). We will formulate the generator of this dynamics in <ref>.
The binding and unbinding rate functions.
It remains to specify the functions r^±. Different choices exist in the literature <cit.> and are based on both theoretical and empirical grounds. The simplest form of the binding rate is given by
r^+(w) = γ^+ (1-w) , w ∈ [0,1]
with γ^+ > 0, which decreases linearly with the number of available binding sites. The unbinding rate may in its simplest form be assumed to be a constant
r^-(w) = γ^-,
with γ^- > 0,
implying that unbinding is independent from the number of currently bound ions.
A typical feature that emerges from the literature and is supported by experimental evidence is the so-called cooperativity: the more ions are bound to the vesicle, the easier for a new ion to bind and the harder for a bound ion to unbind.
This form of attractive force between calcium ions is modeled in different ways in the literature.
* <cit.> suggests a cooperative binding rate of the form
r^+(w) = γ^+ (w + α^+) (1-w) .
Hence, the binding rate not only decreases with a decreasing number of binding sites ∝ (1-w), but also increases with an increasing number of bound ions ∝ (w+α^+) because of an attracting interaction between the calcium ions. The additive constant α^+>0 ensures that binding is also possible when w=0.
* The same work <cit.> suggests a cooperative unbinding rate of the form
r^-(w) = γ^-(1-w+α^-) ,
which makes it linearly harder for the ion to unbind the more ions are already bound. The factor α^->0 ensures that unbinding is also possible when w=1.
* In <cit.>, the cooperative unbinding function is assumed to decay exponentially in w and is of the general form
r^-(w) = γ^- β^w
with β∈ (0,1). Thus, unbinding becomes exponentially harder the more ions are already bound.
Numerical experiments to study the dynamics for the different types of rate functions will be given in <ref>.
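For reference, the rate families above can be collected in a small factory function; the pairing of binding and unbinding variants and the default parameter values below are illustrative choices of ours, not values taken from the cited works.

def make_rates(kind, gamma_p=4.0, gamma_m=2.0, alpha_p=0.1, alpha_m=0.1, beta=0.5):
    """Return (r_plus, r_minus) as functions of the relative occupancy w in [0, 1]."""
    if kind == "plain":
        # Non-cooperative: linear binding, constant unbinding.
        return (lambda w: gamma_p * (1.0 - w),
                lambda w: gamma_m)
    if kind == "cooperative-linear":
        # Cooperative binding, linearly harder unbinding.
        return (lambda w: gamma_p * (w + alpha_p) * (1.0 - w),
                lambda w: gamma_m * (1.0 - w + alpha_m))
    if kind == "cooperative-exponential":
        # Cooperative binding, exponentially harder unbinding.
        return (lambda w: gamma_p * (w + alpha_p) * (1.0 - w),
                lambda w: gamma_m * beta ** w)
    raise ValueError(kind)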
§ PARTIAL MEAN-FIELD MODEL
When following the detailed trajectories of all particles is either unfeasible or uninteresting, a description of the system in terms of a collective variable may give us the possibility of faster simulations and a better understanding of the qualitative and quantitative global features of the system. Furthermore, when the number of particles is sufficiently large, there is a chance to have a simpler description of the system by reducing the noise – or part of it – to some deterministic dynamics.
In our model, we are interested in keeping track of the spatial concentration of unbound calcium ions and the positions and occupancies of all vesicles. The goal is to derive, in the limit where the number of calcium ions is sufficiently large, a PDE for the spatial calcium concentration coupled to an ordinary differential equation (ODE) for the relative occupancy state w(t) of the vesicles,
while keeping the particle-based resolution for their movement.
For the sake of clarity, we confine the formal derivation in <ref>–<ref> to a simplified setting and focus on what happens around a single vesicle (m = 1) with a fixed position Y_1(t) = y ∈ 𝕏 for all t. The configuration space is then 𝕏^n × 𝕊_n, where 𝕊_n ≔ 𝕊_n, 1, with states of the form (x, s) for the positions and binding states of the n calcium ions. Moreover, we will neglect boundary conditions. The extension to the complete model will be considered in <ref>.
§.§ Derivation of the generator for the empirical measure
The central object in our derivation is the empirical measure
ρ^n_x, s( dx', s') ≔ 1/n ∑_i=1^n δ_x_i( dx') δ_s_i, s' ,
which counts the relative number of ions in the volume dx' and with binding state s' (unbound if s' = 0, bound if s' = 1).
Instead of manipulating the stochastic processes directly, we take a “weak” viewpoint and work with their associated infinitesimal generators. In the present section, we derive, from the infinitesimal generator (<ref>) for (X(t), S(t)), the generator (<ref>) for the measure-valued process ρ^n_X(t), S(t). Then, in <ref>, we look for its deterministic limit as n →∞, and finally project the dynamics further. The derivations will not be rigorous, but the language will be close to the mathematical formalism that would be necessary for a full proof.
The starting point of the derivation requires the infinitesimal generator for the process (X(t), S(t))_t ≥ 0. This contains a diffusion component for the positions of the calcium ions, as well as a binding and an unbinding component. Since, in <ref>, we will perform the limit when the number of ions n is large, we stress the dependence of the generator on the parameter n by denoting it as L^n. For any observable f ∈ C^2,0(𝕏^n × 𝕊_n), we have
(L^n f)(x, s)
= σ^2/2∑_i=1^n δ_s_i, 0 Δ_x_i f(x, s)
+ r^+(w) ∑_i=1^n 1_B_ϵ(y)(x_i) δ_s_i, 0 [f(x_1, ..., x_i-1, y, x_i+1, ..., x_n, s_1, ..., 1-s_i, ..., s_n) - f(x, s) ]
+ r^-(w) ∑_i=1^n δ_s_i, 1 ∫_𝕏 μ( dx'_i) [f(x_1, ..., x_i-1, x'_i, x_i+1, ..., x_n, s_1, ..., 1-s_i, ..., s_n) - f(x, s) ] ,
where w ≔ w_1 ∈ [0,1] is a placeholder for the relative occupancy of the single vesicle with fixed position y, given by
w=1/⌊ an ⌋∑_i=1^n δ_s_i, 1 .
The first term in (<ref>) contains the second derivative of the observable f and corresponds to diffusion of the unbound ions, where σ ≔ σ^X is the noise intensity. The second term refers to the binding of the i-th ion (if unbound and ϵ-close to the vesicle), which is placed at the position y of the vesicle. The third term refers to the unbinding of the i-th ion (if bound) and its replacement around the position of the vesicle.
A uniform replacement corresponds to the distribution
μ( dx') = 1/|B_ϵ(y)| 1_B_ϵ(y)(x') dx' .
The infinitesimal generator (<ref>) acts on observables for the pair (x, s). To operate the passage from (x, s) to the empirical measure, we apply the generator to observables of the form
f(x, s) = g(ρ^n_x, s) ,
for g ∈ C^2(𝒫(𝕏 ×{0, 1}))
which depend on (x, s) only through the empirical measure. Ideally, we would hope that the function L^n f also depends on (x, s) solely through the empirical measure and, as a consequence, we would be able to identify a generator for the Markov process ρ^n_X, S. This will be our case, as we will see in the following.[In general, one cannot accomplish this procedure so easily, but often can still recover an autonomous equation for the empirical measure in the deterministic limit, namely when n →∞ <cit.>.] The final step is to take the limit of the generator for ρ^n_X, S as n →∞: in our situation, we obtain another generator which contains only a drift term and thus corresponds to a deterministic PDE.
We thus need to find good observables g for the empirical measure, such that they fully characterize the generator: the set of observables can be smaller than the domain of the generator, but still has to be big enough.[According to semigroup theory, a subset of the domain that fully characterizes the generator is a core <cit.>.] Since the empirical measure is an infinite-dimensional object, it is convenient to consider a finite-dimensional projection by testing it with a finite set of continuous and bounded functions ϕ_ℓ ∈ C_b(𝕏 ×{0, 1}), ℓ = 1, ..., q:
ρ ⟼ ( ⟨ϕ_1, ρ⟩, ⟨ϕ_2, ρ⟩, …, ⟨ϕ_q, ρ⟩)
for ρ ∈ 𝒫(𝕏 ×{0, 1}),
where
⟨ϕ, ρ⟩ ≔ ∑_s ∈{0, 1} ∫_𝕏 ϕ(x, s) ρ( dx, s) .
The projection onto a finite-dimensional space makes the successive calculations manageable – these are reduced to ordinary calculus – without loss of generality, since we consider all possible projections, i.e., all possible test functions.
The corresponding simplified observables for the empirical measure are the cylindrical functions <cit.>
g(ρ) ≔ ψ( ⟨ϕ_1, ρ⟩, ⟨ϕ_2, ρ⟩, …, ⟨ϕ_q, ρ⟩) ,
with ψ ∈ C^2(ℝ^q). We have thus traded a function g on an infinite-dimensional space for a function ψ on a Euclidean space. We now make use of the cylindrical functions g and consider the following observables for (x, s):
f(x, s) = g(ρ^n_x, s) = ψ( ⟨ϕ_1, ρ^n_x, s⟩, ⟨ϕ_2, ρ^n_x, s⟩, …, ⟨ϕ_q, ρ^n_x, s⟩)
= ψ( 1/n∑_i=1^n ϕ_1(x_i, s_i), 1/n∑_i=1^n ϕ_2(x_i, s_i), …, 1/n∑_i=1^n ϕ_q(x_i, s_i) ) .
Since these functions depend on (x, s) only through the empirical measure, they do not depend on the permutations of particles, namely they are invariant under any permutation that is performed in both x and s. The goal then is to show that the function L^n f also depends on (x, s) only through the empirical measure.
As a result of the application of the generator (<ref>) to the observables (<ref>), we obtain
(L^n f)(x, s)
= σ^2/2∫_𝕏 ρ^n_x, s( dx', 0) ∑_ℓ=1^q ∂_ℓψ(⟨ϕ_1, ρ^n_x, s⟩, ...) Δ_1ϕ_ℓ(x', 0)
+ σ^2/2n∫_𝕏 ρ^n_x, s( dx', 0) ∑_k=1^q ∑_ℓ=1^q ∂^2_k ℓψ(⟨ϕ_1, ρ^n_x, s⟩, ...) ∇_1 ϕ_k(x', 0) ·∇_1 ϕ_ℓ(x', 0)
+ n ∫_𝕏 ρ^n_x, s( dx', 0) 1_B_ϵ(y)(x') r^+(w) [ ψ(⟨ϕ_1, ρ^n_x, s⟩ - 1/nϕ_1(x', 0) + 1/nϕ_1(y, 1), ...) - ψ(⟨ϕ_1, ρ^n_x, s⟩, ...) ]
+ n ∬_𝕏^2 ρ^n_x, s( dx', 1) μ( dx”) r^-(w) [ ψ(⟨ϕ_1, ρ^n_x, s⟩ - 1/nϕ_1(x', 1) + 1/nϕ_1(x”, 0), ...) - ψ(⟨ϕ_1, ρ^n_x, s⟩, ...) ] ,
where ∇_1 and Δ_1 act on the first variable, and we clearly need that ϕ_ℓ ∈ C^2,0(𝕏 ×{0, 1}) for every ℓ. The placeholder w is now interpreted in terms of ρ^n_x, s as
w = n/⌊ an ⌋ ∫_𝕏 ρ^n_x, s( dx', 1) = n/⌊ an ⌋ ρ^n_x, s({y}, 1) .
Note, indeed, that all bound ions are placed at y, and therefore the measure ρ(·, 1) concentrates fully on y.
The complete steps of the calculations are shown in Appendix <ref>.
As hoped for, the generator depends on (x, s) only through the empirical measure.
To write down the final expression of the generator for the process ρ^n_X, S of empirical measures, we need the derivatives of the cylindrical functions g(ρ), which we compute via the chain rule. Since the functional derivative of the linear function ρ ↦ ⟨ϕ, ρ⟩ (see (<ref>)) is simply ϕ ∈ C_b(𝕏 ×{0,1}), we have
g'(ρ) = ∑_ℓ=1^q ∂_ℓψ(⟨ϕ_1, ρ⟩, ..., ⟨ϕ_q, ρ⟩) ϕ_ℓ ∈ C_b(𝕏 ×{0, 1}) ,
g”(ρ) = ∑_k=1^q ∑_ℓ=1^q ∂^2_k ℓψ(⟨ϕ_1, ρ⟩, ..., ⟨ϕ_q, ρ⟩) ϕ_k ⊗ϕ_ℓ ∈ C_b(𝕏 ×{0, 1}) ⊗ C_b(𝕏 ×{0, 1}) .
Given these derivatives, we find an expression that can be written fully in terms of the function g,
(L^n f)(x, s) = σ^2/2∫_𝕏 ρ^n_x, s( dx', 0) Δ_1(g'(ρ^n_x, s))(x', 0) + σ^2/2n∫_𝕏 ρ^n_x, s( dx', 0) (∇_1 ·∇_3) (g”(ρ^n_x, s))(x', 0, x', 0)
+ n ∫_B_ϵ(y) ρ^n_x, s( dx', 0) r^+(w) [ g(ρ^n_x, s - 1/nδ_x' δ_0 + 1/nδ_y δ_1) - g(ρ^n_x, s) ]
+ n ∬_𝕏^2 ρ^n_x, s( dx', 1) μ( dx”) r^-(w) [ g(ρ^n_x, s - 1/nδ_x' δ_1 + 1/nδ_x” δ_0) - g(ρ^n_x, s) ] ,
and thus, after replacing ρ^n_x, s by ρ, arrive at the generator Q^n for the Markov process ρ^n_X, S:
(Q^n g)(ρ) = σ^2/2∫_𝕏 ρ( dx, 0) Δ_1(g'(ρ))(x, 0) + σ^2/2n∫_𝕏 ρ( dx, 0) (∇_1 ·∇_3) (g”(ρ))(x, 0, x, 0)
+ n ∫_B_ϵ(y) ρ( dx, 0) r^+(w) [ g(ρ - 1/nδ_x δ_0 + 1/nδ_y δ_1) - g(ρ) ]
+ n ∬_𝕏^2 ρ( dx, 1) μ( dx') r^-(w) [ g(ρ - 1/nδ_x δ_1 + 1/nδ_x' δ_0) - g(ρ) ] .
This is the generator of an infinite-dimensional measure-valued Markov process and, for us, represents the starting point to derive the partial mean-field model. The expression is very general and accommodates measures ρ that do not have any Lebesgue density – like for instance Dirac measures.
Before performing the last step and sending n →∞ in (<ref>), we examine the various terms in the generator and highlight their contribution to the measure-valued process in the following remark.
The operator Q^n contains three types of terms <cit.>:
* A first-derivative term, which corresponds to a drift. This is the contribution
σ^2/2∫_𝕏 ρ( dx, 0) Δ_1 φ(x, 0)
for a test function φ ∈ C^2,0(𝕏 ×{0, 1}). This term alone is the weak form of a parabolic diffusion equation for ρ(·, 0), which in strong form would be
∂_t ρ(x, 0) = σ^2/2 Δ_1 ρ(x, 0) .
* A second-derivative term which corresponds to a stochastic diffusion and has one order in n less than the drift one. The underlying bilinear form (a diffusion tensor) is the integral form
D(φ_1, φ_2) ≔ σ^2/2n∫_𝕏 ρ( dx, 0) ∇_1 φ_1(x, 0) ·∇_1 φ_2(x, 0) .
To display a strong form, we can perform a formal integration by parts and obtain
D(φ_1, φ_2) = - σ^2/2n∫_𝕏 ∇_1 ·(ρ(x, 0) ∇_1 φ_1(x, 0) ) φ_2(x, 0) dx .
The “square root” of the diffusion matrix is the noise intensity that would appear in the corresponding stochastic partial differential equation, where it acts on the space-time white noise (cf. <cit.>).
* A finite-difference term, which corresponds to jumps of the form
ρ ⟼ ρ - 1/nδ_x δ_0 + 1/nδ_y δ_1 with transition rate density n r^+(w) ρ( dx, 0) , and
ρ ⟼ ρ - 1/nδ_x δ_1 + 1/nδ_x' δ_0 with transition rate density n r^-(w) ρ( dx, 1) μ( dx') .
The three terms essentially reflect the features of the original particle-based process (X, S): the diffusion of the ions has been translated into a drift of the empirical measure and a lower-order diffusion term; the jumps have remained the same, with rates that are proportional to the empirical measure.
§.§ Deterministic limit
As n →∞, we expect the process ρ^n_X, S to become more and more deterministic, namely concentrated on a continuous measure-valued trajectory. The trajectory is the solution of a measure-valued PDE. Here we give a heuristic derivation of such a PDE by performing a formal Taylor expansion around 1/n = 0 of the jump terms in the generator (<ref>):
n [ g(ρ - 1/nδ_x δ_s + 1/nδ_x' δ_s') - g(ρ) ] = n [ ψ(⟨ϕ_1, ρ⟩ - 1/nϕ_1(x, s) + 1/nϕ_1(x', s'), ...) - ψ(⟨ϕ_1, ρ⟩, ...) ]
= ∑_ℓ=1^q ∂_ℓψ(⟨ϕ_1, ρ⟩, ...) (ϕ_ℓ(x', s') - ϕ_ℓ(x, s)) + o(1)
= g'(ρ)(x', s') - g'(ρ)(x, s) + o(1) .
Then, we replace the corresponding terms in the generator (<ref>) and obtain, upon sending n →∞,
(Q^n g)(ρ) → (Q^∞ g)(ρ) = σ^2/2∫_𝕏 ρ( dx, 0) Δ_1(g'(ρ))(x, 0)
+ ∫_B_ϵ(y) ρ( dx, 0) r^+(w) ( g'(ρ)(y, 1) - g'(ρ)(x, 0) )
+ ∬_𝕏^2 ρ( dx, 1) μ( dx') r^-(w) ( g'(ρ)(x', 0) - g'(ρ)(x, 1) )
with w = 1/a ρ({y}, 1).
The limit generator Q^∞ contains only first derivatives of the observables and therefore is the generator of a deterministic (measure-valued) process.[A generator L of the form (Lf)(x) = A(x) · f'(x) containing only first derivatives of the argument is associated with the deterministic differential equation ẋ = A(x); such a generator is the transpose operator of the operator that generates the Liouville equation.]
Its paths are the solutions (ρ(·; t))_t≥ 0 of the equation
d/ dt ∑_s ∈{0, 1} ∫_𝕏 ρ( dx, s; t) φ(x, s) = σ^2/2∫_𝕏 ρ( dx, 0; t) Δ_1φ(x, 0) + ∫_B_ϵ(y) ρ( dx, 0; t) r^+(w(t)) ( φ(y, 1) - φ(x, 0) )
+ ∬_𝕏^2 ρ( dx, 1; t) μ( dx') r^-(w(t)) ( φ(x', 0) - φ(x, 1) ) ,
for all test functions φ ∈ C^2,0(𝕏 ×{0, 1}).
As a final step, we aim to find the evolution equations for the relative occupancy w(t)
and the concentration of unbound ions. Since we write them in strong form, we define the concentration as the Lebesgue density of ρ(·, 0):
c(x) ≔ ρ(x, 0) .
If μ has a Lebesgue density too, we can perform an integration by parts in (<ref>) and find
ċ(x; t) = σ^2/2Δ c(x; t) - 1_B_ϵ(y)(x) r^+(w(t)) c(x; t) + r^-(w(t)) μ(x) a w(t) .
The equation for w(t) is recovered from (<ref>) by using the concentration property w(t) = 1/a ρ({y}, 1; t):
ẇ(t) = r^+(w(t)) 1/a∫_B_ϵ(y) c(x; t) dx - r^-(w(t)) w(t) .
(<ref>) marks a crucial step in this derivation. We started with a purely discrete object (the relative occupancy w_k defined in (<ref>) with a finite state space) and replaced it with the object w(t), which evolves continuously in the interval [0,1].
The continuous occupancy w(t) is the n→∞ limit of the sequence of discrete occupancies. This step is based on a scaling assumption: when the number n of ions grows, the number of ions that can be bound to an individual vesicle grows as well (its maximum n_v = ⌊ a n ⌋ scales with n). If n_v did not scale with n, i.e., if there were an absolute upper bound to the number of ions that can be bound to a vesicle, then for growing n all vesicles would be filled with ions after shorter and shorter times, simply because there are more and more unbound ions. Thus, the definition of w_k in (<ref>) as a quantity relative to n is crucial to getting a reasonable hybrid model with good approximation properties. It is very important to note that this scaling assumption does not contradict the findings in the biological literature, where it is often assumed that vesicles bind maximally 5 calcium ions, with an estimate of n = 10^2 ions in the spatial domain of interest. There is no contradiction since the limit n→∞ is a mathematical abstraction used to define a meaningful mean-field limit and not biological reality, and since our scaling assumption can be calibrated to agree with the numbers mentioned in the biological literature by setting a=0.05 for n = 10^2 ions.
§.§ Full hybrid model
In the previous section we derived the partial mean-field model in the simplified setting of one vesicle with a fixed position. More generally, from the particle-based dynamics in <ref>, one can derive the following partial mean-field model
ċ(x; t) = (σ^X)^2/2Δ c(x; t) + ∑_k=1^m (- 1_B_ϵ(Y_k(t))(x) r^+(w_k(t)) c(x; t) + r^-(w_k(t)) μ_k(x) a w_k(t) ),
ẇ_k(t) = r^+(w_k(t)) 1/a∫_B_ϵ(Y_k(t)) c(x; t) dx - r^-(w_k(t)) w_k(t) ,
dY_k(t) = -(∇ V(Y_k(t)) + ∑_ℓ≠ k∇ U(Y_k(t) - Y_ℓ(t)) ) dt + σ^Y dW_k^Y(t)
for k∈{1,…,m}, where μ_k defines the distribution of the ion's position after unbinding from vesicle k. The model is composed of a PDE (<ref>) for the concentration of unbound ions c, a collection of ODEs (<ref>) for the occupancies w_k, and a collection of stochastic differential equations (SDEs) (<ref>) for the positions of the vesicles Y_k.
The boundary conditions corresponding to the particle-based dynamics are given by a Neumann no-flux condition n ·∇ c = 0 on the domain boundary ∂𝕏 and reflection from ∂𝕏 for the vesicle positions Y_k. It follows that at all times the distribution of ions (bound or unbound) is conserved:
∫_𝕏 c(x;t) dx + ∑_k=1^m a w_k(t) = 1 ∀ t≥ 0,
given that ∫_𝕏 c(x;0) dx + ∑_k=1^m a w_k(0) = 1.
§ NUMERICAL EXPERIMENTS
For the subsequently discussed numerical experiments we
employed an Euler-Maruyama discretization of the SDEs (<ref>) and (<ref>) to simulate the particle-based dynamics.
The solution of the PDE (<ref>) for the hybrid model was approximated by means of a linear-implicit discretization in time and a finite element method in space. For the corresponding ODE (<ref>) the implicit Euler method was applied, and the SDE (<ref>) was discretized in time using again the Euler-Maruyama scheme.
It was checked that decreasing time step and grid size yields identical solutions up to sufficient numerical precision.
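For orientation, a stripped-down sketch of the single-vesicle mean-field system (<ref>)–(<ref>) on a uniform 2D grid could look as follows. It deliberately uses an explicit Euler step and a 5-point Laplacian instead of the linear-implicit FEM/implicit-Euler scheme described above, so the usual explicit step-size restrictions apply.

import numpy as np

def hybrid_step(c, w, dt, h, sigma, a, r_plus, r_minus, ball_mask, mu):
    """One explicit time step for the coupled c/w system (single fixed vesicle).

    c         : 2D array, concentration of unbound ions on a grid with spacing h
    ball_mask : boolean mask of grid cells inside B_eps(y)
    mu        : 2D array, replacement density (sum(mu) * h**2 == 1 assumed)
    """
    # 5-point Laplacian with no-flux (reflecting) boundaries via edge padding.
    cp = np.pad(c, 1, mode="edge")
    lap = (cp[2:, 1:-1] + cp[:-2, 1:-1]
           + cp[1:-1, 2:] + cp[1:-1, :-2] - 4 * c) / h**2
    # Binding removes ions near the vesicle; unbinding re-injects mass via mu.
    dc = 0.5 * sigma**2 * lap - ball_mask * r_plus(w) * c + r_minus(w) * mu * a * w
    # Occupancy ODE: gain from ions inside the ball, loss from unbinding.
    dw = r_plus(w) * (1.0 / a) * np.sum(c[ball_mask]) * h**2 - r_minus(w) * w
    return c + dt * dc, w + dt * dw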
§.§ Choice of parameter values
As a base setting, we consider a bounded region 𝕏 = [0,1]^2 ⊂ ℝ^2, as well as n = 10^2 calcium ions, m=2 vesicles and a=0.05; thus each vesicle has n_v = 5 binding sites. For the rate functions we first neglect cooperativity and assume the form r^+(w) = γ^+ (1-w) and r^-(w) = γ^- with γ^+=4, γ^-=2, combined with an interaction radius of ϵ=0.2. Later, in <ref>, we will also consider other rate functions based on cooperativity.
The vesicles move towards the lower domain boundary due to the potential field V(y)=0.25 y^(2) for y=(y^(1),y^(2))∈ℝ^2 and are affected by short-range repulsion from other vesicles through the potential U(y_k-y_ℓ) = 0.05 exp(-5‖y_k-y_ℓ‖) for y_k,y_ℓ∈ℝ^2. In all experiments, we assume a noise intensity of σ^X=0.25 for ions and of σ^Y=0 for vesicles. While we want the vesicle dynamics to be stochastic in general, we set σ^Y=0 in order to make the hybrid model deterministic, (solely) to simplify the analysis below significantly.
§.§ Comparison of particle-based and hybrid dynamics
<ref> and <ref> show the evolution of the spatial distribution of n = 10^2 calcium ions and the position and occupancy status of m=2 vesicles for both the stochastic particle-based dynamics and the deterministic dynamics given by the hybrid model. In addition, the ensemble average of the particle-based dynamics with respect to 10^4 simulations is depicted. The parameter values are given in <ref>; the initial positions of the ions were selected randomly from a uniform distribution, and the two vesicles start with occupancy w_k(t=0)=0, k=1,2.
For the given choice of parameter values, the hybrid model very well reproduces the average behaviour of the particle-based dynamics. Nonetheless, it is important to note that a single particle-based realization is still highly stochastic and can deviate substantially from the average, which the hybrid model is incapable of capturing. Also note that vesicle k=1 has a lower occupancy status than vesicle k=2 due to its position. The vesicle's proximity to the domain boundary results in it interacting with fewer calcium ions. Similar parameter values give rise to similarly high approximation quality.
§.§ Different parameter values and (un-)binding functions
In this section we examine how well the dynamics given by the hybrid model approximates the average occupancy status of the particle-based dynamics when comparing n = 10^2 to n = 10^3. The average quantities for the particle-based dynamics are computed using an ensemble of 5·10^3 resp. 5·10^2 simulations. We will change the form of the rate functions r^± to investigate cooperative and non-cooperative behavior as given by the rate functions from <ref> under different values of the rate parameters γ^+, α^±, β, while fixing γ^-=2.[The rate γ^-=2 may be fixed since by changing γ^+ we vary the ratio of the binding to unbinding rates.] For simplicity we only consider a single vesicle (m=1) and denote w(t) := w_1(t).
When neglecting cooperativity, the approximation quality is already very good for n= 10^2 and a wide range of rate parameter values, see <ref>.
In contrast, when binding or unbinding is cooperative, the approximation quality for n= 10^2 depends on the specific values of the rate parameters, see <ref>. The approximation is good for some values, while for others, there are discrepancies between the average particle-based dynamics and the hybrid model. However, no clear pattern emerges to explain these discrepancies. This indicates that, for certain cooperative rate functions and parameter values, a higher number of calcium ions is required for a good approximation between the two models. For n=10^3 the approximation quality is high for all tested combinations of rate values.
§ MODEL EXTENSIONS
The particle-based model introduced above is based on several simplifications and assumptions. First of all, the model is restricted to the pure ion-vesicle binding process. Even regarding this process alone, there are aspects that are not included in the model as it was presented above, e.g., spatial dependence of binding rates and/or noise, effects of charges, buffer proteins, etc.
Furthermore, the particle-based model ignores many other parts of the neurotransmission process as a whole, like transport through and opening and closing of ion channels, the docking of vesicles to release sites, the recycling of vesicles after release or the neurotransmitter release process itself.
Next, we will shortly outline how the presented particle-based model for the ion-vesicle binding process might be improved. Then, we will show that the model can properly be extended to incorporate ignored parts of the whole neurotransmission process by illustrating how to incorporate
ion transport through an ion channel.
§.§ Improved models for the ion-vesicle binding process
Space-dependent rates and noise. The particle-based model is based on several specific assumptions about the ion-vesicle binding process. For example, it is assumed above that the binding process happens with equal rate in all of the spatial domain considered. In <cit.>, the authors have postulated that the binding rate is very small away from the membrane, and that additional molecular structures anchored at the membrane may support ion binding. One idea would be to take a non-zero binding rate away from the active zone (smaller than the binding rate at the active zone) and to choose a clearly larger dissociation rate away from the active zone. As soon as such a mathematical model for the spatial dependence of the (un-)binding rate existed, it would be easy to include it into the particle-based and thus also into the partial mean-field model. Furthermore, the diffusion constant might depend on the position of the ions/vesicles. This could be incorporated by making the noise intensity factors in s (<ref>) and (<ref>) position-dependent in the particle-based model. Both improvements would lead to obvious generalization in the partial mean-field model.
Buffer proteins. Moreover, in the literature one also finds models that include the reaction of calcium ions with buffer proteins, through which many of the ions in the spatial domain of interest or ions that enter through the calcium channel get bound to buffer proteins and thus only a certain portion of ions eventually reach the vesicles <cit.>. Clearly, this could be considered by incorporating an additional species of particles (buffer proteins) with its own diffusive position dynamics and (un-)binding reactions. Consequently, the hybrid model would have to be changed accordingly, e.g., by introduction of an additional PDE for the distribution of buffer proteins. These extensions could be guided by <cit.>, where a deterministic PDE-ODE model of diffusion of calcium ions and reactions with buffer proteins and vesicles is described.
Charges. Another assumption was to ignore the charge of the ions. While these charges may be screened by different effects within the cellular environment, they should not be ignored completely. Even if we assume that the effect of charge on binding and unbinding has been considered in the respective rates, there will be an effect on the position dynamics of ions and charge-carrying vesicles. While the effect of charge on the motion of the vesicles might be modelled by means of additional repulsion or attraction terms in the potential U of (<ref>), the diffusion equation (<ref>) would have to be complemented by analogous terms modelling the screened electrostatic repulsion between the ions. A candidate would be
dX_i(t) = δ_S_i(t), 0 (- ∑_j ≠ i∇Φ(X_i(t) - X_j(t)) dt + σ^X dW_i^X(t)), i ∈{1, …, n},
where Φ denotes the screened electrostatic potential of the ions. The introduction of these terms changes the partial mean-field model accordingly, that is, the PDE (<ref>) for the distribution of unbound ions c(x;t) gets additional terms and takes the form of a (generalized) Nernst-Planck equation or other electrodiffusion models, cf. <cit.>:
ċ(x; t) = … + ∇·( c(x; t) ∇∫_𝕏 Φ(x - x') c(x'; t) dx' ) ,
where the inner integral plays the role of the solution of Poisson's equation.
Potentially, one would also have to add analogous terms for the electrostatic interaction between ions and charged vesicles.
§.§ Adding transport through an ion channel
The influx of ions through an ion channel can easily be included in both the particle-based and the partial mean-field model. For the sake of simplicity, we subsequently describe only the influx case. The outflux case can be handled analogously.
Particle-based model.
When we want to include an ion channel through which ions can enter the domain, we may model this by assuming that the n ions can not only be (i) unbound and in the domain (s_i=0), or (ii) bound and in the domain (s_i=k>0), but also (iii) outside of the domain, denoted by s_i=-1. Then, ions that are outside the domain (s_i=-1) can, at a certain rate κ (constant or time-dependent to ensure a constant inflow number), enter the domain at the channel location x_ch ∈ 𝕏, i.e., their position just after entering is given by X_i=x_ch ∈ 𝕏, or is drawn according to a certain distribution centered at x_ch.
As soon as these ions entered the domain, they are governed by the same rules (diffusion, reactions) as outlined above.
Hybrid model.
To include an ion channel through which ions can enter the domain, we also model the time-dependent amount of ions outside of the domain, c_out(t). These ions can enter the domain through the channel at rate κ, leading to the ODE
ċ_out(t) = - κ c_out(t).
In the case that the location x_ch ∈ 𝕏 of the channel lies inside of the domain and not at the boundary, we add the following last term to the PDE (<ref>):
ċ(x;t) = σ^2/2Δ c(x;t) + ∑_k=1^m (…) + κ c_out(t) p(x),
where p: 𝕏 →ℝ^+ is a non-negative function that integrates to one and determines how the ions enter the domain, e.g., p(x) = δ(x-x_ch).
Now it holds that ∫_𝕏 c(x;t) dx + ∑_k a w_k(t) + c_out(t) = 1 for all times t≥ 0, in analogy to (<ref>), assuming that this holds for t=0.
Assuming instead that the channel lies on the domain boundary, x_ch ∈ ∂𝕏, we replace the Neumann no-flux boundary conditions by n ·∇ c = κ c_out p(x) on ∂𝕏, where p: ∂𝕏 →ℝ^+ is a non-negative function integrating to one along the boundary, e.g., p(x) = δ(x-x_ch).
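As a sketch, the interior-channel variant amounts to one extra ODE and one extra source term on top of the grid solver from the previous section (the hybrid_step helper above); p_grid is a hypothetical discretized profile with sum(p_grid) * h**2 == 1.

def influx_step(c, c_out, w, dt, kappa, p_grid, h, **hybrid_kwargs):
    """One explicit step of the hybrid model with channel influx:
    c_out decays at rate kappa and the same mass enters with profile p_grid."""
    c, w = hybrid_step(c, w, dt, h=h, **hybrid_kwargs)  # reaction-diffusion part
    c = c + dt * kappa * c_out * p_grid                 # source term in the PDE
    c_out = c_out - dt * kappa * c_out                  # ODE for the reservoir
    return c, c_out, w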
§.§ Further extensions
Other aspects of the neurotransmission process can be included in similar ways. For example, docking of vesicles to release sites can be integrated by fixing the vesicle position to the membrane/boundary with a certain binding rate upon close contact and starting a new form of position dynamics outside of the boundary for describing neurotransmitter release and diffusion. In the hybrid model, this would lead to an additional PDE for the distribution of neurotransmitter in the spatial domain on the outside of the boundary with additional source terms upon binding of a vesicle to the boundary.
In conclusion, the particle-based model is flexible enough to allow for the incorporation of all aspects of the whole neurotransmission process, as long as good models and parameters (rates, noise intensities, etc.) for the effects to be incorporated become available. The transfer of these additional aspects to the hybrid model then follows the same mathematical recipe as in the derivation above, that is, an (almost) automated derivation process, except for the scaling assumptions that have to be made (cf. <ref>).
§ CONCLUSION
This article addresses reaction networks in which spatial and stochastic effects are of crucial importance. For such systems, particle-based models allow us to describe all microscopic details with high accuracy. However, they suffer from computational inefficiency if particle numbers and density get too large. Alternative models refrain from describing all microscopic details. They reduce the computational effort tremendously by introducing, e.g., a concentration field to represent the particle density, and utilize reaction-diffusion PDEs or similar macroscopic descriptions for the evolution of the concentration field.
The goal of this work is to demonstrate how models on the different resolution levels can be combined into hybrid models that seamlessly combine the best of both worlds, describing molecular species with large copy numbers by macroscopic equations for its concentration field while keeping the stochastic-spatial particle-based resolution level for the low-copy-number species.
To this end, we introduced a simple particle-based model for the ion-vesicle binding process at the heart of the neurotransmission process. Then, we derived a novel hybrid model and presented numerical experiments that demonstrate that the hybrid model allows for an accurate approximation of the full particle-based model in realistic scenarios. We also discussed how to extend the particle-based model in order to incorporate details and additional aspects of the neurotransmission process presently ignored. It is easy to see how these extensions would result in analogous changes of the hybrid model.
The door is now open to construct hybrid models for other reaction networks with spatial stochastic effects, where one molecular species is only present in low copy numbers in contrast to other high-population species. However, the present work also shows that, as usual, the devil is in the details. The form and the approximation properties of the hybrid model crucially depend on the specific scaling properties used. In this work, this is most visible when we revisit the way the uptake of ions by one vesicle is modeled: in the particle-based model the ion occupancy of a vesicle is a discrete number; in the hybrid model it becomes a continuous variable that scales with n, the number of ions, see <ref>. This kind of scaling assumption will have to be made in every specific case. Further research will have to show which scaling strategies are appropriate for which realistic scenario.
Acknowledgments.
This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG) through grant CRC 1114 Scaling Cascades in Complex Systems (project no. 235221301) and under Germany's Excellence Strategy through grant EXC-2046 The Berlin Mathematics Research Center MATH+ (project no. 390685689).
Code Availability. The code is available at <github.com/LuzieH/neuro>.
§ DETAILS OF THE DERIVATION
In this appendix, we report the full calculations that bring us from the generator (<ref>) for (, ) to the intermediate expression (<ref>).
We first compute some useful quantities, assuming ϕ, ϕ_ℓ ∈ C^2,0(𝕏 ×{0, 1}) and for f given by (<ref>):
⟨ϕ, ρ^n_x, s⟩ = 1/n∑_i=1^n ϕ(x_i, s_i) ,
∇_x_i⟨ϕ, ρ^n_x, s⟩ = 1/n∇_1 ϕ(x_i, s_i) ,
∇_x_i f(x, s) = ∇_x_iψ(⟨ϕ_1, ρ^n_x, s⟩, ..., ⟨ϕ_q, ρ^n_x, s⟩)
= 1/n∑_ℓ=1^q ∂_ℓψ(⟨ϕ_1, ρ^n_x, s⟩, ..., ⟨ϕ_q, ρ^n_x, s⟩) ∇_1 ϕ_ℓ(x_i, s_i) ,
Δ_x_i f(x, s) = Δ_x_iψ(⟨ϕ_1, ρ^n_x, s⟩, ..., ⟨ϕ_q, ρ^n_x, s⟩)
= 1/n∑_ℓ=1^q ∂_ℓψ(⟨ϕ_1, ρ^n_x, s⟩, ..., ⟨ϕ_q, ρ^n_x, s⟩) Δ_1 ϕ_ℓ(x_i, s_i)
+ 1/n^2∑_k,ℓ=1^q ∂^2_k ℓψ(⟨ϕ_1, ρ^n_x, s⟩, ..., ⟨ϕ_q, ρ^n_x, s⟩) ∇_1 ϕ_k(x_i, s_i) ·∇_1 ϕ_ℓ(x_i, s_i) ,
f((x_1, ..., x'_i, ..., x_n), (s_1, ..., 1-s_i, ..., s_n)) - f(x, s)
= ψ( 1/n∑_j=1^n ϕ_1(x_j, s_j) - 1/nϕ_1(x_i, s_i) + 1/nϕ_1(x'_i, 1-s_i), ... ) .
The identities (<ref>) and (<ref>) follow from the chain rule, and in the final identity (<ref>) we used the trick of removing the contribution of the i-th ion with coordinates (x_i, s_i) from the sum and adding its contribution with the new coordinates (x'_i, 1-s_i).
Using these formulas, we find
(L^n f)(x, s)
= σ^2/2n∑_i=1^n δ_s_i, 0∑_ℓ=1^q ∂_ℓψ(1/n∑_j=1^n ϕ_1(x_j, s_j), ...) Δ_1ϕ_ℓ(x_i, s_i)
+ σ^2/2n^2∑_i=1^n δ_s_i, 0∑_k,ℓ=1^q ∂^2_k ℓψ(1/n∑_j=1^n ϕ_1(x_j, s_j), ...) ∇_1ϕ_k(x_i, s_i) ·∇_1ϕ_ℓ(x_i, s_i)
+ r^+(w) ∑_i=1^n 1_B_ϵ(y)(x_i) δ_s_i, 0[ ψ(1/n∑_j=1^n ϕ_1(x_j, s_j) - 1/nϕ_1(x_i, s_i) + 1/nϕ_1(y, 1-s_i), ...) - ψ(1/n∑_j=1^n ϕ_1(x_j, s_j), ...) ]
+ r^-(w) ∑_i=1^n δ_s_i, 1∫_𝕏μ( dx') [ ψ(1/n∑_j=1^n ϕ_1(x_j, s_j) - 1/nϕ_1(x_i, s_i) + 1/nϕ_1(x', 1-s_i), ...) - ψ(1/n∑_j=1^n ϕ_1(x_j, s_j), ...) ] .
To express the generator in terms of the empirical measure, we use the properties
∑_s' ∈{0, 1}∫_𝕏 f(x', s') ρ^n_x, s( dx', s') = 1/n∑_i=1^n f(x_i, s_i) ,
which allows us to replace the outermost summations by the corresponding integrals, and
w = 1/⌊ a n ⌋∑_i=1^n δ_s_i, 1 = n/⌊ a n ⌋∫_𝕏ρ^n_x, s( dx', 1) ,
by which we shift the meaning of the placeholder w. We then obtain
(L^n f)(x, s)
= σ^2/2∑_s'∈{0,1}∫_𝕏ρ^n_x, s( dx', s') δ_s', 0∑_ℓ=1^q ∂_ℓψ(⟨ϕ_1, ρ^n_x, s⟩, ...) Δ_1ϕ_ℓ(x', s')
+ σ^2/2n∑_s'∈{0,1}∫_𝕏ρ^n_x, s( dx', s') δ_s', 0∑_k=1^q ∑_ℓ=1^q ∂^2_k ℓψ(⟨ϕ_1, ρ^n_x, s⟩, ...) ∇_1 ϕ_k(x', s') ·∇_1 ϕ_ℓ(x', s')
+ n ∑_s'∈{0,1}∫_𝕏ρ^n_x, s( dx', s') 1_B_ϵ(y)(x') δ_s', 0 r^+(w) [ ψ(⟨ϕ_1, ρ^n_x, s⟩ - 1/nϕ_1(x', s') + 1/nϕ_1(y, 1-s'), ...) - ψ(⟨ϕ_1, ρ^n_x, s⟩, ...) ]
+ n ∑_s'∈{0,1}∬_𝕏^2ρ^n_x, s( dx', s') δ_s', 1 μ( dx”) r^-(w) [ ψ(⟨ϕ_1, ρ^n_x, s⟩ - 1/nϕ_1(x', s') + 1/nϕ_1(x”, 1-s'), ...) - ψ(⟨ϕ_1, ρ^n_x, s⟩, ...) ]
and finally get the more compact expression (<ref>) after performing the summations over s'.
|
http://arxiv.org/abs/2307.01338v1
|
20230703202124
|
Hidden Symmetry in the Double Copy
|
[
"Adam Ball",
"Anna Bencke",
"Yaxi Chen",
"Anastasia Volovich"
] |
hep-th
|
[
"hep-th"
] |
Hidden Symmetry in the Double Copy
Adam Ball, Anna Bencke, Yaxi Chen, and Anastasia Volovich
Department of Physics, Brown University, Providence, RI 02912, USA
adam_ball@brown.edu, anna_bencke@brown.edu, yaxi_chen@brown.edu, anastasia_volovich@brown.edu
==================================
We show that the Killing tensor of the Kerr spacetime has an analogue in the √(Kerr) gauge theory solution related to it by the classical double copy. This hidden symmetry of √(Kerr) leads to an additional constant of motion for color-charged point particles moving in it, implying integrability of the equation of motion. These are the gauge theory counterparts to the Carter constant and the integrability of the geodesic equation in a Kerr background.
§ INTRODUCTION
The double copy relation between gravity and gauge theory was originally discovered in the context of perturbative scattering amplitudes <cit.>, but it was soon shown in <cit.> that a version of it also applies to certain classical solutions of gravity and gauge theory. This classical double copy has since been extended and clarified substantially, e.g. in <cit.>, but its fundamental underpinnings remain somewhat elusive and it still holds mysteries. One direction of generalization was described in <cit.>, where the authors established a version of the double copy for point particles moving in double-copy-related backgrounds. They expounded in particular on the Kerr solution to gravity and the so-called √( Kerr) solution to Yang-Mills theory. Here we build on their work, allowing for non-equatorial trajectories and translating the results about particle motion back to results about the classical double copy itself. We find that the hidden symmetry of the Kerr solution is mapped by the double copy relation to a hidden symmetry of the √( Kerr) solution.
In section <ref> we review the Kerr-Schild double copy. In section <ref> we review the results of <cit.>. In section <ref> we review the Carter constant and corresponding integrability of the Kerr solution. In section <ref> we present our results, namely the gauge analogue of the Carter constant, the reduction of particle motion in the √( Kerr) solution to quadratures, and the gauge analogue of the Kerr solution's Killing tensor. Finally in section <ref> we further contextualize our results and discuss prospects for generalization.
§ REVIEW OF KERR-SCHILD DOUBLE COPY
A surprisingly wide class of metrics can be written in Kerr-Schild form (see e.g. <cit.> for a review),
g_μν = g̅_μν + h_μν
= g̅_μν + φ k_μ k_ν,
where g̅_μν is the flat metric and the covector k_μ is null and geodesic with respect to it, satisfying
k_μg̅^μν∇̅_ν k_ρ = 0
with ∇̅_μ the covariant derivative of g̅_μν. This property fixes φ up to a constant. The inverse metric takes the simple form
g^μν = g̅^μν - φ k^μ k^ν,
where the indices on the right hand side have been raised with g̅^μν. Note that k_μ is null with respect to g_μν as well as g̅_μν, and its index can be raised equally well with either metric. One can also show that k_μ is geodesic with respect to g_μν.
The Kerr-Schild double copy <cit.> states that if h_μν = φ k_μ k_ν is a stationary Kerr-Schild perturbation, then A_μ^a = φ k_μc̃^a solves the Yang-Mills equations in the flat background g̅_μν for any constant color vector c̃^a. This relationship,
h_μν = φ k_μ k_ν ⟷ A_μ^a = φ k_μc̃^a,
is pithily summarized as
k_ν ⟷ c̃^a,
which states a sort of duality between kinematics and color <cit.>. Note that since the color behavior in A_μ^a is just a constant factor c̃^a, the Yang-Mills equations linearize and the gauge field is effectively abelian.
Our focus in this paper is on the Kerr metric <cit.>, which in Kerr-Schild form in Cartesian coordinates (t, x, y, z) is <cit.>
g_μν = g̅_μν + φ k_μ k_ν
with
φ = 2GM r^3/r^4 + a^2 z^2
and
k_μ = ( 1, rx+ay/r^2+a^2, ry-ax/r^2+a^2, z/r).
Here r is defined implicitly by
1 = x^2 + y^2/r^2 + a^2 + z^2/r^2.
The black hole's mass is M and its angular momentum is aM. The particular value of the mass will be irrelevant to us, so henceforth we set 2GM = 1. The corresponding gauge theory solution, which we refer to as the √( Kerr) solution, is
A_μ^a = φ k_μc̃^a.
From here on we switch to spheroidal coordinates, defined by
x = √(r^2 + a^2)sinθcosϕ
y = √(r^2 + a^2)sinθsinϕ
z = r cosθ,
in which the flat metric reads
ds^2 = -dt^2 + r^2 + a^2 cos^2θ/r^2 + a^2 dr^2 + (r^2 + a^2cos^2θ) dθ^2 + (a^2 + r^2)sin^2θ dϕ^2.
In these coordinates the √( Kerr) gauge field is
A_t^a = r c̃^a/(r^2 + a^2 cos^2θ) , A_r^a = r c̃^a/(r^2 + a^2) ,
A_ϕ^a = - r c̃^a/(r^2 + a^2 cos^2θ) a sin^2θ , A_θ^a = 0 .
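As a quick sanity check (our own illustration, not from the paper), one can verify symbolically that k_μ is null with respect to the flat metric, using only the implicit definition of r to eliminate x^2:

import sympy as sp

x, y, z, r, a = sp.symbols("x y z r a", real=True)

# k_mu in Cartesian Kerr-Schild form, flat metric with signature (-,+,+,+).
k = sp.Matrix([1,
               (r * x + a * y) / (r**2 + a**2),
               (r * y - a * x) / (r**2 + a**2),
               z / r])
eta = sp.diag(-1, 1, 1, 1)

norm = sp.expand((k.T * eta * k)[0])

# Impose the spheroidal constraint 1 = (x^2 + y^2)/(r^2 + a^2) + z^2/r^2
# by substituting for x^2.
x2 = (r**2 + a**2) * (1 - z**2 / r**2) - y**2
print(sp.simplify(norm.subs(x**2, x2)))   # -> 0, so k is null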
§ REVIEW OF GEODESIC DOUBLE COPY
In this section we review the double copy relation of test particles moving in double-copy-related backgrounds, as introduced in <cit.>. There are several standard Lagrangians that describe the motion of a relativistic point particle in a gravitational background. The one we find most convenient is
L_ grav = g_μνẋ^μẋ^ν,
where ẋ^μ = dx^μ/dλ with λ as our time parameter. The equation of motion is the geodesic equation,
0 = D/dλ ẋ^μ
= ẍ^μ + Γ^μ_νρẋ^νẋ^ρ,
which in particular implies that is affine. The momenta are
p_μ = ∂ L_ grav/∂ẋ^μ
= g_μνẋ^ν,
with corresponding Hamiltonian
H_ grav = p_μẋ^μ - L_ grav
= g^μν p_μ p_ν.
The Hamiltonian is conserved, and its value determines the mass via p^2 = -m^2. Note then that λ is related to proper time as τ = mλ.
Q_ξ = ∂ L_ grav/∂ẋ^μ ξ^μ
= p_μξ^μ.
The analogous Lagrangian for a relativistic particle with color charge moving in a gauge background is <cit.>
L_ gauge = g̅_μνẋ^μẋ^ν - iψ̇^†ψ + c^a A_μ^a ẋ^μ
where g̅_μν is the flat metric, we have set the gauge coupling to unity, ψ is the particle's color vector which is valued in the fundamental representation, and we have defined
c^a ≡ψ^† T^a ψ
where {T^a} is an orthonormal basis for the adjoint representation. The equations of motion for ψ, ψ^† can be combined to give
ċ^a = f^abc c^b A_μ^c ẋ^μ.
The equation of motion for x^μ, sometimes called Wong's equation <cit.>, is
D̅/dλ ẋ^μ = c^a F^a,μ_νẋ^ν
where D̅/dλ is the covariant time derivative with respect to g̅_μν and F_μν^a is the non-abelian field strength.
This can be interpreted as a relativistic and non-abelian version of the Lorentz force law. The momenta are
p_μ = ∂ L_ gauge/∂ẋ^μ
= g̅_μνẋ^ν + c^a A_μ^a
and
p_ψ^† = ∂ L_ gauge/∂ψ̇^† = -iψ, p_ψ = ∂ L_ gauge/∂ψ̇ = 0.
The Hamiltonian is then
H_gauge = p_μẋ^μ + ψ̇^† p_ψ^† + p_ψψ̇ - L_gauge
= 1/2 g̅_μνẋ^μẋ^ν
= 1/2 g̅^μν (p_μ - c^a A_μ^a)(p_ν - c^b A_ν^b).
Once again the conserved Hamiltonian determines the mass, now as H = -m^2/2. If ξ^μ is a Killing vector of g̅_μν and its Lie derivative also annihilates the gauge field, ℒ_ξ A_μ^a = 0, then it implies a symmetry of the Lagrangian with Noether charge
Q_ξ = ∂L_gauge/∂ẋ^μ ξ^μ
= p_μξ^μ.
When we specialize to a stationary Kerr-Schild solution and its corresponding gauge solution, as in (<ref>), the conserved combination
C ≡ c^a c̃^a
plays the role of an effective abelian charge and the point particle momenta (<ref>) and (<ref>) can be written as
Gravity: p_μ = g̅_μνẋ^ν + φ k_μ k_νẋ^ν
Gauge: p_μ = g̅_μνẋ^ν + φ k_μc̃^a c^a.
It was observed in <cit.> that if the double copy relation (<ref>) is extended as
k_ν ⟷ c̃^a, ẋ^ν ⟷ c^a,
then the momenta p_μ of the two theories, and in particular the Killing charges Q_ξ = p_μξ^μ, are mapped to each other. Specializing further to the Kerr and √( Kerr) backgrounds described in section <ref>, the authors of <cit.> showed that for equatorial orbits the two conserved charges of energy and angular momentum are enough to determine the trajectory completely, giving a double copy relation between families of trajectories in the Kerr and √( Kerr) backgrounds. This suggests a deep relationship between the two point particle theories, although seemingly not a complete duality since the Hamiltonian expressions in terms of the momenta are not identical. However we will see that the correspondence goes beyond mere Killing vectors and equatorial orbits.
§ REVIEW OF CARTER CONSTANT AND INTEGRABILITY IN KERR
The energy and angular momentum are famously not the only conserved quantities for a point particle moving in a Kerr background. There is also the Carter constant, which we review in this section along with the corresponding integrability properties.
The Carter constant <cit.> is
k_ grav = p_θ^2 + a^2 m^2 cos^2θ + ( p_ϕ/sinθ + a p_t sinθ)^2.
Recalling that m^2 = -p^2, we see it is a homogeneous quadratic polynomial in momenta and therefore can be written as
k_ grav = K_μν p^μ p^ν
for some symmetric tensor K_μν. Let us investigate the properties of K_μν, given that k_ grav is conserved. We have
k̇_ grav = p^ρ∇_ρ( K_μν p^μ p^ν)
= p^ρ p^μ p^ν∇_ρ K_μν
= p^ρ p^μ p^ν∇_(ρ K_μν).
This must vanish for all p^μ, meaning that
∇_(ρ K_μν) = 0.
This is the defining equation for a Killing tensor. Usually one shows that a Killing tensor implies a conserved quantity, but here we found the converse to be more instructive. More generally one can ask about the conservation of any polynomial in momenta <cit.>, i.e.
0 = D/dλ ∑_i K^(i)_μ_1…μ_i p^μ_1… p^μ_i.
One finds that each of the symmetric tensors K^(i)_μ_1…μ_i must be a Killing tensor. In a general theory the tensors of different rank can mix, and one gets more complicated conservation conditions. We will see this in the √( Kerr) theory.
The Liouville-Arnold theorem states that a Hamiltonian system with n degrees of freedom and n independent Poisson-commuting conserved quantities is integrable. Our point particle has four degrees of freedom x^μ, and four independent conserved quantities which we can take as H, p_t, p_ϕ, k_ grav. They also Poisson-commute, as one can quickly check using {x^μ, p_ν} = δ^μ_ν. The resulting integrability is what underlies the well-known reduction to quadratures of the geodesic equation in Kerr, as reviewed for example in <cit.>.
§ HIDDEN SYMMETRY IN √( KERR)
The extended double copy relation (<ref>) maps p_μ in the Kerr background theory to p_μ in the √( Kerr) background theory. This suggests that the expression for the Carter constant, reinterpreted with p_μ as the √( Kerr) momentum, might be conserved in the √( Kerr) background theory. Indeed, conservation of
k_ gauge≡ p_θ^2 + a^2 m^2 cos^2θ + ( p_ϕ/sinθ + a p_t sinθ)^2
can be shown using eqs. (<ref>), (<ref>), and (<ref>). As in the Kerr case, this constitutes a fourth independent, Poisson-commuting constant of motion and leads to integrability for the x^μ degrees of freedom.[The gauge system technically has more than four degrees of freedom due to the color variables, but their effective abelian nature means that they can be analyzed separately from the x^μ equations. Said differently, we could just treat c^a A_μ^a as an abelian background and solve for the motion of an electric charge.] Consequently the equation of motion can be solved by quadratures, as we now show. Our derivation closely parallels that of <cit.> for Kerr.
The first step in reducing the equation of motion to quadratures is to write the momenta in terms of the four constants of motion p_t, p_ϕ, m, and k_ gauge (along with two independent signs ±_r and ±_θ). For p_t and p_ϕ this is tautological. Overall we find
p_μ dx^μ = p_t dt + (C r ±_r √(ℛ(r)))/(r^2 + a^2) dr ±_θ√(Θ(θ)) dθ + p_ϕ dϕ
where we have defined
ℛ(r) ≡[ (r^2 + a^2) p_t + a p_ϕ]^2 - (r^2 + a^2) (k_ gauge + m^2 r^2) - C r [ 2p_t r^2 - C r + 2a(a p_t + p_ϕ) ]
and
Θ(θ) ≡ k_ gauge - a^2 m^2 cos^2θ - ( p_ϕ/sinθ + a p_t sinθ)^2.
Just as in the Kerr case p_θ depends only on θ and there are substantial cancellations resulting in p_r depending only on r. The next step is to rewrite the momenta in terms of velocities, which gives
Σ dt/dλ = -p_t r^2 + C r - a^2 p_t cos^2θ
Σ dr/dλ = ±_r √(ℛ(r))
Σ dθ/dλ = ±_θ√(Θ(θ))
Σ dϕ/dλ = (C a r - a^2 p_ϕ)/(r^2 + a^2) + p_ϕ/sin^2θ
where we have defined
Σ≡ r^2 + a^2 cos^2θ.
The signs ±_r and ±_θ are taken to match those of dr/dλ and dθ/dλ. Next we note
1/(±_r √(ℛ(r))) dr/dλ = 1/Σ = 1/(±_θ√(Θ(θ))) dθ/dλ.
Integrating this gives a relation between r and θ. More specifically, if our initial position is (t_i, r_i, θ_i, ϕ_i) then the final coordinates r_f, θ_f will be related by
⨍_r_i^r_f dr/(±_r √(ℛ(r))) = ⨍_θ_i^θ_f dθ/(±_θ√(Θ(θ))).
The notation ⨍ indicates that the integrals are taken along the trajectory, possibly over multiple oscillations in r, θ. Note that the integrands are always positive, so the integrals grow monotonically as they proceed along the particle's trajectory. For t_f and ϕ_f we can use a similar trick. We write
t_f - t_i = ∫_t_i^t_f dt
= ∫_λ_i^λ_f (dt/dλ) dλ
= ∫_λ_i^λ_f ( -p_t r^2 + C r - a^2 p_t cos^2θ ) dλ/Σ
= ⨍_r_i^r_f ( -p_t r^2 + C r ) dr/(±_r √(ℛ(r))) + ⨍_θ_i^θ_f ( -a^2 p_t cos^2θ ) dθ/(±_θ√(Θ(θ))).
For ϕ we have
ϕ_f - ϕ_i = ∫_ϕ_i^ϕ_f dϕ
= ∫_λ_i^λ_f (dϕ/dλ) dλ
= ∫_λ_i^λ_f ( (C a r - a^2 p_ϕ)/(r^2 + a^2) + p_ϕ/sin^2θ ) dλ/Σ
= ⨍_r_i^r_f ( (C a r - a^2 p_ϕ)/(r^2 + a^2) ) dr/(±_r √(ℛ(r))) + ⨍_θ_i^θ_f ( p_ϕ/sin^2θ ) dθ/(±_θ√(Θ(θ))).
In summary, given the initial position (t_i, r_i, θ_i, ϕ_i), the final position at time t_f is completely determined by the definite integrals above. The conceptual steps in this derivation were identical to those for Kerr.
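The quadratures above are straightforward to evaluate numerically. The sketch below is illustrative only: the constants of motion are hypothetical values chosen so that ℛ(r) and Θ(θ) remain positive on the leg considered, and it treats a single leg with no turning points; θ_f is fixed by matching the r- and θ-integrals, after which Δt and Δϕ follow. A leg containing turning points would be split at the zeros of ℛ and Θ, with the signs ±_r, ±_θ flipping at each turn.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical constants of motion (2GM = 1 throughout)
a, C, m = 0.7, 0.2, 1.0                   # spin, effective charge, mass
pt, pphi, k_gauge = -1.1, 2.0, 4.0        # p_t, p_phi, Carter-like constant

def R_of(r):
    return (((r*r + a*a)*pt + a*pphi)**2
            - (r*r + a*a)*(k_gauge + m*m*r*r)
            - C*r*(2*pt*r*r - C*r + 2*a*(a*pt + pphi)))

def Theta_of(th):
    return (k_gauge - a*a*m*m*np.cos(th)**2
            - (pphi/np.sin(th) + a*pt*np.sin(th))**2)

# One outgoing leg with r and theta both increasing (both signs +1)
ri, rf, thi = 10.0, 15.0, 1.0
Ir = quad(lambda r: 1/np.sqrt(R_of(r)), ri, rf)[0]

# theta_f is determined by equating the r- and theta-integrals
thf = brentq(lambda th: quad(lambda t: 1/np.sqrt(Theta_of(t)), thi, th)[0] - Ir,
             thi + 1e-12, np.pi/2)

# elapsed coordinate time and azimuth along the leg
dt = (quad(lambda r: (-pt*r*r + C*r)/np.sqrt(R_of(r)), ri, rf)[0]
      + quad(lambda t: -a*a*pt*np.cos(t)**2/np.sqrt(Theta_of(t)), thi, thf)[0])
dphi = (quad(lambda r: (C*a*r - a*a*pphi)/((r*r + a*a)*np.sqrt(R_of(r))), ri, rf)[0]
        + quad(lambda t: pphi/(np.sin(t)**2*np.sqrt(Theta_of(t))), thi, thf)[0])

print(f"theta_f = {thf:.4f}, Delta t = {dt:.4f}, Delta phi = {dphi:.4f}")
```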
As in the gravitational case, we can view k_gauge as a quadratic polynomial in momenta. However now m^2 = -ẋ^2 ≠ -p^2, so it will not be homogeneous. Let us explore the consequences of conservation for a general quadratic polynomial in momentum,
Q = K_μν^(2) p^μ p^ν + K_μ^(1) p^μ + K^(0).
The equation of motion in terms of momenta is
D̅/dλ p^μ = D̅/dλ ( ẋ^μ + c^a A^a,μ )
= c^a ẋ^ν∇̅^μ A_ν^a.
Using this, the time derivatives of the individual terms in Q are
D̅/dλ K^(0) = ẋ^μ∇̅_μ K^(0)
D̅/dλ ( K_μ^(1) p^μ ) = ẋ^μẋ^ν∇̅_ν K_μ^(1) + c^a ẋ^μℒ_K^(1) A_μ^a
D̅/dλ ( K_μν^(2) p^μ p^ν ) = ẋ^ρẋ^μẋ^ν∇̅_ρ K_μν^(2) + 2c^a ẋ^μẋ^ρ (A^a,ν∇̅_ρ K_μν^(2) + K_μν^(2)∇̅^ν A_ρ^a)
+ c^a c^b A^a,μẋ^ρ (A^b,ν∇̅_ρ K_μν^(2) + 2K_μν^(2)∇̅^ν A_ρ^b).
Here ℒ_K^(1) A_μ^a = K^(1),ν ∂_ν A_μ^a + (∂_μ K^(1),ν) A_ν^a is the Lie derivative with respect to the vector field K^(1),ν. If Q is to be conserved then the sum of these must vanish. The vanishing of the 𝒪(ẋ^3) terms implies that K_μν^(2) is a Killing tensor,
0 = ∇̅_(ρ K_μν)^(2).
This was also necessary for conservation in the gravitational case, but here it is not sufficient. We also have the vanishing of the 𝒪(ẋ^2) terms, which requires
0 = ∇̅_(μ K_ν)^(1) + 2c^a (A^a,ρ∇̅_(μ K_ν)ρ^(2) + K_ρ(μ^(2)∇̅^ρ A_ν)^a).
If the gauge field vanished then we would have the usual Killing vector condition on K_μ^(1), but here we have two additional terms involving K_μν^(2). Finally the vanishing of the 𝒪(ẋ) terms requires
0 = ∇̅_μ K^(0) + c^a ℒ_K^(1) A_μ^a + c^a c^b A^a,ρ (A^b,ν∇̅_μ K_ρν^(2) + 2K_ρν^(2)∇̅^ν A_μ^b)
which involves all three of K_μν^(2), K_μ^(1), and K^(0). These equations describe the gauge analogue of a rank two Killing tensor. For k_ gauge we have
k_ gauge = K_μν^(2) p^μ p^ν + K_μ^(1) p^μ + K^(0)
with
K^(2)_μν = (∂_θ)_μ (∂_θ)_ν - a^2 g̅_μνcos^2θ + ( ∂_ϕ/sinθ + asinθ ∂_t )_μ( ∂_ϕ/sinθ + asinθ ∂_t )_ν
K^(1)_μ = 2a^2 c^a A_μ^a cos^2θ
K^(0) = -a^2 (c^a A_μ^a) (c^b A^b,μ) cos^2θ = 0
where (∂_μ)^ν = δ^ν_μ are the coordinate vector field components. One can check that eqs. (<ref>)-(<ref>) are satisfied. This perspective is especially interesting because it emphasizes special properties of the √( Kerr) background itself, as opposed to merely the particle moving on the background. It suggests that the double copy respects hidden symmetries.
§ DISCUSSION
After reviewing the Kerr-Schild double copy and its extension to include point particles moving on the respective backgrounds, we established the existence of a single copy analogue of the Carter constant for the √( Kerr) solution. It led to integrability of the equations of motion of a point particle, and we showed how to reduce them to quadratures. We then showed how to reinterpret this constant of motion as a geometric statement about the background itself, showing that, at least for the Kerr spacetime, the double copy preserves hidden symmetries. One may wonder to what extent the more powerful Weyl double copy <cit.> makes this manifest. After all, the Weyl double copy is built from the rank-two Killing spinor possessed by any Petrov type D spacetime <cit.>. However, this Killing spinor is not necessarily associated with any Killing tensor, but rather a conformal Killing tensor. Closely related to this, it leads to a constant of motion only for massless particles, namely the Penrose-Walker constant. The Kerr spacetime's Killing tensor is related to its Killing spinor, but somewhat nontrivially; it is not simply a special case of the associated conformal Killing tensor. Therefore, while it is not shocking, neither is it obvious to us why the double copy should respect the hidden symmetry of Kerr. Our results do seem related to those of <cit.> though, where it was found that the single copy gauge field corresponding to a Kerr-NUT-(A)dS spacetime <cit.> has a field strength that is “aligned”<cit.> with (but not proportional to) the spacetime's principal tensor <cit.>. The principal tensor controls the Killing tensors of Kerr-NUT-(A)dS, as reviewed in <cit.>, so their results provide further evidence that the double copy somehow respects hidden symmetries. In any case, it seems likely that our results generalize beyond Kerr, perhaps even to the entire Kerr-NUT-(A)dS family. It will be very interesting to see how far they extend.
We are grateful to Tucker Manton and Marcus Spradlin for useful discussions, and to Cynthia Keeler for comments on the draft. This work was supported in part by the US Department of Energy under contract DE-SC0010010 Task F and by Simons Investigator Award #376208. Y. Chen was also supported by a Karen T. Romer Undergraduate Teaching and Research Award.
|
http://arxiv.org/abs/2307.03303v1
|
20230706213217
|
Demixing in binary mixtures with differential diffusivity at high density
|
[
"Erin McCarthy",
"Ojan Damavandi",
"Raj Kumar Manna",
"M. Lisa Manning"
] |
cond-mat.soft
|
[
"cond-mat.soft"
] |
Department of Physics and BioInspired Institute,
Syracuse University, Syracuse, New York, 13244
Department of Physics and BioInspired Institute,
Syracuse University, Syracuse, New York, 13244
rajkmphys@gmail.com
Department of Physics and BioInspired Institute,
Syracuse University, Syracuse, New York, 13244
mmanning@syr.edu
Department of Physics and BioInspired Institute,
Syracuse University, Syracuse, New York, 13244
Spontaneous phase separation, or demixing, is important in biological phenomena such as cell sorting. In particle-based models, an open question is whether differences in diffusivity can drive such demixing. While differential-diffusivity-induced phase separation occurs in mixtures with a packing fraction up to 0.7 <cit.>, here we investigate whether demixing persists at even higher densities relevant for cells. For particle packing fractions between 0.7 and 1.0 the system demixes, but at packing fractions above unity the system remains mixed, exposing re-entrant behavior in the phase diagram. We also find that a confluent Voronoi model for tissues does not phase separate, consistent with the highest-density particle-based simulations.
Demixing in binary mixtures with differential diffusivity at high density
M. Lisa Manning
August 1, 2023
=========================================================================
Spontaneous sorting is a common emergent behavior in particle packings composed of different species. While surface tension in fluid-like mixtures composed of molecules with different adhesion is a canonical mechanism that drives phase separation, there are many particulate systems that phase separate due to other mechanisms.
One example is sorting in vibrated granular materials, where sorting occurs when particles of the same size differ in density, as well as when particles of the same density differ in size <cit.>. Sorting also occurs in thermal systems; for example, an entropy-based depletion force causes phase separation for large colloidal particles suspended in a solution of smaller particles <cit.>.
In biological materials, cell sorting and phase separation contribute to compartmentalization and patterning of groups of cells, especially during development <cit.>. At the subcellular level, the ability of particles to undergo robust, spontaneous segregation allows for organization within the cell membrane <cit.>, protein compartmentalization <cit.>, and the formation of non-membrane bound organelles <cit.>.
Given the complexity of biological materials, there are multiple physical mechanisms that could be driving these sorting behaviors, including standard mechanisms of surface tension and differential adhesion. However, even in systems without adhesion, materials that are active can also spontaneously phase separate. One such phenomenon is called Motility Induced Phase Separation (MIPS) <cit.>, which occurs when persistently moving particles drive a feedback loop between velocity and density that leads to a droplet-forming instability. A related mechanism can also help drive phase separation in mixtures of self-propelled particles at intermediate and high densities <cit.>.
Importantly, Weber and coworkers <cit.> have demonstrated that the time invariance introduced by persistent self-propulsion is not required for phase separation; ordinary diffusion is enough, provided that diffusivity is different for two different species. Specifically, they studied mixtures of Brownian repulsive particles where the two different particle species differed only in their diffusivity. While such a system is active because one of the species is not equilibrated with the thermal bath, particle motion is not persistent. In such a system, the authors find that at low and intermediate densities (up to a packing fraction of 0.7) differential diffusivity drives phase separation via nucleation and coarsening into cold droplets surrounded by a hot gas.
Similar results have been observed in a model of active and passive dumbbells, up to a packing fraction of 1.0 <cit.>. In addition, recent analytical work has shown that in the dilute limit, hard spheres with differential diffusivity exhibit a positive effective surface tension at the interface and undergo binodal and spinodal decomposition <cit.>. This result is only approximate when a solid phase exists, and third-order corrections change the phase diagram significantly, suggesting that particle packings at intermediate and high densities could exhibit emergent behaviors not captured by the analytic theory.
Understanding the role of differential diffusivity at higher densities is important in real biological systems. Recent work has demonstrated that enzymes in the presence of their substrates behave as active particles that are not persistent and have an enhanced diffusivity <cit.>, and these processes occur in a dense intercellular environment. In addition, some cell types exhibit differential diffusivity compared to their neighbors inside aggregates in vitro <cit.>. Given these results, an open question is whether differential diffusivity can drive segregation at high densities, relevant for the crowded environment inside the cell, inside multicellular aggregates, or in dense active colloidal suspensions.
It is known that homogeneous packings of active particles change their behavior dramatically at high densities, as glassy dynamics emerge. Previous work has shown that homogeneous packings of soft disks with self-propulsion reach a new, glassy state at high densities <cit.>. Such packings exhibit complex dynamics, including avalanches <cit.> and intermittent plasticity <cit.>.
Therefore, the goal of this Letter is to study whether the glassy dynamics that emerge at high densities alter or destroy the differential-diffusivity-induced phase separation seen in low- and intermediate- density particulate systems. To address this question, we numerically investigate a repulsive two-species particulate system at intermediate and high densities and quantify the phase separation over a wide range of model parameters. For completeness, we also simulate and analyze dynamics in a model for confluent systems, where there are no gaps or overlaps between cells/particles.
We consider a binary particle mixture where the particle types differ only in their diffusivity. The diffusion constants for “cold" and “hot" particles are D_cold and D_hot respectively where D_cold≤ D_hot and the ratio of their diffusion constants is defined as D=D_cold/D_hot. The particles interact with soft repulsive Hertzian potential, E=k(1-r_ij/2R)^5/2 for r_ij<2R, here r_ij=|r_ij|=|r_i-r_j| is the distance between two particles and R and k are the radius and stiffness of the particles. The particle dynamics is governed by over-damped Langevin equations of motion (see SM for details). We initialize a 50:50 mixture of cold and hot particles distributed uniformly in a box of length L and evolve the system with time. We investigate the system with different D and packing fraction, ϕ=Nπ R^2/L^2, where N is the total number of particles in the system. Unless otherwise noted, we simulate N=1000 particles with D_hot = 5.0, as we have verified (SM Fig S2) that value of D_hot is sufficiently high to drive diffusive particle behavior at the highest packing fractions studied. Results are reported in natural units of length R and time τ= R^2/D_hot. To complement our results at higher packing fractions, we also study a confluent Voronoi model <cit.> where there are no gaps between particles (see SM for details).
We first confirm that for sufficiently small values of D our model shows large-scale demixing at low packing fractions, where cold particles form a large cluster, surrounded by gas-like hot particles as seen in Ref. <cit.>. We quantify the amount of demixing using the demixing parameter, DP=⟨ DP_i ⟩ =⟨ 2(N_s/N_t-0.5)⟩, where N_s and N_t are respectively the number of homotypic neighbors and the total number of neighbors of particle i. For D<1.0, DP evolves to reach a steady-state plateau, indicating that demixed configurations are favorable in the system, as shown in Fig. <ref>(a). As discussed in other work <cit.>, it is possible for the demixing parameter to reach a small, but non-zero steady state value if a system partially demixes on small scales. This microscopic demixing is observed for D=0.1 at ϕ=0.6 (see Fig. <ref>a).
Next, we study how the demixing of particles changes with the packing fraction, ϕ for a fixed D. For small values of D, there is no large-scale demixing at very low packing fraction (Fig. <ref>b and Fig. <ref>a), but systems exhibit macroscopic large-scale demixing at intermediate values of ϕ (Fig. <ref>b-c). However, large-scale demixing becomes less favorable as the packing fraction increases and there is less free space between particles. As shown in Fig. <ref>b and Fig. <ref>d, the large-scale demixing of particles does not occur at very high packing fractions (ϕ∼ 1.1-1.2). We confirm the results of no demixing at high densities with a Voronoi model of a binary mixture of particles (see SM for details) where there is no gap between particles (Fig. <ref>e). This validates our hypothesis that free space between particles is necessary for large-scale demixed configurations.
How does the demixing of the particles vary across ϕ and D parameter space? To answer this, we plot the steady state demixing parameter, averaged over 10 ensembles as a function of ϕ for different values of D in Fig. <ref>f. For all values of D, differential diffusivity of particles cannot drive large-scale demixing at the highest packing fraction ϕ=1.2.
For certain values of D, the system does not demix at either very low or very high values of ϕ, but exhibits large-scale demixing at intermediate values of ϕ. Therefore we observe re-entrant behavior in the ϕ direction.
For the system sizes considered here, large-scale demixing is sharply defined by a DP value greater than 0.5 (see SM Fig S1(a)). Using this threshold DP value, we construct a phase diagram (Fig. <ref>g) in ϕ and D parameter space that marks the regions of large-scale demixing states (open circles). The solid pink line separates the region of large-scale demixing from the region with no large-scale demixing at D_hot=5.0. We also investigate the impact of overall system temperature on the phase diagram by varying the diffusivity of hot particles, D_hot. The qualitative features of the phase diagram remain the same for different values of D_hot (dashed and dotted pink lines in Fig. <ref>g, see SM for full phase diagrams). Additionally, the results of the confluent Voronoi model (where there is no free space between particles) are indicated by the label ‘CON’, which shows no demixing at any value of D.
To further understand the demixing behavior of the particle mixtures, for demixed states with D ≤ 0.1 we construct a binodal based upon the concentration c of cold particles inside (c_in) and outside (c_out) of the condensed cluster, where the concentration is extracted from a cumulative density distribution (see SM for details). As shown in Fig. <ref>a, the two concentration profiles for (c_in) and (c_out) would meet at the center above ϕ=1.1, indicating that no demixing occurs and there is one concentration throughout the packing. Therefore, for D ≤ 0.1, there is a critical point in the system above ϕ=1.1 (at a system size of N=1000) beyond which demixing is no longer favorable.
We expect that this re-entrant phase transition is related to the entropy of hot and cold particles, but directly computing entropy production in very dense systems is challenging. Instead, we were able to quantify an effective temperature <cit.> of cold particles, T_eff^c, extracted from the mean-squared displacement (see SM for details), which appears to be relevant for the phase behavior across the entire range of packing fractions studied. At low ϕ, effective temperatures are higher than the input temperature due to interactions with the hot particles, while at high ϕ the effective temperatures are lower due to caging effects. The packing fraction at which T_eff^c becomes maximum defines the onset of the demixed state (Fig. <ref>b). The effective temperature also drops to a very small value at the re-entrant transition where the system re-mixes and where neighbor cages significantly restrict particle motion.
In summary, at very high densities differential diffusivity does not drive demixing in particle-based systems. Mixtures of soft particles with different diffusivities exhibit re-entrant behavior, where phase separation is not observed at low densities nor very high densities. Similarly, a Voronoi model for a confluent system where there are no gaps between particles never demixes due to differential diffusivity. These results indicate that free space between particles is necessary for diffusivity-based demixing.
Given that we only see the re-entrant behavior for packing fractions above unity, one might wonder whether such high packing fractions are relevant or reasonable. They are certainly relevant; best-fit particle-based models for dense biological tissues are often in a regime with a packing fraction greater than unity <cit.>. In addition, the mechanical and dynamical behavior of soft-sphere particle packings is reasonably robust up to packing fractions of at least 1.2-1.3, before next-nearest neighbor interactions become important and lead to new minimum-energy structures <cit.>.
Previous analytic work <cit.> on systems with differential diffusivity predicts demixing as the density increases and does not predict re-entrant behavior, but the existing theory does not include many-body corrections. It would therefore be interesting to investigate whether including third- or higher-order interactions, even perturbatively, would suggest a suppression of demixing.
In addition, our observation that free volume is required for sorting suggests a quasi-thermodynamic picture for the observed re-entrant behavior. At intermediate densities, sorting of the low-diffusivity particles into a compact phase gives high-diffusivity particles access to more configurations, which may increase the total entropy. At high enough densities, however, the lack of free volume prevents the high diffusivity particles from accessing additional configurations.
A challenge to testing this hypothesis directly is that it is difficult to define analogues of thermodynamic quantities in a system where particles at two different temperatures do not equilibrate. While the kinetic pressure is a valid observable in molecular dynamics simulations and swim pressure is useful for thermodynamic-like descriptions of phase separation in self-propelled particle systems <cit.>, in this Brownian system we were unable to define a pressure-like variable that could capture phase behavior. We also found that standard methods for calculating entropy production in glassy materials <cit.> failed at the highest densities, though we were able to compute a related intensive effective-temperature variable that is informative. Future work could focus on identifying other metrics for the phase transition.
A potentially related observation is that at higher densities, even in regions of phase space that do not macroscopically demix, we observe local demixing over a length scale of a few particle or cell diameters. Such “micro-demixing” is quite robust for all values of D. A similar phenomenon was recently observed in a confluent cell model composed of two different cell types that differed in their preferred cell shape <cit.>. In that case, the origin was demonstrated to be purely kinetic; differential energy barriers to cell rearrangement exist for cells of one type to enter an island of another type, even though the states themselves are energetically equivalent. It would be interesting to consider whether differential diffusivity similarly generates asymmetric rates of rearrangement at high densities.
This work has immediate implications for biological systems. Most obviously, different cell types in confluent or nearly confluent biological tissues have been observed to have different diffusivities. An open question is whether such differential diffusivity could be responsible for cell sorting seen in such experiments. Our work demonstrates the answer is unequivocally no – researchers must search for other differences between the cell types, such as heterotypic interfacial tensions <cit.>, to drive sorting.
Another potential application is the collective behavior of enzymes in dense cellular environments. It has recently been shown that enzymes in the presence of their substrate diffuse as Brownian particles at a higher effective temperature than their surroundings <cit.>. Given the extremely crowded environment inside cells <cit.>, as well as the increased concentration of active enzymes in liquid-liquid phase separation <cit.> it may be interesting to investigate whether differential diffusivity could help segregate enzymes at intermediate densities, and whether the re-mixing we observe at high enough densities could also occur for collections of enzymes inside cells.
Taken together, our results highlight that active sorting behaviors at high densities do not extrapolate in a simple way from observations at intermediate densities, and suggest that investigations of the high-density limit in other active matter systems may also uncover non-trivial emergent behavior.
Acknowledgements The authors acknowledge support from NSF-DMR-1951921 (EM and MLM), NIH R01HD099031 (RKM and MLM), and Simons Foundation 454947 (OKD and MLM).
§ SUPPLEMENTARY MATERIAL
§ DESCRIPTION OF PARTICLE-BASED SIMULATIONS
We model a binary mixture of soft spheres of uniform size and shape in a periodic box. The two particle types differ only by their diffusivity, where N_hot+N_cold=N=1000. Hot particles have a diffusion constant of D_hot, and cold particles have a diffusion constant of D_cold. Particle-particle interactions are dictated by a Hertzian energy potential.
E =
k δ^5/2    if r_ij < 2R
0          otherwise,
where δ = 1 - r_ij/2R (for r_ij < 2R).
where k represents particle stiffness, R is the radius of a single particle, and 𝐫_ij=𝐫_i-𝐫_j. The force on particle i due to particle j, 𝐅_ij, is found by
𝐅_ij = -∂E_ij/∂𝐫_ij,
such that 𝐅_ij=0 when particle overlap is equal to 0, and 𝐅_ij becomes non-zero when δ>0 because |𝐫_ij|<2R. The dynamics of the packing are determined by the sum of these short range interactions, and a translational noise term (η_i), described in detail below:
d𝐫_i/dt=μ∑_j≠ i𝐅_ij+√(2D_type)η_i,
D_type=D_hot or D_cold.
Here η_i is a numerical approximation to Gaussian white noise with an average of 0 and a standard deviation of 1, where ⟨η _iα(t)η _jβ(t')⟩ = δ_ijδ_αβδ(t-t'), which means that the noise on different particles i and j are uncorrelated with one another and uncorrelated in time, so that the noise is memoryless. The strength of the noise is set by the diffusivity D_type. In practice, this means that in our numerical simulation at each time step of unit dt, the position is incremented by a Gaussian random step drawn from a distribution with standard deviation √(2dtD_type).
The particle stiffness k is set to a value of 500 and the time step dt is set to a value of 0.001 to ensure that the numerical integration scheme is stable and does not depend on our choice of dt. μ is the mobility or inverse drag coefficient, which is set to 1. The natural time unit, τ, is equal to R^2/D_hot. The value of D_hot, which determines the overall level of activity in the system, is set to a value of 5.0, unless otherwise noted, such that the system remains fluid-like even at high densities and the strength of the activity does not become too large for our time step. However, we also perform a sweep in D_hot to investigate changes to the phase-diagram when the overall amount of energy varies. The other two parameters we vary are the packing fraction, represented by ϕ, and the parameter D, representing the ratio between the diffusivities of the two particle types:
D=D_cold/D_hot.
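A minimal NumPy sketch of these dynamics (an illustration, not the authors' production code) is given below. It uses the Euler-Maruyama step described above, a brute-force O(N^2) force evaluation with the minimum-image convention, and a 50:50 hot/cold split; the packing fraction, D ratio and run length are arbitrary examples, and a production run would use a neighbor list.

```python
import numpy as np

rng = np.random.default_rng(1)

# parameters from the text (k, dt, mu); phi and the D ratio are example values
N, R, k, dt, mu = 1000, 1.0, 500.0, 1e-3, 1.0
phi, D_hot, D_ratio = 0.9, 5.0, 0.01
L = np.sqrt(N*np.pi*R**2/phi)                 # box size from packing fraction
D = np.where(np.arange(N) < N//2, D_hot, D_hot*D_ratio)   # 50:50 hot/cold

pos = rng.uniform(0.0, L, size=(N, 2))

def forces(pos):
    """Pairwise repulsive Hertzian forces with periodic boundaries (O(N^2))."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= L*np.round(d/L)                      # minimum-image convention
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)               # no self-interaction
    overlap = np.clip(1 - r/(2*R), 0.0, None)
    # |F| = -dE/dr = (5k/4R) * overlap^{3/2}, directed along r_ij
    mag = (5*k/(4*R))*overlap**1.5/r
    return (mag[:, :, None]*d).sum(axis=1)

for step in range(2000):                      # short demonstration run
    noise = rng.standard_normal((N, 2))*np.sqrt(2*D*dt)[:, None]
    pos = (pos + dt*mu*forces(pos) + noise) % L
```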
§ DESCRIPTION OF VORONOI SIMULATIONS
To further understand activity-based phase-separation at high densities, we model a confluent system where again two particle types differ only by their diffusion constants. We use a Voronoi model <cit.> with parameters identical to those of the particle-based model, aside from the energy functional of the packing. Particle dynamics are determined by minimizing the energy function E = E(𝐫_i)
E=∑_i=1^N[K_A(A(𝐫_𝐢)-A_0)^2+K_P(P(𝐫_𝐢)-P_0)^2].
The area term represents volume incompressibility, and the perimeter term results from contractility of the cytoskeleton, as well as a cell-membrane tension due to cell-cell adhesion. p_0 is a non-dimensionalized shape index equal to P_0/√(A_0). Contrasting the entirely localized force interactions of the particle-based model, forces in the confluent system are non-local and non-additive, such that 𝐅_i=-∇_iE. Dynamics of the system are determined by
d𝐫_i/dt=μ𝐅_i+√(2D_type)η_i,
where μ, the noise term and D_type are the same as described for the particulate system.
§ ANALYSIS OF SIMULATION DATA
The Demixing Parameter (DP) is used to quantify the amount of demixing in a packing. In the limit of large system sizes, when cells are completely sorted by type, we expect DP to give a value of 1; in a completely mixed system, we expect a value of 0.
DP=⟨ DP_i⟩ =⟨ 2(N_s/N_t-1/2)⟩,
where N_s is the number of homotypic neighbors of particle i, and N_t is the total number of neighbors of particle i. When DP gives a value of 0.5 or above, we consider the system to have undergone large-scale demixing. We determine the DP threshold beyond which the system has formed a droplet or stripe by comparing DP values to the difference in slopes found using the Cumulative Density Function, described below; these values can be seen plotted in Fig. <ref>a.
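A sketch of how DP can be computed from a configuration is shown below (illustrative only; neighbors are taken from a Delaunay triangulation, one common choice, and periodic boundaries are ignored for brevity).

```python
import numpy as np
from scipy.spatial import Delaunay

def demixing_parameter(pos, types):
    """Mean of 2(N_s/N_t - 1/2) over particles, Delaunay neighbors."""
    tri = Delaunay(pos)
    neighbors = [set() for _ in range(len(pos))]
    for simplex in tri.simplices:
        for i in simplex:
            neighbors[i].update(int(j) for j in simplex if j != i)
    dp_i = []
    for i, nbrs in enumerate(neighbors):
        same = sum(types[j] == types[i] for j in nbrs)   # homotypic neighbors N_s
        dp_i.append(2.0*(same/len(nbrs) - 0.5))          # N_t = len(nbrs)
    return float(np.mean(dp_i))
```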
We use a Cumulative Density Function (CDF) to characterize the change in density between the solid and gas phases, as well as to determine the radius of a formed droplet. This function sums the number of particles within small annular bins of width dr moving out from the center of mass of the cold particles. In a uniform system, the numerical density, n(r), is expected to be a constant value. In our system, we expect it to be one constant value inside the cluster and another outside. The concentration of particles, c(r), has the same expectations, as it is simply the numerical density scaled with the particle area. In our system, the area of a particle is equal to 1; therefore, simply counting up the particles and taking the derivative can give us a concentration directly. The CDF is calculated as follows:
CDF(r)=∑_bins≤r N_bin/A_bin=∫_0^r c(r') dr'.
Where N_bin is the number of particles in a bin, and A_bin is the area of the bin. Therefore, after calculating this function from the center of mass to the edge of the box, taking a derivative will output the concentration at that part of the box (slope=M_1 for the inner section, and slope=M_2 for the outer section). From this, we can find c_in, the concentration inside the cluster, and c_out, the concentration outside the cluster, in systems where a droplet forms. These values can then be used to construct a binodal, as shown in Figure 3 in the main text. Additionally, by finding the value of r at which the slope of the CDF changes, we can find the radius of a formed droplet. If there is a significant difference between M_1 and M_2, this indicates that a droplet has formed. As shown in Fig. <ref>(a), for our system size, M_1-M_2 has a sharp bimodal distribution, where it is less than 0.1 if a droplet has not formed and is greater than 0.5 if a droplet is formed. Moreover, a value of DP = 0.5 provides a sharp threshold for this jump, and therefore we use DP to quantify the phase transition. Fig. <ref>(b-i) show snapshots and CDF plots for different values of ϕ.
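One possible implementation of this construction (a sketch; it assumes positions within a single periodic image and takes the particle area as unity, as above) is:

```python
import numpy as np

def cdf_profile(pos, cold_mask, dr=1.0):
    """Cumulative density about the cold center of mass; its local slope is c(r)."""
    com = pos[cold_mask].mean(axis=0)
    r = np.linalg.norm(pos - com, axis=1)
    edges = np.arange(0.0, r.max() + dr, dr)
    counts, _ = np.histogram(r, bins=edges)
    area = np.pi*(edges[1:]**2 - edges[:-1]**2)   # annular bin areas
    cdf = np.cumsum(counts/area)                  # running sum of N_bin/A_bin
    c = np.gradient(cdf, edges[1:])               # concentration estimate c(r)
    return edges[1:], cdf, c
```

Fitting straight lines to the inner and outer portions of the returned CDF gives M_1 and M_2, and the radius at which the slope changes locates the droplet edge.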
Mean squared displacement (MSD) is another metric used to quantify the behavior of the system. MSD is a measure of how far a particle travels on average in some given amount of time, called Δ T. Therefore we find
δ r_t^2(Δ T) = (r_x(t + Δ T) - r_x(t))^2 + (r_y(t + Δ T) - r_y(t))^2,
where r_x is the particle's position in the x direction and r_y is the y position. This value is then averaged over many t values throughout the simulation such that,
MSD(Δ T) = 1/N_t∑_tδ r_t^2(Δ T).
MSD is a useful measure of how much particles are moving around in a global sense. Particularly, we use MSD to ensure that, as we approach high densities and particles become caged by their neighbors, the system can still explore different configurations. For particles whose motion is diffusive, MSD scales linearly with time. Therefore, on a log-log plot of MSD vs time, diffusive particles will give a line with a slope of 1. For our system, if MSD has a slope close to unity, that indicates fluid-like behavior at long times. A slope of 1/2 indicates that particle motion is sub-diffusive, meaning particles must be, to some degree, caged by their neighbors.
For active systems that become so crowded that particles cannot change neighbors at all, MSD is no longer an effective tool because there can be a mode where the entire box is translating so that particles move without changing neighbors. In these cases, we use Mean Squared Distance (MSD_ist), a similar measurement that removes those translations by considering the distances between a particle and the particle that was its closest neighbor at t=0. We have the same expectations of the slope when this quantity is plotted against time logarithmically.
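A time-averaged MSD can be computed as below (a sketch; it assumes unwrapped trajectories of shape (T, N, 2)). MSD_ist follows by first replacing each particle's position with its separation from the particle that was its nearest neighbor at t=0.

```python
import numpy as np

def msd(traj, lags):
    """Time-averaged MSD; traj has shape (T, N, 2) with unwrapped coordinates."""
    out = []
    for lag in lags:
        disp = traj[lag:] - traj[:-lag]                 # displacements over lag
        out.append(np.mean(np.sum(disp**2, axis=-1)))   # average over t and particles
    return np.array(out)

# on a log-log plot, slope ~1 indicates diffusive motion, slope ~1/2 caging
```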
Log-log plots of both MSD and MSD_ist are shown in Fig <ref>. When D_hot= 10.0, both MSD and MSD_ist have a slope near unity for both hot and cold particles. When D_hot= 5.0, MSD has a slope near 1 for both hot and cold particles. However, for the cold particles at a ϕ value of 1.2, MSD_ist does have a slope near 1/2 at early time points, although it eventually reaches a slope near 1. This indicates that at very high density, cold particles are trapped by their neighbors on a short time scale, but given enough time they are able to change neighbors and display diffusive behavior. When D_hot= 2.0, MSD_ist has a slope near 1/2 for both hot and cold particles at ϕ = 1.2.
This indicates that the system becomes arrested and solid-like at high density when D_hot = 2.0. Therefore, a D_hot value of at least 5.0 is needed for the system to remain fluid-like on long timescales.
The effective temperatures T_hot^eff and T_cold^eff respectively for hot and cold particles are computed by measuring long-time diffusivity from mean-squared displacement.
MSD_type(t) = 4 D_type^eff t,
where D_type^eff=μ k_B T_type^eff is the effective diffusivity for either hot or cold particles. The effective temperatures obtained for three packing fractions for D_hot=5 and D=0.001 are shown in Fig. <ref>.
|
http://arxiv.org/abs/2307.01384v1
|
20230703222948
|
Systematic Bias in Sample Inference and its Effect on Machine Learning
|
[
"Owen O'Neill",
"Fintan Costello"
] |
cs.LG
|
[
"cs.LG",
"stat.ME"
] |
Systematic Bias in Sample Inference and its Effect on Machine Learning
Owen O'Neill and Fintan Costello
=======================================================================
A commonly observed pattern in machine learning models is an underprediction of the target feature, with the model’s predicted target rate for members of a given category typically being lower than the actual target rate for members of that category in the training set. This underprediction is usually larger for members of minority groups; while income level is underpredicted for both men and women in the ‘adult’ dataset, for example, the degree of underprediction is significantly higher for women (a minority in that dataset). We propose that this pattern of underprediction for minorities arises as a predictable consequence of statistical inference on small samples. When presented with a new individual for classification, an ML model performs inference not on the entire training set, but on a subset that is in some way similar to the new individual, with sizes of these subsets typically following a power law distribution so that most are small (and with these subsets being necessarily smaller for the minority group). We show that such inference on small samples is subject to systematic and directional statistical bias, and that this bias produces the observed patterns of underprediction seen in ML models. Analysing a standard sklearn decision tree model's predictions on a set of over 70 subsets of the `adult' and COMPAS datasets, we found that a bias prediction measure based on small-sample inference had significant positive correlations (0.56 and 0.85) with the observed underprediction rate for these subsets.
§ INTRODUCTION
Over the past decade, ML has been increasingly applied to several sensitive areas. Criminal justice, healthcare and banking all apply ML to inform their decisions, which can have significant impact on people's lives. Given the sensitive nature of these areas, and historical discrimination, it is vital to understand the sources of any biases exhibited by the model.
In the literature, examples abound of standard ML approaches showing significant bias towards certain demographics <cit.>. Two primary sources of bias have been identified: data bias and algorithmic bias. Data bias may be due to errors in data collection, non-representative or skewed samples or active prejudice within the problem area (e.g. societal gender/racial bias). It is assumed that biased data will result in biased predictions, assuming no correction methods are applied to the model.
Algorithmic bias is bias introduced by the model and will result in biased predictions even when using `unbiased' data. The causes of algorithmic bias are more nebulous, most often attributed to some flaw in the algorithm's design or the inference process of ML itself.
While attempts have been made to quantify bias in data <cit.>, algorithmic bias is primarily seen as a problem to be corrected rather than as a phenomenon to be measured. However, even ignoring the challenges in defining fairness, modelling this bias would be a useful addition to real world ML application. Therefore, our research aims to examine ML inference, and the statistical processes underpinning it, in order to understand the patterns of bias seen in the literature.
We focus on a particular objective and quantifiable measure of bias proposed by <cit.>, based on the difference between the rate at which members of a given group have the target variable in the dataset (the `observed target rate') and the rate at which members of that group are predicted to have the target by an ML algorithm trained on the same dataset (the `predicted target rate'). This measure reveals a common pattern of `underprediction', where the predicted target rate for a given group is reliably lower than the observed target rate for that group in the dataset. This measure also reveals a related `underprediction bias' against minority groups (where the degree of underprediction is higher for the minority than the majority). An example of these patterns in the `adult' dataset can be seen in Tables 5-7, where the target rate is underpredicted for both men and women, but where underprediction for the minority (female) group is larger than for the majority (male) group (25% versus 17%).
§ BACKGROUND
§.§ Bias Metric
Before we begin our analysis, we must elaborate on our bias metric. We refer to this metric as `underprediction bias' as we have frequently observed ML models underestimating the target rate in their predictions, relative to the observed sample rate. We are interested in the rate at which failures of prediction occur for the minority and majority groups (G=0,G=1). We measure failures of prediction relative to the dataset, by comparing the number of minority/majority group members in the dataset who have the target feature (T=1) against the predictions of a given machine learning algorithm (which we write as P(T=1 | X=1, G), where X is the feature used to inform the prediction). For each group we normalise by dividing by the actual number with the target variable, so the degree of bias for a given group i is given by
U(X,i) = (P(T=1|G=i) - P(T=1 | X=1, G=i))/P(T=1|G=i)
The larger this number, the higher the probability that a given member of group i who should be predicted to be T=1 is actually predicted to be T=0, and the more members of that group are disadvantaged by the algorithm's predictions. A value of U(X,i)=0.1, for example, would indicate that the machine learning algorithm predicts the target variable for group i at a rate 10% lower than the rate of that variable in the dataset.
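In practice this measure can be computed directly from a trained classifier's predictions. The sketch below takes a group's predicted target rate to be the mean of the model's predictions over that group, one natural operational reading of the expression above.

```python
import numpy as np

def underprediction(model, X, y, group):
    """Underprediction bias U for each group; larger U = more disadvantaged."""
    y_hat = model.predict(X)
    out = {}
    for g in np.unique(group):
        mask = group == g
        observed = y[mask].mean()        # target rate in the data, P(T=1|G=g)
        predicted = y_hat[mask].mean()   # model's predicted target rate for the group
        out[g] = (observed - predicted)/observed
    return out
```

For instance, passing a fitted sklearn DecisionTreeClassifier together with the `adult' features, labels and sex column reproduces the kind of per-group comparison reported in the abstract.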
§.§ Distributional Inference from Samples
Our investigation begins with inference at its most fundamental: given a sample, what can we infer about the population from which it was drawn? Consider a situation where we are given a sample of N items drawn from some population, K of which have a particular feature (which we'll call A). We want to make predictions about the probability of A in the population, using only the given sample (and no information beyond that sample).
A typical assumption is that the correct probability estimate for A in the population, given the observed sample, is equal to the sample proportion:
Pr(A) = K/N
However, this assumption is fundamentally incorrect. To see why, consider an extreme case, where you are shown a sample of 2 items (neither of which are instances of A) that come from one population, and a sample of 20 items (none of which are instances of A) that come from another population. The sample proportions in both cases are Pr(A) = 0. Concluding that A has a probability of 0 in both populations is incorrect, the sample of size 2 is far too small to justify such a statement. Proposing that A has the same probability in both populations is also inaccurate. P(A) = 0.25 could reasonably hold in the first population (the probability of drawing a sample of 2 items neither of which are A, from a population where P(A) = 0.25, is (1 - 0.25)^2 = 0.56; a more than 50% chance), but P(A) = 0.25 is extremely unlikely to hold in the second population (the probability of drawing a sample of 20 items, none of which are A, from a population where P(A) = 0.25, is (1 - 0.25)^20 = 0.003; a less than 1% chance).
Therefore, an alternative approach is necessary. In order to determine the most likely population to have generated our sample, we base our inference on the distribution of all possible populations (`distributional inference' (DI)). Beginning with simulation, for a given sample size N we run the function PROBABILITIES(N) (see Algorithm 1). This function loops 10,000 times, on each cycle randomly picking a value for the population probability p = P(A) of some event A (p is drawn uniformly from the range 0 … 1 inclusive). On each cycle the function SAMPLE(p, N) then draws a sample of N items from the population with p = P(A), by randomly picking N values q, drawn uniformly from the range 0 … 1 inclusive: Cases where q < p are counted as an instance of event A in our sample. SAMPLE(p, N) then returns the number of cases which were counted as an instance of event A in the drawn sample. For each K from 0 to N the function PROBABILITIES(N) has an associated storage list P_K: On each cycle of our simulation where the drawn sample contains K instances for event A, we add the probability p = P(A), which generated that sample to the associated storage list P_K. Each list P_K thus holds the set of population probabilities p = P(A) that generated samples of N events containing K instances of A. After running this simulation for 10,000 cycles, we then display the average generating probability that produced samples with K = 0, K = 1,..., K = N. This average generating probability represents the statistically optimal estimate for the underlying population probability P(A), given an observed sample of size N that contains K instances of A.
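Since Algorithm 1 is not reproduced here, the sketch below implements the procedure just described, replacing the N uniform draws in SAMPLE(p, N) with an equivalent binomial draw, and compares the simulated averages with the sample proportion and with the Rule of Succession discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def probabilities(N, cycles=10_000):
    """Monte Carlo version of PROBABILITIES(N) as described in the text."""
    stores = [[] for _ in range(N + 1)]   # one storage list P_K per K = 0..N
    for _ in range(cycles):
        p = rng.uniform()                 # generating probability p = P(A)
        K = rng.binomial(N, p)            # SAMPLE(p, N): instances of A in N draws
        stores[K].append(p)
    # average generating probability for each observed K
    return [np.mean(s) if s else np.nan for s in stores]

for N in (4, 16):
    for K, est in enumerate(probabilities(N)):
        print(f"N={N} K={K}: simulated={est:.3f}  "
              f"K/N={K/N:.3f}  (K+1)/(N+2)={(K+1)/(N+2):.3f}")
```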
Table <ref> shows the output from this simulation for values N = 16, 4 compared with the sample proportions. It is clear from this table that, for a given sample of N items containing K instances of event A, the average probability P that generated that sample differs from the sample proportion Pr = K/N. Specifically, the average generating probability (and so the normatively correct, optimal estimate for the population probability, given the sample in question) is regressive toward 0.5 with the degree of regression increasing as the sample size N falls. Therefore, for a sample containing a majority and minority group, different target rates will be predicted for each group, even if sample target rates are equal.
These predictions actually follow a well-known result in epistemic probability theory, the Rule of Succession (RoS), given by
P(A) = (K + 1)/(N + 2)
This expression has been proved elsewhere in various ways, with the strongest and most general proof being given by <cit.>. As <cit.>, in a very interesting presentation of the history and various proofs of the RoS, notes, “[I]n order to attack [de Finetti's proof] one must attack the formidable edifice of epistemic probability itself.”
§.§ The Beta Prior
While the RoS was stated before the advent of Bayesian statistics, the result is equivalent to assuming a uniform prior, Beta(1,1), and updating this using the sample.
A central property of the Beta distribution is that, given that we have the prior distribution p_A ∼Beta(a,b) and have also observed K occurrences of A in a sample of N events, then the updated or posterior distribution for p_A will be the Beta distribution
p_A ∼Beta(a+K,b+N-K)
(a Beta prior necessarily gives a Beta posterior; the prior and the posterior are `conjugate', to use Bayesian terminology). Equation <ref> gives a probability distribution for the unknown generating probability p_A, given the observed sample K/N (and the prior parameters a and b). The expected value or mean of this Beta distribution is
⟨Beta(a+K,b+N-K)⟩ = (K+a)/(N+a+b)
This expression gives the expected value for our unknown generating probability given the observed sample, and so is the theoretically optimal estimate for that probability (given the sample, and given our priors a,b).
In our `inference from samples' task, we assume no information about the generating probability p_A apart from that given by our sample: prior to seeing the sample, we would consider every possible value of p_A as equally likely. For a Beta distribution with a = b = 1 we have
P( p ≤ p_A ≤ p + Δp ) = (p^0(1-p)^0/B(1,1)) Δp = Δp
and every possible value of p_A is equally likely (the chance of p_A falling in a given p … p + Δp range is simply equal to the size of that range). The distribution Beta(1,1) thus represents the uniform distribution or the `uninformative prior' for probability inference from samples, and so the expected value of our unknown generating probability p_A, given a sample of N items of which K are instances of A (and no other information beyond that sample) is the RoS. Note that this effect is unavoidable: any attempt to reduce these regressive effects will deviate from DI. Other choices of prior will be explored in the discussion.
§ MACHINE LEARNING INFERENCE
§.§ The Effect of Predictor Variables
Our investigation thus far has only considered the bias arising from inference on data containing target and group membership. It is important to note, however, that it would be more accurate to describe these effects as occurring between target and predictor variable. Group membership is mentioned above to highlight the observed regression in the context of discrimination against minority groups, but the effects can be observed using any predictor variable.
It is useful now to consider ML inference. ML is fundamentally a process of achieving the optimal inference from a real world sample. As we saw in the previous section, the theoretically optimal inference (DI) necessarily produces systematic bias. Therefore, we would expect ML algorithms to be subject to similar biases to those described above. Due to the highly sensitive scenarios where ML algorithms are being applied this is a cause for significant concern. That the DI approach systematically effects minority groups more than majority groups exacerbates the situation further.
In most ML data sets there are thousands of data points with many predictor variables under consideration, each with differential association with the target variable. The scale of this data appears to cast doubt over the impact of the bias we've described above as the `plus one over plus two' becomes negligible as K and N increase past 30. However, a more realistic way to look at predictor variables in an ML context would be to consider predictor variable combinations (PVCs). Put simply, when presented with a new individual for classification, the ML system references a subset of similar individuals (i.e. individuals with a similar PVC). The inference is performed on this similar subset, not the dataset as a whole. Therefore, even with a large dataset, if there are few examples of the PVC to reference, the inference takes place on a small sample. To model ML we simply take one of these inference scenarios.
Consider a sample of size 100, with three variables: target, group (majority or minority) and PVC (which we'll call X) membership. The total sample is described in Table <ref>, with a breakdown for majority and minority in Table <ref>.
As before, the target is rare. Therefore, an informative PVC would co-occur with the target variable at a high rate. For simplicity, we assume a `perfect predictor': X=0 when T=0 and X=1 when T=1.
We assume inference takes place with no information beyond that given in the sample, and so apply the RoS to find the most likely generating probabilities for the target variable. The predicted conditional probabilities are:
P(Target=1 | X=1, Maj) = (16+1)/((0+16)+2) = 0.94
P(Target=1 | X=1, Min) = (4+1)/((0+4)+2) = 0.83
(since in the majority group we have N=16 occurrences of the PVC, and K=16 of these occur with the target, while in the minority group we have N=4 occurrences of the PVC, and K=4 of these occur with the target).
Based on this sample, therefore, a normatively correct reasoner will infer that the probability of the target for a member of the majority with PVC X is 0.94, while the probability of the target for a member of the minority with PVC X is 0.83. Both of these probabilities are less than the sample proportion (Pr(Target|X)=1), and so are underpredictions. Further, the degree of underprediction is greater for members of the minority than for the majority. This differential association is a `rational' (in a statistical sense) consequence of the fact that we are making inference from samples, and that the sample sizes for our two groups are different. Returning to our example, this implies that for an individual with characteristics indicating a high salary, being male makes them more likely to be predicted to have a high salary.
Assuming that individuals are classified as having the target variable based only on the presence of PVC X, and recalling that in our example the PVC occurred with 20% of both groups (because we assume the PVC co-occurs perfectly with the target variable in the sample), we see that the proportion of individuals in each group identified as having the target variable will be
Prop. of majority predicted to have T=1 = 0.94*0.2 = 0.188
Prop. of minority predicted to have T=1 = 0.83*0.2 = 0.166
Finally, multiplying these by the proportion of each group in the sample overall, we get estimated probabilities of association between group membership and target variable (Table <ref>).
Table <ref> shows the resulting probabilities of target prediction for each category. These results show an underprediction for both groups (relative to the actual 20% rate in each class in the sample), but a greater underprediction for the minority. Here we have demonstrated that even with unbiased data (majority and minority have equal target rates in the sample) we still see the same patterns of underprediction. This sample size bias is unavoidable, even in these idealised theoretical examples, without departing from DI.
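The numbers in this example follow from a few lines of arithmetic. The sketch below reproduces them (up to rounding), using the group sizes of 80 majority and 20 minority members implied by the sample above.

```python
def ros(K, N):
    """Rule of Succession estimate."""
    return (K + 1)/(N + 2)

groups = {"majority": {"n_pvc": 16, "share": 0.80},
          "minority": {"n_pvc": 4,  "share": 0.20}}

for name, g in groups.items():
    p_target = ros(g["n_pvc"], g["n_pvc"])   # PVC co-occurs perfectly with target
    within = p_target*0.2                    # PVC occurs in 20% of each group
    overall = within*g["share"]              # share of the whole sample
    print(f"{name}: P(T|X)={p_target:.2f}, group rate={within:.3f}, "
          f"overall={overall:.4f}")
```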
§.§ Predictions of the Overall Sample
To expand the above analysis to measure overall model bias on a dataset, we must consider the distribution of PVC sizes. The PVCs in many datasets roughly follow a power law distribution (demonstrations of this in `adult' and COMPAS can be seen in the supplementary material). A subset of these will be considered relevant by the model and used for inference. This subset (e.g. the PVCs in the leaves of a decision tree) also roughly follows a power law, implying that the vast majority of PVCs occur infrequently (<100 times).
Therefore, for an example of ML inference, we simply apply the single PVC case described in section 3.1, with PVC sizes distributed according to a power law. This highlights the exact source of this type of bias in ML: the distribution of PVC sizes, which we refer to as `exponential spread' (ES):
ES = ∑_i p_i / i
where i ranges across all observed PVC sizes in our dataset and p_i is the proportion of the dataset contained in a PVC of size i. More infrequent PVCs in a group will result in more predictions with high levels of bias, increasing the overall bias for that group. The minority group, being smaller, generally has more infrequent PVCs which leads to it experiencing more bias.
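As an illustration, the sketch below (ours; the size tally is hypothetical) computes ES from a table of PVC sizes:

    def exponential_spread(size_counts, n_rows):
        # size_counts maps each observed PVC size i to the number of rows
        # falling in PVCs of that size; p_i = count / n_rows.
        return sum((count / n_rows) / i for i, count in size_counts.items())

    # e.g. 60 rows in singleton PVCs and 40 rows in PVCs of size 10:
    print(exponential_spread({1: 60, 10: 40}, 100))  # 0.6/1 + 0.4/10 = 0.64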
The subset of PVCs selected by the model tend to co-occur with the target at a high rate (assuming the target variable occurs at a low rate in the sample). To simplify, in our example we let S>0.5 be the average co-occurrence with the target for the PVCs selected by the model.
Therefore, for a given leaf containing F instances, the inferred probability will be
(S·F + a)/(F + 2a)
Suppose we have two groups (majority and minority) with the same combinations of features. Let N be the size of the overall sample, R the proportion of the sample in the minority group, S_1 and S_2 be the average co-occurrence with the target for the majority and minority groups respectively and a=b=1 for the Beta prior. The difference in predicted probability compared to the sample will be:
b(F) = [ (S_2·F·R + 1)/(F·R + 2) − S_2 ] / [ (S_1·F·(1−R) + 1)/(F·(1−R) + 2) − S_1 ]
The overall expression for the model's prediction bias will be:
B = ∫_1^N b(F) * P(F) dF
where N is the size of the dataset (and so the largest possible leaf size) and P(F) is the probability of a leaf of size F occurring (since we assume a power law distribution, this will be 1/F^X, where X is defined in the power law equation).
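A numerical sketch of this integral, assuming a = b = 1 and using purely illustrative parameter values (none of them taken from the paper), could read:

    from scipy.integrate import quad

    def b(F, S1, S2, R):
        # Ratio of minority to majority leaf-level bias for a leaf of size F.
        minority = (S2 * F * R + 1) / (F * R + 2) - S2
        majority = (S1 * F * (1 - R) + 1) / (F * (1 - R) + 2) - S1
        return minority / majority

    def overall_bias(N, X, S1=0.8, S2=0.8, R=0.2):
        # Weight b(F) by the power-law leaf-size probability P(F) = 1/F^X.
        return quad(lambda F: b(F, S1, S2, R) / F**X, 1, N)[0]

    print(overall_bias(N=10_000, X=2.0))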
The underprediction seen for these power law distributions for each group can be seen in Fig. <ref>.
These graphs demonstrate the concepts we have seen in the simpler examples. Infrequent combinations will be more affected by the regressive effects of DI and so will be subject to more bias. This effect will be more pronounced in the minority group as it necessarily contains more infrequent combinations. Therefore, underprediction increases with the number of infrequent combinations.
Fig. <ref> reveals a somewhat paradoxical result. The greater the co-occurrence of the PVCs with the target, i.e. the better their quality as predictors, the higher the underprediction. However, this is consistent with our previous examples. The effect of DI is regressive towards 0.5, with the highest regression at 1. Therefore, we see this pattern in our numerical example. This result has troubling implications for our inference: choosing better variables results in more bias.
§.§ Decision Tree Bias
The above analysis gives an expression for bias B assuming that leaf size F for leaves in a decision tree with S>0.5 (that is, where the target variable occurs at a rate above 50%) is distributed following a power law. Here we consider the number of leaves of size F with S > 0.5 that we would expect to see in a particular group G of size N, assuming that the target rate for members of that group has some value p. In this situation we can assume that target occurrences are distributed randomly across the set of leaves of size F, so that to a first approximation the probability of a leaf of size F having S>0.5 is given by the complementary cumulative binomial
1-Bin(F/2,F;p)
(the probability of getting a sample of size F containing more than 50% successes, when the probability of a single success is p) and, since the target variable is predicted only for leaves with S>0.5, this expression gives the expected predicted target rate for individuals in leaves of size F. Note that when p<0.5 the value of this expression necessarily falls with increasing F (the binomial distribution becoming more peaked around pF as F increases), and that its value when F=1 is exactly p. This means that when p<0.5, the probability of a leaf of size F having S>0.5 is less than p for all F>1. If leaf size is distributed proportional to some power law with exponent X, then the total predicted target rate for individuals in group G is
∑_{F=1}^N (1 − Bin(F/2, F; p)) / F^X
and, since this is simply an average of predicted target rate across all leaf sizes, this total predicted target rate is necessarily less than the group target rate p when p<0.5 (with the difference falling with increasing exponent X). An analogous argument shows that this total predicted target rate is necessarily greater than the group target rate p when p>0.5 (with the difference similarly falling with increasing exponent X).
In other words, the threshold of 0.5 used in target prediction introduces another regressive effect that moves the predicted target rate for a given group below the observed target rate: if the target rate is low, it is less likely for a leaf to contain enough examples of the target to reach the 0.5 prediction threshold, producing systematic underprediction of the target. If the target rate is high, by contrast, this threshold effect results in overprediction of the target. In the case of decision trees (or any decision-threshold ML algorithm), we expect that both the SSIE and the decision threshold effects will contribute to produce systematic patterns of underprediction.
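The threshold effect is easy to reproduce numerically; in the sketch below (ours, with illustrative parameters), the predicted target rate for p = 0.2 comes out well below 0.2:

    import numpy as np
    from scipy.stats import binom

    def predicted_rate(p, N=1000, X=2.0):
        F = np.arange(1, N + 1)
        # P(strictly more than half of a size-F leaf hits the target).
        over_half = binom.sf(F / 2, F, p)
        weights = 1.0 / F**X  # power-law distribution of leaf sizes
        return np.sum(over_half * weights) / np.sum(weights)

    print(predicted_rate(0.2))  # noticeably below the true rate of 0.2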
§.§ Predicting bias
Given the above results and theory, we can now make a prediction regarding the effect of SSIE and decision threshold bias from the structure of the data.
Both types of bias will be amplified for inferences where the target rate is significantly different from 0.5 (much higher or lower). This is due to the regressive effects of DI and the decision threshold acting away from 0.5 and getting weaker the closer the target rate is to 0.5.
We also expect the ES to increase SSIE bias (i.e. if a large proportion of the relevant PVCs/leaves in a group are small, then it will be subject to greater bias). Generally, the minority group is more likely to have smaller leaf sizes and so will be subject to greater bias, but this is not always the case.
In cases where the above points do not hold (target rate is close to 0.5 or there are few small leaves), then SSIE bias will not have a significant effect and other forms of bias are more likely influencing the predictions. However, when applying algorithms that use a decision threshold, we will always see this underprediction effect as explained in the previous section. Therefore, our theory gives us a method of explaining two significant sources of bias that we see in predictions.
§.§ Example from the Literature
We now examine an example of ML bias from the literature. <cit.> explores ML predictions on the adult dataset which contains majority and minority groups, a rare target trait and other predictor variables. In it, women were underrepresented in the >50k Target category compared to men (see Table <ref>). A random forest model's predictions increased this underrepresentation (actual: 11% of women and 30% of men in the >50k group, predicted: 8% of women and 26% of men in the >50k group) with the effect more pronounced for women than men (a decline of 25% relative to the actual values for women, but 15% for men; see Table <ref>). The results in Table <ref> show the model's predictions.
As before, there is underprediction for both groups, but a greater underprediction for the minority.
We can rerun the example from section 3.1 using the values from the adult data set (while still using the single PVC case) to get the results seen in Table <ref>.
As predicted above, the single PVC case sufficiently describes the process of ML inference. The infrequent feature combinations dominate the inference and their small sample size contributes significant bias to the predictions.
§ RESULTS
In order to provide greater evidence for our theory, we explored the patterns of bias for various subsets of the `adult' and COMPAS datasets <cit.>. Regarding pre-processing, we took generally standard approaches to convert the variables to categorical and then one-hot encoded them to binary (mostly following the approach of <cit.> for `adult' and <cit.> for COMPAS). In addition, the `race' variable was converted to `white=0' or `white=1' for `adult' and `nonwhite=0' or `nonwhite=1' for COMPAS (as the `nonwhite' group was in the majority in COMPAS).
Once pre-processed, we examined each possible subset in the dataset based on the majority and minority of the split (e.g. after one-hot encoding, individuals with `Never-married'=0, those with `Never-married'=1, `Local-gov'=0, etc.). The splits were filtered so that the minority group had at least 100 members and that each group contained occurrences of the target. This was to ensure the inference scenarios being tested were realistic and not influenced by the random variation of small datasets. For each subset, we examined the predictions of an sklearn decision tree model <cit.>. This gave us the difference between the observed target rate and the predicted rate for each subset (the `bias'):
b(D) = (pred − act)/act
where D is the subset being considered, pred is the predicted target rate and act is the actual target rate.
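A sketch of this per-subset measurement is given below; the DataFrame df and the column names are placeholders, not the code actually used for the experiments:

    from sklearn.tree import DecisionTreeClassifier

    def subset_bias(df, feature_cols, target_col="target"):
        # df is a pandas DataFrame holding one pre-processed subset.
        X, y = df[feature_cols], df[target_col]
        model = DecisionTreeClassifier().fit(X, y)
        pred = model.predict(X).mean()  # predicted target rate
        act = y.mean()                  # observed target rate
        return (pred - act) / act

    # Usage: loop over one-hot splits such as df[df["Never-married"] == 1],
    # keeping only splits whose minority side has at least 100 members.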
Given our theory, we have several different approaches for predicting this bias from the data.
The first method would be to examine the target rate of the subset. Given the regressive effects of the SSIE, a target rate of less than 0.5 will result in underprediction of the target rate with the effect increasing as the target rate approaches 0. A target rate of greater than 0.5 will result in overprediction of the target rate with the effect increasing as the target rate approaches 1.
Another approach would be to consider the relationship between ES (Equation <ref>) and the bias. A larger number of infrequent PVCs will contribute more bias to the overall predicted target rates.
Yet another approach would be the association between the bias and some combination of the target rate and ES. Equation <ref> is a modification of the ES expression that adds the target rate to each summand:
ES = ∑_i (p_i + Tr)/i
where Tr is the target rate in the sample.
The resulting correlations for the majority and minority subsets of `adult' and COMPAS are summarised in Tables <ref> and <ref>. `Tr' is the correlation between that group's target rate and the observed underprediction, while `Tr+ES' is the correlation between the sum of the target rate and ES with the observed underprediction. `Diff' considers the difference in underprediction between groups and `full' combines the results of all subsets.
We see that strong correlations exist between the target rate and observed underprediction in all scenarios. Low target rates will regress towards 0 and high target rates towards 1 due to the SSIE.
The addition of ES improves this correlation for the minority groups in `adult' and the `full' `adult' data. This indicates that in these smaller groups, the high levels of ES are contributing to the overall bias. `adult' contains many more variables, and therefore many more PVCs, than COMPAS, so we would expect ES to play a larger role.
In general, the overall target rate is sufficient to estimate the expected bias due to SSIE for a given dataset, with ES being a helpful add-on.
Given the many splits considered, and their diversity in size and target rate, this is significant evidence to support our theory of the SSIE.
Obviously, this bias due to SSIE is one of many sources of bias in an ML application. Extensive analysis of bias due to other sources (societal bias, other statistical bias, methodology bias, etc.) can be found in <cit.>. In particular, when the prerequisites for this SSIE (as described in section 3.3) are not met, and bias is still observed in the model's predictions, we expect these other sources are significantly contributing. Our analysis of the COMPAS data initially showed agreement with our theory: greater underprediction for the minority (`white') group than the majority (`non-white'). However, given the evidence proposed in <cit.> that this bias is due to societal factors embedded in the data, we conclude that the SSIE is not a major contributor.
§ DISCUSSION
An alternative perspective of SSIE bias is that it is a bias of sampling, not inference. Given the demonstrable negative effects on minority groups, the `correct' approach would be to not conduct inference on small samples in the first place. If all samples were large enough, no bias would be observed when applying DI.
One may suggest that increasing data collection for this group may solve this issue. However, simply increasing the sample size in absolute terms will not reduce the SSIE bias; notice that we also see this bias in the larger majority group, albeit to a lesser extent. To truly `eliminate' this bias, the data generation process would have to ensure that every PVC in the data occurs more than a set number of times. Given the thousands of possible PVCs in most ML datasets, this quickly becomes infeasible through extra sampling alone. Instead of increasing the sample size, an easier approach would be to reduce the number of possible PVCs by removing predictor variables from the data. This introduces an `information/bias tradeoff', impacting model performance. Further investigation of this solution to SSIE bias is required to determine the effectiveness of this approach.
§ CONCLUSION
ML bias is a pervasive issue with serious social impacts. We have shown that theoretical models of DI and decision thresholds accurately depict results from the ML literature. Our theory describes the SSIE as a baseline of unavoidable bias, inherent to rational decision making, due solely to group size within the sample. In addition, we have demonstrated how decision thresholds introduce a similar bias. These biases affect members of minority groups more severely and are present even in large datasets. Further research is required to explore this effect for other ML model types, such as more advanced tree-based models and neural networks.
|
http://arxiv.org/abs/2307.01084v1
|
20230703150621
|
Wasserstein-$1$ distance and nonuniform Berry-Esseen bound for a supercritical branching process in a random environment
|
[
"Hao Wu",
"Xiequan Fan",
"Zhiqiang Gao",
"Yinna Ye"
] |
math.PR
|
[
"math.PR",
"math.ST",
"stat.TH",
"60J80, 60K37, 60F05, 62E20"
] |
|
http://arxiv.org/abs/2307.01245v1
|
20230703174424
|
Characterisation of three-body loss in ${}^{166}$Er and optimised production of large Bose-Einstein condensates
|
[
"Milan Krstajić",
"Péter Juhász",
"Jiří Kučera",
"Lucas R. Hofer",
"Gavin Lamb",
"Anna L. Marchant",
"Robert P. Smith"
] |
cond-mat.quant-gas
|
[
"cond-mat.quant-gas"
] |
M. K., P. J. and J. K. contributed equally to this work.
Present address: STFC Rutherford Appleton Laboratory, Didcot, OX11 0QX, United Kingdom
robert.smith@physics.ox.ac.uk
Clarendon Laboratory, University of Oxford, Parks Road, Oxford, OX1 3PU, United Kingdom
Ultracold gases of highly magnetic lanthanide atoms have enabled the realisation of dipolar quantum droplets and supersolids. However, future studies could be limited by the achievable atom numbers and hindered by high three-body loss rates. Here we study density-dependent atom loss in an ultracold gas of ^166Er for magnetic fields below 4 G, identifying six previously unknown features which display both a strong temperature dependence and also sensitivity to the polarisation and intensity of the light used to optically trap the atoms. This detailed knowledge of the loss landscape allows us to optimise the production of dipolar BECs with more than 2 × 10^5 atoms and points towards optimal strategies for the study of large-atom-number dipolar gases in the droplet and supersolid regimes.
Characterisation of three-body loss in 166Er and optimised production of large Bose–Einstein condensates
Robert P. Smith
August 1, 2023
========================================================================================================
§ INTRODUCTION
Precise knowledge and control of the nature and strength of interparticle interactions have been a key factor in the success of using degenerate ultracold-atom samples for studying many-body quantum phenomena. The application of a magnetic field close to a Feshbach resonance is a highly versatile and convenient tool for tuning the sign and strength of s-wave contact interactions that typically dominate in ultracold gases <cit.>. However, approaching a Feshbach resonance also leads to the enhancement of (detrimental) three-body processes, which result in atom loss and heating <cit.>. Knowing the location of Feshbach resonances and quantifying the associated loss features is thus essential for designing and optimising ultracold-atom experiments.
The realisation of ultracold samples of highly magnetic erbium <cit.> and dysprosium atoms <cit.>, which interact via both long-range, anisotropic dipole–dipole interactions and tuneable contact interactions, has led to the discovery of dipolar quantum droplets <cit.> and a supersolid phase <cit.>, which simultaneously exhibits a global phase order and a spontaneous spatial density modulation. While these first experiments were carried out in cigar-shaped traps leading to (relatively simple) one-dimensional (1D) spatial ordering, more recently droplet arrays and supersolids with two-dimensional (2D) ordering have also been observed <cit.>. Theoretical works predict a plethora of novel patterns in 2D systems, including so-called honeycomb, labyrinthine and pumpkin phases <cit.>. However, reaching these exotic states requires degenerate samples with higher atom numbers than have been used in these experiments so far (1.4 × 10^5 <cit.>).
The maximal achievable atom number in an experiment is often restricted by three-body loss processes which limit the efficiency of evaporative cooling close to degeneracy and can greatly reduce the gas lifetime at (or while approaching) the desired s-wave scattering length. Moreover, in order to map out the parameter space of exotic dipolar phases one needs to tune the relative strength of the contact and dipole–dipole interactions by controlling the strength of the magnetic field. The precise knowledge of the loss landscape as a function of the field strength is therefore paramount. Here we carefully characterise three-body loss in ^166Er for magnetic fields below 4, revealing the presence of six previously unreported resonant loss features which display a strong temperature dependence. In light of this, we describe our optimised procedure for the production of ^166Er Bose–Einstein condensates (BECs), containing more than 2e5 atoms.
§ THREE-BODY LOSS MEASUREMENTS
In alkali atoms the (number) density of Feshbach resonances is typically between 0.01 and 0.1 per gauss <cit.>. However, in magnetic lanthanides, including erbium and dysprosium, the anisotropy of the van der Waals and the dipole–dipole interaction potentials leads to coupling between many scattering channels and consequently to an abundance of Feshbach resonances <cit.>, some of which show a strong temperature dependence <cit.>. Here we focus on ^166Er for magnetic fields below 4 G, where Feshbach resonances and associated loss features have been reported at 0.02(5) G, 3.04(5) G and 4.028 G <cit.>.
For our measurements we prepare an ultracold, spin-polarised sample of ^166Er in an (approximately harmonic) optical dipole trap (ODT) formed from 1030 nm laser light. The full experimental sequence is described in <ref>; here we only note that the final stage of cooling is achieved by evaporation in the ODT, with the temperature of the atom cloud controlled by the ODT depth. To produce clouds at different temperatures, we interrupt the normal evaporation sequence at different times and ramp up the depth of the ODT over 100 ms to prevent any further evaporative cooling (and associated atom loss) during our measurements. We initiate the loss measurements by quenching the magnetic field B [The magnetic field is calibrated using radio frequency spectroscopy within the ground state Zeeman manifold.] to the desired value in <10 ms. To avoid ramping through wide resonances, for measurements above 3 G we evaporatively cool at 3.8 G, whereas for measurements below 3 G we cool at 1.4 G. We use absorption imaging after a time-of-flight to measure the atom number N and temperature T as a function of the time t the atoms are held in the trap (at a given B). Examples of these N(t) and T(t) curves are shown in <ref>; here the initial temperature T_i = 0.5 μK and B = 2.7 G.
We first consider atom loss. As the atoms are prepared in the lowest Zeeman state at temperatures much lower than the sub-level splitting (≈78), two-body (spin relaxation) loss processes are energetically suppressed. The evolution of the atom number density in thermal samples can therefore be described by a combination of one- and three-body loss terms <cit.>,
ṅ(r) = -n(r)/τ_1 - L_3 n^3(r) ,
where τ_1 is the one-body lifetime (set by e.g. collisions with background gas atoms in an imperfect vacuum), L_3 is the three-body loss coefficient and n(r) is the atom number density. For a thermal cloud (well above the BEC transition temperature), the atomic density distribution in a harmonic trap is Gaussian and <ref> can be written as <cit.>
Ṅ/N = -1/τ_1 - L_3 ( m ω̅^2 / (2√3 π k_B T) )^3 N^2 ,
where m is the atomic mass, ω̅ is the geometric mean of the trapping frequencies and k_B is the Boltzmann constant. The trapping frequencies were measured separately by exciting the cloud's center-of-mass oscillations in the three perpendicular directions, and τ_1 was independently determined to be τ_1 = 33(1) s from measurements of low-density clouds for which three-body loss is negligible.
To determine L_3(B) from our N(t) measurements, we fit the numerical solution of <ref> to our data [see solid line in <ref>] using the corresponding measured T(t) as an input. We only fit our data within the time interval over which the temperature stays within 40% of its initial value [gray shaded region in <ref>] to limit any systematic errors arising from either (i) evaporative atom loss due to the finite trap depth or (ii) the fact that L_3 may depend on T [The 40% cutoff is chosen as a tradeoff between minimising systematic errors (with a lower cutoff) and random errors (by choosing a higher cutoff to include more data).].
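In outline, this extraction can be sketched as follows (our reconstruction, not the analysis code of the paper; τ_1, m, ω̅ and the data arrays are assumed inputs, and in practice one may prefer to fit log10(L_3) for numerical stability):

    import numpy as np
    from scipy.constants import k as kB
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    def fit_L3(t_data, N_data, T_interp, tau1, m, wbar):
        # Integrate dN/dt = -N/tau1 - L3 * c(T) * N^3, with the measured
        # temperature curve T_interp(t) as input, and fit N(0) and L3.
        def model(t, N0, L3):
            def rhs(t, N):
                c = (m * wbar**2 / (2 * np.sqrt(3) * np.pi * kB * T_interp(t)))**3
                return -N / tau1 - L3 * c * N**3
            sol = solve_ivp(rhs, (t_data[0], t_data[-1]), [N0], t_eval=t)
            return sol.y[0]
        popt, _ = curve_fit(model, t_data, N_data, p0=[N_data[0], 1e-40])
        return popt[1]  # best-fit three-body loss coefficient L3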
Regarding the heating of the atom cloud [<ref>], this can be understood to be due to two main processes. First, the loss rate is higher in the central (higher density) part of the trap, preferentially removing atoms with energy lower than the average energy in the cloud, leading to `anti-evaporation' <cit.>. Second, the products of the three-body collision can have significant kinetic energy (acquired due to the released binding energy when two atoms form a molecule), which may be partially deposited in the cloud via secondary collisions.
<Ref> shows the measured three-body coefficient as a function of the magnetic field for initial temperatures of 0.5, 1.5 and 4 μK. In addition to the Feshbach resonances already reported [solid vertical lines in <ref>], we observed six additional loss features (dotted vertical lines). These loss features both broaden and shift to higher B with increasing temperature.
To explore the temperature dependence further, we measured L_3 as a function of B around the newly discovered resonance at ≈0.86 G for several additional T_i values [see <ref>]. Given the asymmetric shape of the loss features, for each T_i data series, L_3(B) was fitted with a heuristic skewed Gaussian curve of the form
L_3(B) = A e^{-(B-B_c)^2/(2σ^2)} ( 1 + erf( α(B - B_c)/(√2 σ) ) ) + C ,
where B_c, σ, α, A and C are fitting parameters.
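For concreteness, a sketch of this fit (ours; the initial guesses are illustrative):

    import numpy as np
    from scipy.special import erf
    from scipy.optimize import curve_fit

    def skewed_gaussian(B, A, Bc, sigma, alpha, C):
        gauss = np.exp(-(B - Bc)**2 / (2 * sigma**2))
        skew = 1 + erf(alpha * (B - Bc) / (np.sqrt(2) * sigma))
        return A * gauss * skew + C

    # popt, _ = curve_fit(skewed_gaussian, B_data, L3_data,
    #                     p0=[L3_data.max(), 0.86, 0.02, 1.0, L3_data.min()])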
<Ref> show, respectively, the peak width Δ (taken as twice the variance of the skewed Gaussian) and B_max [the location of the maximum of L_3(B)] as a function of the average temperature T̅ of the decay series [Note that the average temperature of a decay series T̅ is up to 20% higher than the T_i for the same series due to the heating associated with the three-body loss.]. We observe that Δ grows linearly with temperature and so we parameterise the width of the resonance via a linear function, Δ = Δ_0 + (∂Δ/∂T) T̅, fitted to the data [solid line, <ref>]. Note that all our extracted Δ_0 values are consistent with zero within our ±3 mG error bounds. Similarly, B_max also grows (approximately) linearly with temperature and so we fit the data using B_max = B_0 + (∂B_max/∂T) T̅. The parameters of both these fits are tabulated in <ref> for all the newly detected loss features. We also note that for the 0.86 G feature the maximum L_3 decreases with increasing temperature within our measured range [see <ref>], however, for other peaks this trend is inconclusive.
The magnetic field dependence of the loss properties arises due to the differential Zeeman shift between the different scattering channels. However, it is also possible for light fields to exert similar differential shifts <cit.>, which can, in some cases, also have vectorial and tensorial parts <cit.>. To check if the optical field from our ODT causes such an effect, we measured the loss features for thermal clouds at the same temperature (2 μK) but for traps with two different light intensities (powers) and for the polarisation of the ODT light ℰ either parallel (ℰ∥B) or perpendicular (ℰ⊥B) to the external magnetic field (and hence the spin-polarisation of the atoms) [Note that for linearly polarised light, there is no vector component of the polarisability <cit.> and so the angle between 𝐁 and the direction of propagation of the light does not matter.]. Data for the 0.861 G resonance is shown in <ref>; here, to identify the peak position we simply performed a two-point loss measurement [In our two-point measurements the loss is N(0)-N(t_hold), where t_hold is a fixed hold time; the peak loss is then normalised to 1.]. For ℰ∥B we observe a significant (positive) shift of the loss feature with light intensity, whereas for ℰ⊥B the effect is much less noticeable and (if anything) has the opposite sign. Assuming that the resonance position shifts linearly with light intensity, one can extract a constant of proportionality, ∂B_0/∂I, between the light intensity I and the resonance peak shift for both orientations. These are tabulated in <ref> for all the newly detected loss features.
Finally, we briefly compare our findings to the qualitative predictions of a `resonant trimer' model previously proposed in the context of temperature-dependent loss features in the lanthanides <cit.>. Within this model, some relatively simple scalings emerge for k_B T ≫Γ_br≫Γ(E), where Γ_br is the trimer decay rate (into an atom and dimer pair) and Γ(E) is the collision energy-dependent width of the trimer resonance. In this regime, one finds that B_max-B_0 ∝ T and Δ∝ T, in qualitative agreement with our observations. However, we note that these two trends may also be consistent with alternative models <cit.>.
§ OPTIMISED BEC PRODUCTION
To produce erbium BECs, we employ standard laser cooling and trapping techniques and then use our knowledge of L_3(B) to optimise the evaporative cooling sequence.
In the initial steps, similarly to Ref. <cit.>, an atomic beam emerging from a high-temperature effusion cell oven is slowed down using a Zeeman slower operating on the broad transition at 401 nm. The slow atoms are then loaded into a narrow-line magneto–optical trap (MOT) operating on the atomic transition at 583 nm. We typically capture 10^8 atoms after loading the MOT for 12 s. Afterwards, we ramp to a compressed MOT (cMOT) configuration in 600 ms, where reducing the light detuning and intensity causes simultaneous compression and cooling, resulting in a spin-polarised atomic sample at a temperature of 10 μK.
To cool the sample further, we transfer the atoms into an optical dipole trap (ODT) in which we perform evaporative cooling. As shown in <ref>, the ODT is implemented using two crossed, far-detuned beams at 1030 nm, which we call ODT1 and ODT2. Initially, the ODT1 beam, with a 21 μm × 24 μm waist, is superimposed onto the cMOT, with a total power of 21 W and with a 50 kHz spatial dithering applied using an acousto-optic modulator (AOM) <cit.>, which broadens the horizontal (21 μm) waist by a factor of two. We transfer 1.8 × 10^7 atoms into the ODT1 beam during the 40 ms overlap period. We then proceed with the evaporation sequence using a Feshbach field of 1.4 G, as this corresponds to the region of the lowest L_3 coefficient available across the temperature range encountered during evaporation [cf. <ref>].
The evaporation sequence can be split into three stages [see <ref>]. In stage I, in which the ODT2 contributes negligibly to the trapping, we simultaneously reduce the ODT1 power and ramp down its dithering. This leads to evaporation and a change in the trap aspect ratio, but avoids too much decompression. At the start of stage II, the ODT2 beam, with a waist of 140 μm × 33 μm and an initial power of 2.4 W, starts to have a noticeable effect and as the cooling continues, the remaining atoms converge into the crossing of the ODT beams. In stage III, we broaden the ODT1 beam again by ramping up the dithering amplitude and also significantly decrease the power of ODT2. These lower the trap depth and all trapping frequencies, but more importantly they reduce the atomic density and hence the role of inelastic three-body collisions relative to the elastic two-body ones that facilitate evaporation.
In <ref> we show how the peak density n_0 and peak phase-space density ρ_0 = n_0 λ_T^3 (where λ_T = √(2πħ^2/(m k_B T)) is the thermal de Broglie wavelength) evolve with the falling N during the evaporation sequence. This highlights the growing density and justifies the need for our stage III decompression: at the end of stage II we reach a density of n_0 = 3 × 10^20 m^-3, which gives a characteristic three-body lifetime at the centre of the cloud of only 1/(L_3 n_0^2) ≈ 1 s. We achieve efficient evaporation throughout the three stages, maintaining a steady increase of ρ_0, with efficiency γ = -d(ln ρ_0)/d(ln N) = 3.1(1), which results in the onset of condensation being reached with N = 8×10^5 atoms at a temperature of 500 nK. Finally, by evaporating further we achieve a nearly pure condensate with 2.2 × 10^5 atoms [see <ref>].
§ CONCLUSION
In conclusion, we have identified six new strongly temperature-dependent three-body loss features in ^166Er below 4 G. Both the position and width of these loss features increase linearly with temperature for 0.5 μK < T < 15 μK; this is broadly consistent with a `resonant trimer' model previously put forward to explain some loss features in lanthanide atoms <cit.>.
Using our knowledge of the loss landscape to optimise the evaporation procedure enabled the production of large BECs of 2.2 × 10^5 atoms, providing a good starting point for the investigation of ultracold dipolar physics. Furthermore, these findings will enable the optimisation of atom numbers in existing and future experiments, and guide the way towards the experimental realisation of more exotic states, including honeycomb, labyrinthine and pumpkin phases. Moreover, precise knowledge of the three-body loss coefficient could enable the measurement of the atom number density, crucial for determining the structure of quantum droplets.
We thank Nathaniel Vilas for contributions to the early stages of the experiment. This work was supported by the UK EPSRC (grants no. EP/P009565/1 and EP/T019913/1). R. P. S. and P. J. acknowledge support from the Royal Society, P. J. acknowledges support from the Hungarian National Young Talents Scholarship, M. K. from Trinity College, Cambridge, J. K. from the Oxford Physics Endowment for Graduates (OXPEG) and G. L. from Wolfson College, Oxford.
|
http://arxiv.org/abs/2307.02652v1
|
20230705205647
|
A palindromic polynomial connecting the earth mover's distance to minuscule lattices of Type A
|
[
"Rebecca Bourn",
"William Q. Erickson"
] |
math.CO
|
[
"math.CO",
"05A15 (Primary) 11B37, 05A17, 05C09 (Secondary)"
] |
Rebecca Bourn
Department of Mathematical Sciences
University of Wisconsin–Milwaukee
3200 N. Cramer St.
Milwaukee, WI 53211
bourn@uwm.edu
William Q. Erickson
Department of Mathematics
Baylor University
One Bear Place #97328
Waco, TX 76798
Will_Erickson@baylor.edu
We prove a conjecture of Bourn and Willenbring (2020) regarding the palindromicity and unimodality of certain polynomials N_n(t).
These polynomials arise as the numerators of generating functions in the context of the one-dimensional earth mover's distance (EMD).
Our proof reveals a connection to recent work by Defant et al. (2023) on the Wiener index of minuscule lattices, which we reinterpret combinatorially to obtain explicit formulas for the coefficients of N_n(t) and for the expected value of the EMD.
[2020]Primary 05A15;
Secondary 11B37,
05A17,
05C09
A palindromic polynomial connecting the
earth mover's distance to minuscule lattices of Type A
William Q. Erickson
August 1, 2023
=================================================================================================
§ INTRODUCTION
§.§ Background
In <cit.>, Bourn and Willenbring derive a recursive formula for the expected value of the one-dimensional earth mover's distance (EMD).
For the purposes of this paper, it is enough to know that the EMD (also called the first Wasserstein distance) measures the distance between two histograms (or more generally, probability distributions), by computing the minimum amount of “work” required to transform one into the other (see <cit.>).
The EMD can also be viewed as the solution to the classical transportation problem (often named after various combinations of Hitchcock, Monge, Kantorovich, and Koopmans), and has an ever-widening range of important applications in mathematics along with the physical and social sciences.
We recommend Villani's monumental reference <cit.> for further reading.
While the main result in <cit.> extends to probability distributions on the set [n] ≔ {1, …, n}, their methodology is decidedly combinatorial.
In particular, rather than probability distributions, the authors consider discrete histograms with n bins and s data points — in other words, the set of (weak) integer compositions of s into n parts, denoted by
(s,n) ≔ { (α_1, …, α_n) ∈ (ℤ_≥ 0)^n | ∑_i α_i = s }.
Then, temporarily allowing ordered pairs of compositions with different numbers of parts (p and q), they define a bivariate generating function
H_pq(z,t) ≔ ∑_{s=0}^∞ ( ∑_{α ∈ (s,p), β ∈ (s,q)} z^{EMD(α,β)} ) t^s
which tracks the EMD values while summing over all possible numbers s of data points.
Upon differentiating with respect to z and evaluating at z=1, they obtain another generating function in t, which is the subject of the present paper:
H'_pq(t) ≔ ∂/∂z H_pq(z,t) |_{z=1}
= ∑_{s=0}^∞ ( ∑_{α ∈ (s,p), β ∈ (s,q)} EMD(α,β) ) t^s.
It is then shown <cit.>*Prop. 6 that this generating function has the rational form
H'_pq(t) = N_pq(t)/(1-t)^{p+q},
where N_pq(t) is a recursively defined polynomial (see Section <ref> below).
§.§ Main result
In practice, of course, one is interested in the special case p=q, in which case we replace both parameters by n, and write H'_n(t) ≔ H'_nn(t) and N_n(t) ≔ N_nn(t).
The main result in this paper is a proof of the conjecture <cit.>*Conj. 1:
For all positive integers n, the polynomial N_n(t) is palindromic and unimodal.
Given a polynomial f(t) = ∑_{k=a}^b f_k t^k with total degree d ≔ a+b (assuming f_a ≠ 0 and f_b ≠ 0), we say that f(t) is palindromic if f_k = f_{d-k} for all a ≤ k ≤ b.
This is equivalent to the condition f(t) = t^d f(1/t).
Moreover, f(t) is said to be unimodal if
f_a ≤ f_{a+1} ≤ ⋯ ≤ f_m ≥ ⋯ ≥ f_{b-1} ≥ f_b
for some index m.
The key to our proof of palindromicity of the polynomials N_n(t) is a combinatorial interpretation of the recursion defining N_pq(t), in terms of the symmetric difference of pairs of Young diagrams bounded by certain rectangles (see Theorem <ref>).
The palindromic property of N_n(t) then becomes clear from the fact that the size of the symmetric difference is unchanged by reflecting both diagrams.
§.§ Connections to other work
In Section <ref> we point out a surprising connection to a recent paper by Defant et al. <cit.>.
In particular, the distance defined on a minuscule lattice of Type A measures the symmetric difference between Young diagrams, and the Wiener index is the sum of all these pairwise distances.
By aligning this fact with our combinatorial view in Theorem <ref>, we realize that the Wiener index formula in <cit.> yields the coefficients of N_n(t), giving an explicit description of these once-mysterious polynomials. (See Theorem <ref>.)
This leads to a straightforward proof (Corollary <ref>) of the unimodality of N_n(t), thereby settling Conjecture <ref>.
As it turns out, the Wiener index in <cit.> gives the explicit coefficient formula not only for the numerator N_n(t), but also for the series expansion H'_n(t).
This is sufficient to close the recursion in <cit.> for the expected value of the one-dimensional EMD on (s,n) ×(s,n), and the closed form is surprisingly simple (see Theorem <ref>):
𝔼[EMD(α,β)] = \frac{s(n-1)}{4s+4n-2} · \binom{2s+2n}{2s+1} / \binom{s+n-1}{s}^2.
We emphasize that this result complements two similar extensions of the problem from the original paper <cit.>.
First, the authors of <cit.> derive a non-recursive formula for the expected value of the EMD, but on pairs of probability distributions rather than discrete histograms. (See our Remark <ref>, where we recover this result by taking a single limit.)
Second, the authors of <cit.> present a non-recursive approach in the discrete setting, which is less direct than (<ref>) but also more flexible.
Specifically, their formula gives the entries of a certain matrix, which (upon taking the trace of its product with an arbitrary Monge cost matrix C) yields the expected value of the modified EMD determined by C.
§.§ Open problems
In Section <ref> we conclude with two conjectures of our own.
First, it seems that the values N_n(1) are sums of symmetric differences in a quite different context, previously studied in <cit.>.
We are especially interested in finding a bijective proof of this conjecture.
Second, we conjecture that N_n(t) has only real roots, for any n.
Real-rooted polynomials are of special interest in combinatorics, and we mention a few examples at the end of the paper.
§ COMBINATORIAL PRELIMINARIES
§.§ Recursive definition of N_pq(t)
Recall from (<ref>) the generating function H_pq(z,t), and consider the specialization evaluated at z=1:
H_pq(1,t) = ∑_{s=0}^∞ |(s,p)| · |(s,q)| · t^s.
In <cit.> this is rewritten in the rational form
H_pq(1,t) = W_pq(t)/(1-t)^{p+q-1},
where the coefficients of the numerator are well known to be given by
W_pq(t) = ∑_{k=0}^{min{p,q}-1} \binom{p-1}{k} \binom{q-1}{k} t^k.
By first proving a recursion for H_pq(t) and then manipulating generating functions, the authors of <cit.> derive the following recursive definition of N_pq ≔ N_pq(t) in their equation (4.3):
N_pq = N_{p-1,q} + N_{p,q-1} - (1-t) N_{p-1,q-1} + |p-q| · t W_pq,
with initial values N_0,q = N_p,0 = N_1,1 = 0.
Below we list the polynomials N_n(t) N_nn(t) for the first few values of n:
N_1(t) = 0,
N_2(t) = 2t,
N_3(t) = 8t + 8t^2,
N_4(t) = 20t + 56t^2 + 20t^3,
N_5(t) = 40t + 216t^2 + 216t^3 + 40t^4,
N_6(t) = 70 t + 616 t^2 + 1188 t^3 + 616 t^4 + 70 t^5,
N_7(t) = 112 t + 1456 t^2 + 4576 t^3 + 4576 t^4 + 1456 t^5 + 112 t^6,
N_8(t) = 168 t + 3024 t^2 + 14040 t^3 + 22880 t^4 + 14040 t^5 + 3024 t^6 +
168 t^7.
In <cit.>*eqn. (1.4.4), the series H_pq(1,t) is shown to be the Hilbert series of the first Wallach representation of the Lie algebra 𝔰𝔲(p,q).
It can also be viewed as the Hilbert series of the determinantal variety consisting of p × q matrices of rank at most 1; or again, as the Hilbert series of the Stanley–Reisner ring of the order complex Δ_pq on the poset [p] × [q].
In this last context, the coefficient of t^k in the numerator W_pq(t) equals the number of facets of Δ_pq whose restrictions have cardinality k; equivalently, this is the number of lattice paths from (1,1) to (p,q) containing exactly k “turns” from the north to the east.
This is a special case of lattice path enumeration as detailed in <cit.>*Prop. 28 and Fig. 8, for example.
See also <cit.>, which introduced these methods in the context of the harmonic oscillator.
§.§ Symmetric difference of partitions
A partition λ = (λ_1, …, λ_r) is a weakly decreasing sequence of positive integers.
We call r the length of λ, and we call λ_1 the width of λ.
These terms are quite natural when we identify a partition λ with its Young diagram: the array of boxes, justified along the left edge, in which the row lengths from top to bottom are λ_1, …, λ_r.
We write λ' to denote the conjugate partition obtained by reflecting the Young diagram λ across the main diagonal.
For example, if λ = (6,5,2,2,1), then we have
λ = (6,5,2,2,1) and λ' = (5,4,2,2,2,1) (Young diagrams omitted),
where λ has length 5 and width 6, while λ' has length 6 and width 5.
For nonnegative integers a,b, let
(a × b) ≔ { partitions λ | λ has length ≤ a and width ≤ b }.
It is convenient to regard (a × b) as the set of Young diagrams fitting inside a rectangle with a rows and b columns.
The number of such partitions is well known <cit.>*Prop. 1.2.1 to be given by the binomial coefficient
|(a × b)| = \binom{a+b}{a}.
Clearly λ↦λ' is a bijection between (a × b) and (b × a).
Note that (0 × b) = (a × 0) = {∅}, where ∅ is the empty Young diagram.
Given two Young diagrams λ, μ, their symmetric difference is
λ △ μ ≔ (λ ∪ μ) ∖ (λ ∩ μ),
where as usual λ ∪ μ denotes the set of boxes occurring in either diagram, and λ ∩ μ denotes the set of boxes occurring in both diagrams.
Hence λ △ μ is the set of boxes which occur in exactly one of the two diagrams.
For example,
if λ = (6,5,2,2,1) and μ = (4,4,4,3), then λ △ μ consists of two boxes at the end of row 1, one at the end of row 2, two at the end of row 3, one at the end of row 4, and the single box of row 5 (diagram omitted),
and so |λ △ μ| = 7.
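Since Young diagrams are left-justified, the size of the symmetric difference is simply the row-wise sum of |λ_i − μ_i| after padding with zeros; a short sketch (ours) for checking such examples:

    def sym_diff_size(lam, mu):
        # Pad the shorter partition with zeros, then sum the row differences.
        r = max(len(lam), len(mu))
        lam = list(lam) + [0] * (r - len(lam))
        mu = list(mu) + [0] * (r - len(mu))
        return sum(abs(a - b) for a, b in zip(lam, mu))

    print(sym_diff_size((6, 5, 2, 2, 1), (4, 4, 4, 3)))  # 7, as above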
Crucial to our result will be the following sum of cardinalities of symmetric differences, which we denote by
(a, b | c, d) ≔ ∑_{λ ∈ (a × b), μ ∈ (c × d)} |λ △ μ|.
We have (a, b | c,d) = (b,a | d,c).
Recalling the bijection λ↦λ', we see that (b,a | d,c) is obtained from (<ref>) by replacing λ with λ' and μ with μ'.
Since λ' △ μ' is just the set of boxes obtained by reflecting the set λ △ μ across the main diagonal, we have |λ △ μ| = |λ' △ μ'|.
For positive integers k, ℓ, m, we have
(k, ℓ| k, m) = (k, ℓ-1 | k, m) + (k, ℓ| k, m-1) - (k, ℓ-1 | k, m-1)
+ (k-1, ℓ| k-1, m)
+ |ℓ - m| · |((k-1) ×ℓ)|· |((k-1) × m)|.
This can be seen by the inclusion–exclusion principle, as follows.
We claim that
∑_{(λ,μ) ∈ (k × ℓ) × (k × m) ∖ Q} |λ △ μ| = first line of the right-hand side of (<ref>),
where Q is the subset containing those pairs (λ,μ) such that λ has ℓ nonempty columns and μ has m nonempty columns.
It is clear that any pair (λ,μ) ∈ Q is not counted in the first line of (<ref>); to prove the claim (<ref>), we observe that each pair (λ,μ) ∈(k × (ℓ-1)) ×(k × (m-1)) is counted in both of the first two terms, which we correct by subtracting the third term.
Now we claim that the last two lines on the right-hand side of (<ref>) equal
∑_{(λ,μ) ∈ Q} |λ △ μ|.
We start with the third line.
Note that (λ,μ) ∈ Q if and only if λ has width ℓ and μ has width m.
Hence each pair (λ, μ) ∈ Q, if we restrict our attention to the top row of each diagram, contributes |ℓ - m| to (<ref>).
Moreover, we have
|Q| = |((k-1) ×ℓ)|· |((k-1) × m)|,
since each (λ, μ) ∈ Q is determined by the unique pair (λ̂, μ̂) where
λ̂ ∈((k-1) ×ℓ) is the complement of the top row in λ,
μ̂ ∈((k-1) × m) is the complement of the top row in μ.
Hence we have
∑_{(λ,μ) ∈ Q} |λ △ μ| = |ℓ − m| · |Q| + ∑_{(λ,μ) ∈ Q} |λ̂ △ μ̂|,
where the first term on the right-hand side is the third line of (<ref>) and the second term is the second line of (<ref>).
The result (<ref>) follows, upon comparing (<ref>) with (<ref>) and (<ref>).
The proof of Lemma <ref> can be conveniently visualized as follows.
Define a symbol consisting of a pair of shaded rectangles, of sizes k × ℓ and k × m, to stand for (k, ℓ | k, m), together with a dashed outline whose extra width |ℓ − m| marks the difference between the two rectangle widths (figure omitted). Then the recursive identity (<ref>) can be expressed in terms of these symbols, where a missing strip denotes exactly one row or column removed from the corresponding rectangle: the pair (k, ℓ | k, m) equals the pair with one column removed from the first rectangle, plus the pair with one column removed from the second, minus the pair with one column removed from each, plus the pair with one row removed from both, plus the correction term |ℓ − m| multiplied by the number of diagrams fitting in each rectangle with one row removed (figure omitted).
§ MAIN RESULT
Let N_pq(t) be the polynomial defined recursively in (<ref>).
Then
N_pq(t) = ∑_k=1^min{p,q}(k, p-k | k, q-k) · t^k,
where the coefficients ( - ) are defined in (<ref>).
Recall from (<ref>) the recursion defining N_pq(t), which we expand below (labeling each term for clarity later in the proof):
N_pq = 𝐀 + 𝐁 − 𝐂 + t·𝐃 + t·𝐄, where 𝐀 ≔ N_{p-1,q}, 𝐁 ≔ N_{p,q-1}, 𝐂 = 𝐃 ≔ N_{p-1,q-1}, and 𝐄 ≔ |p-q| · W_pq,
with initial values N_0,q = N_p,0 = N_1,1 = 0.
We begin by verifying the initial values.
Clearly when p or q is zero, the sum in (<ref>) is empty.
If p=q=1, then there is a single summand where k=1, namely
(1,0 | 1,0) · t.
Since (1 × 0) contains only the empty partition, the expression above is zero, as required.
It remains to verify the recursion itself.
As our induction hypothesis, assume that (<ref>) is valid for all N_p'q' with p' < p or q' < q.
It suffices to show, for all k, that the coefficient of t^k in the right-hand side of (<ref>), namely (k, p-k | k, q-k), equals the coefficient of t^k in the right-hand side of (<ref>).
That is, writing [t^k] f to denote the coefficient of t^k in a polynomial f(t),
we need to show that
(k, p-k | k, q-k) = [t^k](𝐀 + 𝐁 - 𝐂 + t𝐃 + t𝐄)
= [t^k]𝐀 + [t^k]𝐁 - [t^k]𝐂 + [t^k-1]𝐃 + [t^k-1]𝐄.
For ease of notation (and to align with Lemma <ref>), set ℓ p-k and m q-k.
The required coefficients in 𝐀, …, 𝐃 follow directly from (<ref>) via the induction hypothesis:
[t^k]𝐀 = [t^k]N_p-1,q =(k, ℓ-1 |k, m),
[t^k]𝐁 = [t^k]N_p,q-1 = (k, ℓ|k, m-1),
[t^k]𝐂 = [t^k]N_p-1,q-1 = (k, ℓ-1 |k, m-1),
[t^k-1]𝐃 = [t^k-1]N_p-1,q-1 = (k-1, ℓ|k-1, m).
The coefficient [t^k-1]𝐄 can be expressed by means of previous identities:
[t^{k-1}]𝐄 = |p-q| · [t^{k-1}] W_pq
= |p-q| · \binom{p-1}{k-1} \binom{q-1}{k-1} by (<ref>)
= |(ℓ+k) - (m+k)| · \binom{ℓ+k-1}{k-1} \binom{m+k-1}{k-1}
= |ℓ - m| · |((k-1) × ℓ)| · |((k-1) × m)| by (<ref>).
Upon substituting these five coefficients back into (<ref>), we obtain the equation in Lemma <ref>, which verifies (<ref>) as desired.
Finally, to determine the range of the sum in (<ref>), we observe that (k, p-k | k, q-k) is nonzero only when 1 ≤ k ≤min{p,q}.
The following corollary settles the palindromic part of Conjecture 1 from <cit.>.
For all positive integers n, the polynomial N_n(t) is palindromic.
Since p=q=n, the coefficient of t^n in (<ref>) equals (n,0 | n,0) = 0; therefore N_n(t) has degree n-1, and total degree 1 + (n-1) = n.
By the definition of palindromicity, we need to show that
(k, n-k | k, n-k) = (n-k, k | n-k, k)
for all 1 ≤ k ≤ n-1, and this follows directly from Lemma <ref>.
Recall from Section <ref> the case n=4:
N_4(t) = 20t + 56t^2 + 20t^3.
By Theorem <ref>, the coefficients are sums of sizes of symmetric differences of ordered pairs of partitions, where the partitions range over (k × (4-k)), for k=1,2,3.
The three tables below give the values |λ △ μ|, taken over all ordered pairs (λ,μ) of partitions with the requisite shapes:
(1 × 3):
        ∅   1   2   3
  ∅     0   1   2   3
  1     1   0   1   2
  2     2   1   0   1
  3     3   2   1   0        sum = 20

(2 × 2):
        ∅    1    2    1,1  2,1  2,2
  ∅     0    1    2    2    3    4
  1     1    0    1    1    2    3
  2     2    1    0    2    1    2
  1,1   2    1    2    0    1    2
  2,1   3    2    1    1    0    1
  2,2   4    3    2    2    1    0        sum = 56

(3 × 1):
          ∅   1   1,1  1,1,1
  ∅       0   1   2    3
  1       1   0   1    2
  1,1     2   1   0    1
  1,1,1   3   2   1    0        sum = 20
§ EXPLICIT FORMULAS VIA MINUSCULE LATTICES OF TYPE A
Armed with the combinatorial interpretation in Theorem <ref>, we show in this section that a recent result by Defant et al. <cit.>, concerning the Wiener index of minuscule lattices, yields the explicit formula for the coefficients of N_n(t).
We use this to prove the unimodality of N_n(t).
The connection goes even further.
By reinterpreting the results of <cit.>, we are now able to close the recursion in <cit.> for the expected value of the one-dimensional EMD.
§.§ Wiener index of minuscule lattices of Type A
The minuscule lattices form a special class of Hasse diagrams, in which the underlying poset is the set of weights of a minuscule representation of a complex simple Lie algebra 𝔤.
(In this paper we appeal only to the combinatorial structure of certain of these lattices, and therefore the following paragraph may be omitted without loss of continuity.)
A minuscule weight of a complex simple Lie algebra 𝔤 is a minimal element ϖ in the poset of dominant integral weights of 𝔤 (in the partial order whereby ϖ≤ϖ' if ϖ' - ϖ is a nonnegative sum of simple roots).
For ϖ minuscule, and V_ϖ the irreducible representation of 𝔤 with highest weight ϖ, the
set of weights of V_ϖ is precisely the Weyl group orbit of ϖ, and its Hasse diagram is said to be a minuscule lattice; see <cit.> for details.
Minuscule lattices have various incarnations in the theory of Hermitian symmetric pairs (𝔤, 𝔨).
For example, minuscule lattices are the Hasse diagrams (with respect to the Bruhat order) of the minimal-length representatives of the right cosets of the Weyl group of 𝔨 inside the Weyl group of 𝔤.
Equivalently, minuscule lattices are Hasse diagrams of the poset of lower-order ideals of the positive noncompact roots of 𝔤.
See <cit.> for an expansive exposition on the subject.
Let G = (V,E) be a finite connected graph, and let d(x,y) denote the distance in G between x,y ∈ V.
The Wiener index of G is defined to be the sum of the distances between all ordered pairs of vertices:
d(G) ∑_(x,y) ∈ V × V d(x,y).
By extension, if P is a finite poset, then d(P) is defined to be the Wiener index of the Hasse diagram of P.
When 𝔤 is of Type A (in the Killing–Cartan classification), a minuscule lattice is the Hasse diagram of the poset of order ideals in a rectangle, say of dimensions a × b, ordered by inclusion.
In <cit.>, this poset is denoted by P_a,b.
The key fact, for our purposes, is that P_a,b≅(a × b) as posets, if we order Young diagrams by inclusion.
In other words, in (a × b), we declare μ≤λ if λ is obtained by adding boxes to μ.
Hence ∅ is the minimal element of (a × b), while (b^a) is the maximal element.[This fact can be restated in somewhat different, but quite standard, combinatorial language:
if 𝒴 denotes Young's lattice (the Hasse diagram of all Young diagrams ordered by inclusion), then we can identify P_a,b≅(a × b) with the sublattice of 𝒴 generated by the rectangular Young diagram with a rows and b columns.]
It is easy to see (and is noted in <cit.>*1.3) that distance on the Hasse diagram of P_a,b measures the symmetric difference of the corresponding Young diagrams;
after all, neighboring elements in the Hasse diagram differ by exactly one box.
Therefore, given λ, μ ∈ (a × b), we have d(λ,μ) = |λ △ μ|.
The upshot of this is that the Wiener index of (a × b) equals the sum of the sizes of the symmetric differences of all ordered pairs.
Translating between <cit.> and our present paper, this means that
d(P_a,b) = (a,b) ≔ (a,b | a,b).
(From now on we will need only the specialized shorthand (a, b), since we consider ordered pairs of elements from the common set (a × b).)
By manipulating the generating functions of certain Motzkin paths, the authors of <cit.> derive the following formula (Thm. 1.2) for the Wiener index of P_a,b:
d(P_a,b) = \frac{ab}{4a + 4b + 2} \binom{2a+2b+2}{2a+1}.
Combining this result with (<ref>) and our Theorem <ref>, we obtain the following explicit formula for the coefficients of N_n(t):
For all positive integers n, we have
N_n(t) = \frac{1}{4n + 2} ∑_{k=1}^{n-1} k(n-k) \binom{2n+2}{2k+1} t^k.
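The coefficients are immediate to compute; the following sketch (ours) reproduces the table of Section <ref>:

    from math import comb

    def N_coeffs(n):
        # The coefficient of t^k is k(n-k) * binom(2n+2, 2k+1) / (4n+2).
        return [k * (n - k) * comb(2 * n + 2, 2 * k + 1) // (4 * n + 2)
                for k in range(1, n)]

    print(N_coeffs(4))  # [20, 56, 20]
    print(N_coeffs(5))  # [40, 216, 216, 40]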
We now prove the second part of Conjecture <ref>:
For all positive integers n, the polynomial N_n(t) is unimodal.
Using Theorem <ref>, it suffices to show that both factors k(n-k) and \binom{2n+2}{2k+1} are nondecreasing as functions of k, for 1 ≤ k < ⌊ n/2 ⌋.
In fact, both factors are strictly increasing.
In particular, it is well known that the binomial coefficients \binom{a}{b} are increasing as functions of b, for b < a/2, and therefore \binom{2(n+1)}{2k+1} is increasing for k < n/2.
Moreover, treating k(n-k) as a real function of k, we have d/dk [k(n-k)] = n-2k, which is positive for all k < n/2.
This completes the proof.
§.§ Expected value of the EMD
Not only does the formula (<ref>) lead to the complete description of N_n(t) in Theorem <ref>, but we now show that it also solves the recursion for the expected value of the EMD, which was the main result of <cit.>.
The paper <cit.>*eqn. (10) employs the bijection
(s,n) ⟶(s × (n-1)),
α ⟼ ((n-1)^{α_1}, (n-2)^{α_2}, …, 2^{α_{n-2}}, 1^{α_{n-1}}),
where the exponents denote repeated parts in a partition.
Now let α,β∈(s,n), and suppose that α↦λ and β↦μ under this bijection.
It is shown in <cit.>*Prop. 3.1 that the EMD is precisely the size of the symmetric difference of the corresponding Young diagrams:
EMD(α,β) = |λ △ μ|.
Comparing this with (<ref>) and (<ref>), it follows that
[t^s] H'_n(t) = ∑_{(α,β)} EMD(α,β) = ∑_{(λ,μ)} |λ △ μ| = (s, n-1),
where α,β∈(s,n) and λ,μ∈(s × (n-1)).
Therefore, the expected value of the EMD is simply (s,n-1) divided by the square of |(s,n)|, or equivalently, divided by the square of |(s × (n-1))|.
Until <cit.>, the unknown quantity here was (s,n-1); now, however, in light of (<ref>) and (<ref>), we are able to write down a surprisingly simple formula for this expected value:
Let (α, β) ∈ (s,n) × (s,n) be chosen uniformly at random.
Then
𝔼[EMD(α,β)] = \frac{s(n-1)}{4s+4n-2} · \binom{2s+2n}{2s+1} / \binom{s+n-1}{s}^2.
We have
𝔼[EMD(α,β)] ≔ ∑_{(α,β)} EMD(α,β) / |(s,n)|^2
= (s, n-1) / \binom{s+n-1}{s}^2 by (<ref>) and (<ref>)
= \frac{s(n-1)}{4s+4(n-1)+2} \binom{2s+2(n-1)+2}{2s+1} / \binom{s+n-1}{s}^2 by (<ref>) and (<ref>),
which yields the expression in the theorem.
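The closed form is easily checked by brute force for small s and n; in the sketch below (ours), the one-dimensional EMD of a pair of compositions is computed as the sum of absolute differences of partial sums:

    from itertools import combinations
    from math import comb

    def compositions(s, n):
        # Weak compositions of s into n parts, via stars and bars.
        for bars in combinations(range(s + n - 1), n - 1):
            prev, parts = -1, []
            for b in bars:
                parts.append(b - prev - 1)
                prev = b
            parts.append(s + n - 2 - prev)
            yield parts

    def emd(a, b):
        cum = tot = 0
        for x, y in zip(a, b):
            cum += x - y
            tot += abs(cum)
        return tot

    def expected_emd(s, n):  # the closed form above
        return (s * (n - 1) / (4 * s + 4 * n - 2)
                * comb(2 * s + 2 * n, 2 * s + 1) / comb(s + n - 1, s)**2)

    s, n = 3, 3
    pairs = [(a, b) for a in compositions(s, n) for b in compositions(s, n)]
    print(sum(emd(a, b) for a, b in pairs) / len(pairs), expected_emd(s, n))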
The explicit formula in Theorem <ref> gives us an easy way to compute the expected value of the EMD on ordered pairs of probability distributions on [n]:
namely, divide by s so that α/s and β/s are probability distributions with rational values, and then take the limit as s →∞.
This was the method used in <cit.> to convert their recursive formula from compositions to probability distributions.
In this case, starting with the expression in Theorem <ref>, it can be shown that
lim_{s → ∞} (1/s) · 𝔼[EMD(α,β)] = √π (n-1) Γ(n) / (4 Γ(n + 1/2)).
This is, although somewhat disguised, equivalent to the expected value formula derived in <cit.>*Thm. 1, namely
2^{2n-3} (n-1) (n-1)!^2 / (2n-1)!,
which was obtained analytically by solving the recursion in <cit.>.
§ FURTHER CONJECTURES REGARDING N_N(T)
The following symmetric difference identity is proved in <cit.>*eqn. (26):
S(n) ≔ ∑_{(X,Y) ∈ 2^[n] × 2^[n]} |X △ Y| = n · 2^{2n-1},
where as usual 2^[n] denotes the power set of [n].
This is the sequence A002699 in the OEIS, which also gives an alternative formula S(n) = ∑_{k=1}^n k \binom{2n}{k}.
Evidence suggests that this sequence (offset by 1) gives the sum of the coefficients of N_n(t).
Below are results for the first few values of n:
n        0   1   2    3    4     5      6      7      8
N_n(1)   0   0   2    16   96    512    2560   12288  57344
S(n)     0   2   16   96   512   2560   12288  57344  262144
For all positive integers n, we have N_n(1) = S(n-1).
In light of Theorem <ref>, which expresses the individual coefficients of N_n(t), we expect that Conjecture <ref> can be proved by means of some clever manipulations with binomial coefficients.
We are most interested, however, in a bijective proof of some kind, since it seems that the symmetric differences on each side can hardly be coincidental.
The polynomial N_n(t) has only real roots.
This conjecture is supported by computer evidence for n ≤ 1000.
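Both this conjecture and Conjecture <ref> are straightforward to test for small n, reusing N_coeffs from the sketch above; the root computation below is floating-point, so it is only a heuristic check:

import numpy as np

def S(n):                           # A002699: n * 2^(2n-1)
    return n * 2 ** (2 * n - 1)

for n in range(2, 20):
    c = N_coeffs(n)
    assert sum(c) == S(n - 1)       # Conjecture: N_n(1) = S(n-1)
    if len(c) > 1:                  # real-rootedness of N_n(t)/t
        roots = np.roots([float(x) for x in c[::-1]])
        assert abs(roots.imag).max() < 1e-6 * abs(roots).max()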
Real-rooted polynomials have garnered significant interest in combinatorics; famous examples include the Eulerian polynomials (which are the generating polynomials of the descent statistic on the symmetric group, or more generally on multiset permutations), graph matching polynomials, rook polynomials, and polynomials with interlacing or interweaving roots.
We recommend the excellent survey <cit.>.
|
http://arxiv.org/abs/2307.01826v1
|
20230704170933
|
On computing finite index subgroups of PSL(2,Z)
|
[
"Nicolás Mayorga Uruburu",
"Ariel Pacetti",
"Leandro Vendramin"
] |
math.NT
|
[
"math.NT",
"math.GR",
"11F06, 05C85"
] |
[N. Mayorga]FAMAF-CIEM, Universidad Nacional de
Córdoba. C.P:5000, Córdoba, Argentina.
nmayorgau@unc.edu.ar
[A. Pacetti]Center for Research and Development in Mathematics and Applications (CIDMA),
Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal
apacetti@ua.pt
[L. Vendramin]
Department of Mathematics, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussel, Belgium
Leandro.Vendramin@vub.be
N.M.U. was partially supported by a CONICET grant; A.P. was partially supported by the Portuguese Foundation for Science and Technology (FCT) within project UIDB/04106/2020 (CIDMA); L.V. was supported by project OZR3762 of Vrije Universiteit Brussel.
[2010]11F06, 05C85
We present a method to compute finite index subgroups of _2(). Our strategy follows Kulkarni’s ideas, the main contribution being a recursive method to compute bivalent trees as well as their automorphism group. As a concrete application, we compute all subgroups of index up to 20. We then use this database to produce tables with several arithmetical properties.
On computing finite index subgroups of _2()
Leandro Vendramin
August 1, 2023
===========================================
§ INTRODUCTION
Let PSL_2(ℤ) be the modular group, obtained as the quotient of the group of all two-by-two matrices with integral entries and determinant 1 by the subgroup
{±([ 1 0; 0 1 ])}. Determining all subgroups of PSL_2(ℤ) of a given finite index is a classical problem. In a remarkable article, Newman (<cit.>) computed the number of subgroups of index up to a hundred (including the asymptotic behavior of the counting function). The group PSL_2(ℤ) acts on the set of subgroups by conjugation, and it is also natural to ask for the subgroups (or their number) up to PSL_2(ℤ)-equivalence. Similarly, the group PGL_2(ℤ) acts on PSL_2(ℤ), so the same sort of questions can be considered for PGL_2(ℤ)-equivalence classes. In <cit.> the author gave a method to compute the number of such equivalence classes, whose sequence corresponds to the integer sequence A121350 (see <https://oeis.org/A121350>).
From a computational point of view, it is challenging to actually compute tables of subgroups of PSL_2(ℤ) (modulo conjugation). To our knowledge, Kulkarni (in <cit.>) was the first to propose an algorithm to compute them. In the aforementioned article (Appendix 1) he even gives a list of subgroups of index up to 6 (for these small indices there are only a few of them). Let us give some details on Kulkarni's approach, which also shows the relevance of the problem.
Let
ℍ^∗ = {z ∈ ℂ : Im(z) > 0} ∪ {∞} ∪ ℚ
denote the extended complex upper half plane. The group SL_2(ℤ) acts on ℍ^∗ by Möbius transformations. Since the matrix
([ -1 0; 0 -1 ])
acts trivially, we get a well defined action of the modular group PSL_2(ℤ). The typical fundamental domain 𝒯 for the action of PSL_2(ℤ) on ℍ^∗ is the hyperbolic triangle with vertices ρ = exp(2πi/6), ρ^2 and ∞. The half-plane has an extra transformation ι (as a real variety) given by ι(z) = -z̄. This allows us to extend the action of PSL_2(ℤ) on ℍ^∗ to an action of PGL_2(ℤ) on ℍ^∗ given by
([ a b; c d ]) · z = (az+b)/(cz+d) if ad-bc = 1, and (az̄+b)/(cz̄+d) if ad-bc = -1.
The hyperbolic triangle 𝒟 with vertices ρ, i and ∞ (see Figure <ref>) is a fundamental domain for the action of PGL_2(ℤ) on ℍ^∗. Clearly 𝒯 = 𝒟 ∪ ι(𝒟).
Following <cit.>, in the present article we use the following terminology. The points of ℍ^∗ in the orbit of i (resp. in the orbit of ρ) under the action of PSL_2(ℤ) are called even or red vertices (resp. odd or blue vertices). The points in the orbit PSL_2(ℤ)·∞ are called cusps.
The PGL_2(ℤ)-translates of the triangle 𝒟 provide a tessellation of ℍ^∗. The hyperbolic geodesic of ℍ joining i with ∞ (resp. joining ρ with ∞) is called an even edge (resp. odd edge), and so are all of its translates by PSL_2(ℤ). The hyperbolic geodesic of ℍ joining i with ρ is of finite length. Its PSL_2(ℤ)-translates are called f-edges.
Let Γ be a finite index subgroup of PSL_2(ℤ). Then a fundamental domain for its action is given by a union of translates of 𝒯. A key idea introduced by Kulkarni in <cit.> was to replace the fundamental domain 𝒯 by
𝒟 ∪ ι(S·𝒟),
where
S = ([ 0 -1; 1 0 ]). In this way, any fundamental domain obtained as a union of translates of this new domain will have no f-edges on its border. Furthermore, if the fundamental domain has nice properties, the set of f-edges inside the fundamental domain forms a graph with an orientation on it, and one expects to recover the subgroup (up to conjugation) from this graph. This allowed Kulkarni to give three different ways to describe a finite index subgroup:
* Via the graph of f-edges on the quotient Γ\ℍ^∗, what Kulkarni called a bipartite cuboid graph.
* Via removing from the whole graph of f-edges those vertices
having valence 2 and making some cuts to the graph in order to get
a tree, what Kulkarni called a tree diagram.
* Via listing the cusps appearing on a “special” fundamental domain, and listing how the paths going through them are glued together. This was called a Farey symbol by Kulkarni.
For theoretical purposes, the first approach is the best one, as on
the one hand it contains all the information needed to recover the subgroup, and on the other it is in bijection with the set of conjugacy classes of subgroups. However, the second one is better for
computational purposes (which is the goal of the present article)
since the tree has a smaller number of edges.
In <cit.> the authors gave an alternative method to describe a finite-index subgroup of PSL_2(ℤ), following the ideas introduced by Millington in <cit.>. Roughly speaking, if Γ is a subgroup of PSL_2(ℤ) of index d, then PSL_2(ℤ) acts by left multiplication on right coset representatives of Γ\PSL_2(ℤ) (in particular, the action of each element of PSL_2(ℤ) is given by a permutation in 𝕊_d). To describe the action completely it is enough to determine the permutations attached to a set of generators of PSL_2(ℤ), for example the permutation attached to the matrix S introduced before and the one attached to the matrix
T = ([ 1 1; 0 1 ])
(the usual translation matrix). This way of representing a subgroup is also called a “passport” in the literature. The method was extended in <cit.>, where a table containing the permutations attached to all subgroups (up to PSL_2(ℤ)-equivalence) of index up to 12 (and up to index 18 in electronic format) is given. Strömberg's computations were extended in different ways (including computing tables of modular forms) in <cit.> and <cit.>.
The purpose of the present article is to provide an algorithm to compute finite index subgroups (up to PSL_2(ℤ)- and PGL_2(ℤ)-equivalence) via computing tree diagrams. A bi-valent tree is a tree whose set of valences has two elements. Our main contribution (of interest on its own) is to provide a recursive algorithm to construct bi-valent trees. Along the way we prove interesting properties of the automorphism group of a bi-valent tree (see Theorem <ref>), which are crucial to speed up the algorithms used to compute tree diagrams.
Since our final goal is to provide tables of finite index subgroups of PSL_2(ℤ) (up to both PSL_2(ℤ)- and PGL_2(ℤ)-equivalence), we also provide the algorithms needed to construct the subgroup attached to a tree diagram (as well as its passport representation). Most of these algorithms are already part of the literature. Our humble contribution is to include missing details (that appeared while writing the code) regarding “boundary issues”, as well as correcting a few mistakes. Of particular interest is the algorithm to construct the generalized Farey symbol attached to a tree diagram (see Theorem <ref>). Although such an algorithm plays a crucial role in <cit.>, there is no description of it in that article, so we take the opportunity to present it here and prove its correctness.
Our algorithms are implemented in the computer software
<cit.>. The
code can be downloaded from
<https://github.com/vendramin/subgroups> with DOI .
The GitHub repository contains some precomputed data as well as a tutorial
(with examples) on how to use the code. The package depends on the packages:
for graph theory, <cit.> for some calculations related
to generalized Farey symbols, and for speeding up
recursive functions.
The article is organized as follows: Section <ref>
contains a quick review of Kulkarni's main results used in the
present article. It includes some definitions (used during the
article) as well as the correspondence between subgroups and some
particular fundamental domains (called special polygons). This result
justifies our computation of tree
diagrams. Section <ref> contains the main recursive
algorithm to compute bi-valent trees as well as their automorphism
group. Although our trees have valence {1,3}, the proven results hold for
any tree with valence set {1,n}, n being arbitrary. Of
particular interest is Theorem <ref>, which describes the
automorphism group of a bi-valent tree as a semi-direct product of two
other ones. The section includes algorithms used to add an orientation
and a coloring on the set of external vertices to a bi-valent tree.
Section <ref> contains the definition of a
generalized Farey symbol (g.F.s. for short), which consists of the set
of cusps of a special polygon. As previously mentioned, a given
g.F.s. together with the gluing of the sides allow to give an
alternative description of a finite index subgroup of
_2(). This section contains an algorithm to compute a
g.F.s. attached to a tree diagram
(Theorem <ref>), providing a way to go
from the second description to the third one. A tree diagram together
with a g.F.s. is called a Kulkarni diagram. The same section
contains different algorithms to, given a Kulkarni diagram
corresponding to a subgroup , compute arithmetic information of
the quotient curve \^∗ (like cusp
representatives and width of the cusps, genus, ramification points,
etc) as well as information of the subgroup itself (including a
reduction algorithm needed to solve the word problem, and finding a
set of coset representatives).
Section <ref> contains the definition of a passport and
algorithms to compute the passport attached to a Kulkarni diagram (to
compare with other tables in the literature). At last,
Section <ref> contains some tables and interesting facts that
can be deduced from the data we computed.
It should be clear to the reader how crucially Kulkarni's article influenced our work, so we strongly recommend looking at <cit.>, which contains more details on the correspondences between tree diagrams, Farey symbols and bipartite cuboid graphs.
§ A QUICK GUIDE TO KULKARNI'S APPROACH
Recall the following definition from <cit.>.
A bipartite cuboid graph is a finite graph whose vertex set
is divided into two disjoint subsets V_0 (the red ones) and V_1
(the blue ones) such that
* every vertex in V_0 has valence 1 or 2,
* every vertex in V_1 has valence 1 or 3,
* there is a prescribed cyclic order on the edges incident at each vertex of valence 3 in V_1,
* every edge joins a vertex in V_0 with a vertex in V_1.
Let Γ be a finite index subgroup of PSL_2(ℤ). The quotient Γ\ℍ^∗ is a compact oriented real surface, with a tessellation given by the translates of 𝒟. The graph of f-edges of the surface is an example of a bipartite cuboid graph, where V_0 corresponds to the red vertices (i.e. the translates of i) while V_1 corresponds to the blue vertices (the translates of ρ).
We will only use bipartite cuboid graphs twice in the present article: once in Example <ref>, to prove that two non-isomorphic tree diagrams can give conjugate subgroups, and also while computing the generalized Farey symbol attached to a tree diagram. In contrast to a bipartite cuboid graph, the notion of a tree diagram appears while working with a fundamental domain (a special polygon) for the quotient. The advantage of this approach is that we do not need to glue the edges together; it is enough to store which edges get identified and how. In this way we can work with a tree instead of a graph.
A special polygon is a convex hyperbolic polygon whose boundary ∂ is a union of even and odd edges, satisfying the following properties:
* The even edges in ∂ come in pairs, each pair forming
a complete hyperbolic geodesic called even line.
* The odd edges in ∂ come in pairs. The edges in each pair meet at an odd vertex, making an internal angle of 2π/3.
* There exists an involution on the edges of ∂ so that no edge is carried into itself.
* The involution sends an odd edge into another odd edge, making an internal angle of 2 π/3 between themselves.
* Let e_1,e_2 be two even edges in ∂ forming an even line. Then either e_1 is paired to e_2, or else {e_1,e_2} forms a free side of ∂ and this free side is paired to another such free side of ∂.
* 0 and ∞ are two of the vertices of the polygon.
Note that any special polygon has a canonical orientation induced from the “counterclockwise” orientation of ℍ. The action of PSL_2(ℤ) on a special polygon preserves the orientation (since it corresponds to a holomorphic function). However, the action of the matrix
([ -1 0; 0 1 ])
has the effect of reversing the orientation of our special polygon. The choice of the opposite orientation corresponds to the action of ι on ℍ^∗. The choice of the counterclockwise orientation or its inverse accounts for the difference between equivalence classes of subgroups up to PSL_2(ℤ)-conjugation or up to PGL_2(ℤ)-conjugation. This will prove crucial later (see Remark <ref>).
A special polygon is a fundamental domain for the subgroup Γ generated by its side-pairing transformations, and these transformations form an independent set of generators for Γ. Conversely, every subgroup of finite index in PSL_2(ℤ) admits a special polygon as a fundamental domain.
See page 1055 of <cit.>.
A special polygon is the union of translates of 𝒟, so as explained before, we can look at its tree of f-edges. Properties (4) and (5) of a special polygon imply that the end points of the f-edge tree will be points that are one of:
* the intersection of two odd edges, or
* the intersection of two even edges paired together, or
* the intersection of two even edges forming a free side.
Any vertex of the tree has valence 1 (corresponding to an end
point), 2 or 3. To avoid redundant information on the tree, one
can remove the vertices having valence 2, obtaining the so called
tree diagrams. A great advantage of the tree diagram is that
its number of vertices tends to be much smaller than that of the
original bipartite cuboid graph. Remark <ref>
implies that the vertices with valence 3 have a natural orientation.
A tree diagram is a tree with at least one edge such that
* all internal vertices are of valence 3,
* there is a prescribed cyclic order on the edges incident at each internal vertex (orientation),
* the terminal vertices are partitioned into two possibly empty subsets R and B (red and blue vertices),
* there is an involution ι on R.
By (n,3) we will denote the set of tree diagrams with n internal vertices.
It is not true in general that a tree diagram and the tree diagram obtained by inverting the orientation at every internal vertex (i.e. replacing each permutation by its inverse) are isomorphic. For example, the following planar (with the anti-clockwise orientation) tree diagrams in (2,3) are not isomorphic. They correspond to two subgroups of PSL_2(ℤ) which are not conjugate under PSL_2(ℤ), but are conjugate under PGL_2(ℤ).
The way to relate a tree diagram with a bipartite cuboid graph is as follows: remove from the bipartite cuboid graph all vertices of valence 2 (connecting the remaining endings) and make some cuts to the graph so that it becomes a tree (see <cit.> for details). Conversely, given a tree diagram, color blue all of its internal vertices (i.e. they are odd vertices) and keep the coloring of the tree diagram on the external vertices. If two consecutive vertices of the tree are blue, enlarge the tree by adding one extra vertex between them and paint it red. Finally, we need to glue together the identified external vertices: identify two external vertices v and w in R if ι(v) = w, where ι is the involution on R. It is not hard to verify that the resulting graph is a bipartite cuboid graph.
Following Kulkarni's notation, the way we encode the involution
ι is as follows: if v is a vertex fixed by ι then
we just save its color. Otherwise, there exists w ≠ v such that
ι(v)=w. In this case, we add a label to each of the two
vertices.
As will be described in Section <ref>, there
is a finite-to-one surjective map between the set of tree diagrams
and the set of conjugacy classes of finite index subgroups of
_2(). This map unfortunately is not injective.
Consider the following tree diagrams with anti-clockwise orientation:
These two tree diagrams are not isomorphic: the right tree satisfies
that if we move from a red vertex (following the orientation) we end
on a free one, while this is not true for the left one. The bipartite cuboid graphs attached to each of them are the following:
where the edges labeled with the same name are glued together. The
two new graphs are isomorphic under the map
([ 1 2 3 4 5 6; 2' 1' 6' 5' 3' 4' ]).
It is easy to verify that this map preserves orientations.
This is the first example where two non-isomorphic tree diagrams give conjugate subgroups (corresponding to subgroups of index 6). However, it seems that for larger index subgroups (when there are many free sides) the number of tree diagrams is much larger than the number of PSL_2(ℤ)-classes of subgroups; for example, there are 28 tree diagrams corresponding to subgroups of index 9, while there are only 14 PSL_2(ℤ)-equivalence classes. See the data in Table <ref>.
Working with bipartite cuboid graphs has the advantage that their isomorphism classes are in bijection with PSL_2(ℤ)-conjugacy classes of subgroups of PSL_2(ℤ), but, as already mentioned, it has the disadvantage of requiring much larger graphs (which are not trees in general).
Let a tree diagram with involution ι be given. The set of its external vertices is naturally decomposed as the disjoint union of three sets
V_e = B ∪ R_0 ∪ R_1,
where B are the blue vertices, R_0 are the red vertices fixed by
the involution, and R_1 are the red vertices not fixed by the
involution (an even set). Let b =|B| (its cardinality), r = |R_0|
and f = |R_1|/2. The main goal of the next section is to
provide an algorithm to compute (equivalence classes of) tree diagrams
with parameters (b,r,f).
§ BI-VALENT TREES
A bi-valent tree is a tree satisfying that the valence
function on vertices takes at most two different values.
Clearly the valence function on any tree with at least three vertices
must take at least two different values, hence the valence function on
a bi-valent tree with at least three vertices takes precisely two
different values.
Let T be any bi-valent tree. The set of
external vertices (resp. internal vertices) that we
denote by V_e (resp. V_i) is the set of vertices of T having
valence 1 (resp. having valence >1).
Denote by (m,n) the set of bi-valent trees
(up to isomorphism) with valences set {1,n} made of m internal vertices.
There is a unique bi-valent tree T in (2,3) given by the graph shown in Figure <ref>.
Its adjacency matrix equals
(
[ 0 1 1 1 0 0; 1 0 0 0 1 1; 1 0 0 0 0 0; 1 0 0 0 0 0; 0 1 0 0 0 0; 0 1 0 0 0 0 ]),
where the red block is precisely the adjacency matrix of the internal
vertices.
Let T be a bi-valent tree, satisfying that the valence of any vertex belongs to the set {1,n} for some n>1. Then
|V_e| = (n-2)|V_i|+2.
By induction on the size |V_i|. If |V_i|=1 then there must be
n external vertices (all joined to the unique internal one), hence
the result. Suppose that m = |V_i|>1 and that the result
holds for all bi-valent trees with at most m-1 internal
vertices. Let v,w be internal vertices joined by an edge (such a
pair always exists because m>1). Cut the tree into two disjoint
trees T_1, T_2, by removing the edge (v,w) and add to each of
the new trees an external edge joining v (respectively w). Now
each tree is again a bi-valent tree, say with r and s internal
vertices (so r+s = m). The inductive hypothesis implies that
|V_e(T_1)| = |V_i(T_1)|(n-2) + 2 and |V_e(T_2)| = |V_i(T_2)|(n-2) + 2.
But | V_i(T)| = | V_i(T_1)| + | V_i(T_2)| and
|V_e(T)| = |V_e(T_1)| + |V_e(T_2)|-2 (because we add a new external
vertex to both T_1 and T_2), hence the result.
The lemma implies that if T ∈(m,n) then its number of vertices equals
|V(T)| = (n-1)m + 2.
§.§ Automorphisms of bi-valent trees
Let T be a bi-valent tree, and let T_i = (V_i,E_i) be the sub-tree
of internal vertices, namely the tree whose set of vertices
V_i = V_i(T) is the set of internal vertices of T, and whose set of
edges E_i is the set of all edges of T between internal
vertices. Given an automorphism σ of T, it makes sense to restrict σ to the subgraph T_i, giving a sequence
1 ⟶ Aut_e ⟶ Aut(T) \xrightarrow{Res} Aut(T_i),
where Res is the restriction map (a group morphism) and Aut_e is (by definition) its kernel. It turns out that the sequence (<ref>) is exact and furthermore it splits. Before proving these facts we need a characterization of the subgroup Aut_e.
Recall that given two vertices of a tree, there exists a unique reduced path
joining them, so there is a natural definition of distance between
vertices. For later purposes, if the bi-valent tree T has at least three
vertices, we denote by
Φ V_e → V_i,
the function which assigns
to an external vertex v the unique internal vertex which is at
distance one from it.
Let v∈ V_e be an external vertex, and let (v) be the
set of external vertices at distance at most 2 from v. Since
v ∈(v), (v) ≠∅. Consider the following function
m : V_e → {1,…,n},  m(v) = |(v)|.
For 1 ≤ i ≤ n, let
V_e^(i) :={v ∈ V_e | m(v) = i}.
Then we can decompose the set of external vertices as the disjoint
union of the sets V_e^(i). For i ∈ [1,…,n], define
r(i) = max{ |S| : S ⊆ V_e^{(i)} and d(v,w) ≠ 2 for all v,w ∈ S }.
There is a non-canonical group isomorphism:
Aut_e ≃ ∏_{i=1}^{n} 𝕊_i^{r(i)},
where 𝕊_i denotes the symmetric group on i elements.
Let σ ∈ Aut(T) be an automorphism in the kernel of the restriction map and let v ∈ V_e^{(i)} be an external vertex. Let w = Φ(v) be the internal vertex adjacent to v. Since σ is in the kernel of the restriction map, σ(w) = w.
Then σ(v) is an external vertex which is also adjacent to
w (since any automorphism preserves distances). In particular,
σ(v) ∈(v) ⊆ V_e^(i) and
σ induces a permutation of the elements of (v) (a
set with i elements). By definition, there exist elements w_1,…,w_{r(i)} such that the set V_e^{(i)} can be written as the disjoint union
V_e^{(i)} = ⊔_{j=1}^{r(i)} (w_j),
so the action of σ on V_e^{(i)} can be represented as r(i) permutations in 𝕊_i. It is clear that such a map is bijective,
i.e. permutations on the (disjoint union of the) sets
(w_j) induce an automorphism of the tree T which fixes
all internal vertices.
Let T and T' be two bi-valent trees in (m,n). For the
purposes of the present article, an endomorphism between T
and T' is a bijective map between both the set of vertices of T to
the set of vertices of T' and between the set of edges of T to the
set of edges of T' (with the usual compatibility condition). We
denote by (T,T') the set of all such maps.
For computational purposes, all trees considered will have its set of edges
labeled by positive integers. This provides a total order on the
graph's set of vertices.
Let T and T' be two trees with a total order on their set of
vertices. We say that a map σ∈(T,T') is 2-ordered
if it satisfies that for all pair of external vertices
v, w ∈ V_e(T) at distance two, it holds that if v < w then
σ(v) < σ(w).
Let T and T' be bi-valent trees in (m,n) with a total order
on their sets of vertices. Let T_i (resp. T_i') be the sub-tree of
inner vertices of T (resp. T_i' be the sub-tree of inner
vertices of T'). Let ϕ T_i → T_i' be a morphism of
graphs. Then ϕ can be extended in a unique way to a 2-ordered
morphism ψ of (T,T'). Furthermore, if T_i = T_i' and
T = T', the natural map ψ(T_i) →(T) is a
group morphism.
To avoid confusion, we denote by the valence function on
either tree T or T' and by _i the valence function on
their sub-tree of internal vertices. Let v ∈ V_e(T) be any
external vertex and let w = Φ(v) be the internal vertex
adjacent to it. Since T is bi-valent and w ∈ T_i, (w)=n
and _i(w)<n (because v is not in the internal sub-tree). The
map ϕ preserves valences (since it is a morphism from T_i to
T_i') hence _i(ϕ(w))<n and so it must be joined to an
external edge v' of T'. The two sets (v) and (v')
have the same cardinality (equal to n-_i(w)). Clearly any
extension of ϕ must send the elements of (v) to the ones
in (v'), and since both sets are ordered sets, there exists a
unique bijection between them preserving the order, providing the
required extension.
When T = T' and T_i = T_i', a priori the map ψ : Aut(T_i) → Aut(T) is only a map of sets; but since the composition of two 2-ordered maps is again a 2-ordered map, and by the uniqueness of the extension, the map ψ is actually a group morphism.
The sequence
1⟶Aut_e⟶Aut(T)⟶Aut(T_i)⟶1,
is exact. Furthermore, the map Res has a section ψ : Aut(T_i) → Aut(T), hence
Aut(T) ≃ Aut_e ⋊ Aut(T_i).
The existence of the section ψ follows from Proposition <ref>, providing the surjectivity of the map Res. The last statement is a well known fact about split short exact sequences of groups.
Recall that two graphs G_1 and G_2 are isomorphic if and only if their adjacency matrices are conjugate by a permutation matrix. The advantage of working with bi-valent trees is that the sub-tree of inner vertices determines the isomorphism class uniquely (so instead of computing with matrices of size ((n-1)|V_i|+2) × ((n-1)|V_i|+2), we can work with matrices of size |V_i| × |V_i|, where {1,n} is the valence set). In our applications (to construct subgroups of PSL_2(ℤ)) n will be 3, so this roughly halves the size of our matrices.
Two bi-valent trees T and T' are isomorphic if and only if
their internal vertices sub-trees T_i and T_i' are isomorphic.
Clearly, if ϕ T → T' is an isomorphism of graphs, its
restriction to T_i gives an isomorphism between T_i and
T_i'. Conversely, let ϕ be an isomorphism between T_i and
T_i'. By Proposition <ref>, the map ϕ extends
to a morphism between T and T', hence the statement.
§.§ Algorithm to compute bi-valent trees
Let m≥ 1 and n > 1 be positive
integers. Lemma <ref> implies that any element
of (m,n) has m(n-2)+2 external vertices. Trees will be
represented by their adjacency matrix, with the convention that on a
tree with m(n-2)+2 vertices, the first m labels are for the
internal vertices and the remaining ones for the external vertices.
Let T∈(m,n) and let v∈ V_e(T). The extension of T
at v, that will be denoted by (T,v), is the tree obtained
by adding n-1 new vertices v_1, …, v_n-1 to the set of
vertices of T, and n-1 edges to the set of edges of T, joining
v with v_i for each i=1,…,n-1.
It is clear from its definition that if T ∈(m,n) and
v ∈ V_e(T) then (T,v) ∈(m+1,n).
Let T ∈(m,n) with m>1. Then there exists v ∈ V_e such that |(v)| = n-1.
Recall the definition of the map Φ V_e → V_i, which assigns
to an external vertex v its adjacent internal vertex. If v ∈ V_e, then |(v)| = |Φ^-1(Φ(v))|. Clearly,
|V_e| = ∑_v ∈ V_i|Φ^-1(v)|.
If all sets (v) have cardinality smaller than n-1, then
|Φ^-1(v)| ≤ n-2, so
|V_e| ≤ (n-2)|V_i| < (n-2)|V_i|+2 = |V_e|,
a contradiction.
The following algorithm gives a set of representatives of
isomorphism classes of elements in (m,n) for m≥ 1, n>1:
If m=1, the tree has a unique internal vertex, hence n
external ones (joined to the internal vertex). In particular, there
is only one possible tree whose adjacency matrix is the given
one.
Assume now that m>1. If T ∈(m,n), by
Lemma <ref>, there exists a vertex
v ∈ V_e(T) such that (v) has exactly n-1 elements. Let
w = Φ(v) be the internal vertex adjacent to v. Let T' be
the tree obtained from T by removing all vertices of (v),
as well as the edges joining w with elements of
(v). Then T' ∈(m-1,n) and clearly T is isomorphic to
(T',w). In particular, T is obtained from extending an
element of (m-1,n) by an external vertex, so T is isomorphic
to an element of S.
Actually, the previous algorithm can be improved since clearly if v, w are
two external vertices of a bi-valent tree T at distance 2 (or
equivalently w ∈(v)), then the extension of T by v is
isomorphic to the extension of T by w. More generally:
Let T ∈(m,n), let v,w ∈ V_e and let φ∈(T)
be such that φ(v)=w. Then φ induces an
isomorphism
φ̃(T,v)→(T,w).
Let H=(T,v) and G = (T,w). Recall that H is obtained
by adding n-1 vertices to T, say v_1,…,v_n-1, and G is
obtained by adding n-1 vertices to T, say w_1,…,w_n-1. Let
φ̃ : H → G be the map given by φ̃(v) = φ(v) if v ∈ T, and φ̃(v_i) = w_i for i = 1,…,n-1.
It can easily be verified that φ̃ is a well-defined isomorphism between the two trees.
In the particular case when w ∈(v), the previous map
corresponds precisely to the flip isomorphism in Aut_e (the
kernel of the restriction map) sending v ↔ w and
fixing all other elements.
Define an equivalence relation on V_e by declaring that v ∼ w if there exists σ ∈ Aut_e such that σ(v) = w. The notation [V_e/∼] will be used to denote any set of representatives for the equivalence classes. Then we have the following improvement of Theorem <ref>.
The following algorithm gives the elements in (m,n) for
m≥ 1, n>1, up to isomorphism:
The last step of the algorithm is done by checking whether
two elements of S have isomorphic sub-trees of inner vertices or not.
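A minimal Python sketch of the basic generation loop (without the [V_e/∼] refinement), deduplicating via the sub-trees of inner vertices as in the corollary above; we lean on the generic isomorphism test of the networkx library, and all names are ours:

import networkx as nx

def external(T):
    return [v for v in T if T.degree(v) == 1]

def extend(T, v, n):
    # the extension (T, v): attach n-1 new external vertices at v
    T2 = T.copy()
    base = max(T2) + 1
    for i in range(n - 1):
        T2.add_edge(v, base + i)
    return T2

def inner(T):
    return T.subgraph([v for v in T if T.degree(v) > 1])

def bivalent_trees(m, n):
    reps = [nx.star_graph(n)]            # the unique tree for m = 1
    for _ in range(m - 1):
        new = []
        for T in reps:
            for v in external(T):
                C = extend(T, v, n)
                if not any(nx.is_isomorphic(inner(C), inner(D)) for D in new):
                    new.append(C)
        reps = new
    return reps

print([len(bivalent_trees(m, 3)) for m in range(1, 6)])    # [1, 1, 1, 2, 2]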
§.§ Orientations on bi-valent trees
Recall that the set of vertices on a bi-valent tree is the union of
the external vertices V_e (those with valence one) with the internal
ones V_i (those having valence greater than one).
An orientation on a bi-valent tree T ∈(m,n) is an
orientation on its set of internal vertices, i.e. give for each
vertex v ∈ V_i an ordering of the set (of size n) consisting of
vertices adjacent to v.
The way we represent an orientation algorithmically is by adding to
each internal vertex v ∈ V_i an n-cycle permutation σ_v.
Thus a bi-valent tree with an orientation is a pair consisting of the adjacency
matrix together with a list of n-cycles indexed by the internal vertices.
Let T be the unique tree (up to isomorphism) in (2,3) (from
Example <ref>). Since T is already presented as a planar
graph, an orientation is given by the anti-clockwise choice at each
vertex (as mentioned in Remark <ref>), namely
We represent this orientation of T by the 3-cycles
σ_1=(2,3,4) and σ_2=(1,5,6).
Let T ∈(m,n) be a bi-valent tree with an orientation and let p={v_1,…,v_n} be a
path between two external vertices. We say that the path
p is well oriented if for each internal
vertex v_i in the path p it holds that σ_{v_i}(v_{i-1}) = v_{i+1} (i.e. the orientation at the vertex v_i sends the vertex of the path prior to v_i to the subsequent one).
If n = 3 (i.e. all vertices have valence 1 or 3, which is the case we are really interested in), a path is well oriented at a vertex v_i if and only if the 3-cycle σ_{v_i} equals (v_{i-1}, v_{i+1}, w), where w is the third vertex adjacent to v_i.
Let T ∈(m,n) be a bi-valent tree with an orientation. Let v ∈ V_e
be an external vertex. Then there exists a unique w ∈ V_e and a
unique reduced and well oriented path p between v and w.
Let v_1 be the internal vertex adjacent to v, and let
v_2 = σ_v_1(v), the unique choice so that the path
{v,v_1,v_2} is well oriented. If v_2 is an external vertex,
then the path {v,v_1,v_2} is a well oriented path between
external vertices. Otherwise, let v_3 = σ_v_2(v_1) (once
again this is the unique choice so that the path {v,v_1,v_2,v_3}
is well oriented) and continue this process. Since the number of
vertices is finite, either at some point we reach an external
vertex, or we get a “period”, i.e. there exist i<j such that
v_i = v_j. Since T is a tree, the only way to get a path
starting and ending at the same vertex is that there exists an index
t, with i < t < j such that v_t-1 = v_t+1, i.e.
v_t+1 = σ_v_t(v_t-1) = v_t-1, which cannot happen
since the orientation at the vertex v_t is an n-cycle (so has no
fixed points).
Let T ∈(m,n) be an oriented bi-valent tree and let
v ∈ V_e be an external vertex. A vertex w ∈ V_e is to
the right of v if there exists a reduced and well oriented path p from
v to w. Analogously, a vertex w ∈ V_e is to the left
of v if there exists a reduced and well oriented path from w to v.
In particular, we can define the function
r V_e → V_e, r(v) = w (vertex to the right of v).
Respectively, we have a function l V_e → V_e sending v to
the vertex to the left of v. Recall our definition of the map
Φ V_e → V_i which assigns to an external vertex the
internal one adjacent to it.
Let T∈(m,n) a bi-valent tree with an orientation and v be
an external vertex. The following algorithm computes the value of
the function r at an external vertex v:
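In lieu of the pseudocode, here is a minimal Python sketch of this walk; the data layout is ours (adj maps a vertex to the set of its neighbours, and sigma[c][p] is the successor of p in the n-cycle at the internal vertex c):

def right_vertex(adj, sigma, v):
    prev, cur = v, next(iter(adj[v]))    # Phi(v): the unique neighbour of v
    while len(adj[cur]) > 1:             # keep walking while cur is internal
        prev, cur = cur, sigma[cur][prev]
    return cur                           # the external vertex r(v)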
Let T ∈(m,n) be a bi-valent tree with an orientation, and
let v ∈ V_e be an external vertex. Then
V_e = {r^i(v) : 0 ≤ i < |V_e|}.
Let R(v)={v=v_1,v_2,…,v_N} be the set of external vertices
to the right of each other (so r(v_N)=v). Let p_i be the reduced
and well oriented path between v_i and v_i+1 for
i=1,…,N-1 and p_N the reduced and well oriented path
between v_N and v_1. Each path is made of edges, so let
Ẽ be the union of all the (directed) edges appearing in
p_i for any 1 ≤ i ≤ N and let Ṽ be the set of
vertices of elements of Ẽ. Suppose we prove that for
each v ∈Ṽ which is an internal vertex of V it holds
that its n adjacent elements are also in Ṽ, then it
must be the case that Ṽ = V. The reason is that given any
w ∈ V, there is a unique reduced path joining v to w. By our
assumption, all edges of the path are elements of Ẽ, so
w ∈Ṽ as claimed.
To prove the stated property, note that since r(v_N) = v, the
compositum p_N ∘⋯∘ p_1 is a path between v_1 and
v_1, hence is the trivial path, i.e. in this walk there is a
complete cancellation of edges. In particular, if a directed edge
lies in Ẽ, the edge with the opposite direction must also be a member
of Ẽ.
Let b ∈Ṽ be an internal vertex which is part of a path
p_i, say {a,b,c} is part of p_i. Since paths are well
oriented, σ_b(a)=c. The complete cancellation property
implies that the (directed) edge (b,a) is part of some path p_j
(for 1 ≤ j ≤ N), i.e. {σ_b^-1(a),b,a} appears in
the path p_j (because p_j is also well oriented). This proves
that if (a,b) ∈Ẽ then (σ_b^-1(a),b) also
belong to Ẽ. Repeating the argument,
(σ_b^-i(a),b) ∈Ẽ for all 1 ≤ i ≤ n, and
since σ_b is an n-cycle, all these edges are different
hence all the vertices adjacent to b are in Ṽ.
As mentioned before, we are mainly interested in the case n=3. The
way we compute representatives for elements in (m,3) with an
orientation is the following: start with a set S of isomorphism class representatives for (m,3) as described in
Theorem <ref>. Given an element
T ∈ S, compute for each internal vertex the two possible
orientations, and store the resulting bi-valent trees with
an orientation. This produces a set whose number of elements is
2^|V_i| times the size of S.
One can do better by the following observation: if v is an
internal vertex adjacent to two external vertices w_1,w_2, then
the two possible orientations at the vertex v are isomorphic (via
the isomorphism sending w_1 ↔ w_2 and fixing the
other vertices). For this reason, our code just picks one
orientation for each internal vertex adjacent to two exterior ones. This
trivial improvement gives in practice a huge saving.
§.§ Coloring external vertices
Recall from
Definition <ref> that a tree diagram is an element
of (m,3) (for some m) with an orientation and a particular
coloring on the set V_e (the external vertices of ). Let b, r, f
be non-negative integers such that b+r+2f = m+2 = |V_e|. A coloring
on V_e with parameters (b,r,f) is equivalent to a decomposition of the set
V_e as a disjoint union of the form
V_e = B ∪ R_0 ∪ ⋃_i F_i,
(as in (<ref>)) where
* the set B has b vertices; its vertices are the blue ones,
* the set R_0 has r vertices; its vertices are the red ones
fixed by the involution ι, and
* each set F_i has two red vertices, and the involution ι
sends one to the other. The union of the sets F_i equals R_1 in
(<ref>). It is a set of size 2f corresponding to
the “free sides”.
Given T ∈ (m,3) with an orientation, the way to compute all possible colorings with parameters (b,r,f) is to first compute all possible subsets B of V_e of size b, and, for each choice, all possible subsets R_0 of r elements in its complement. The complement S = V_e ∖ (B ∪ R_0) then has 2f elements. A way of decomposing the set S as a disjoint union of f subsets of size 2 will be called a 2-partition.
Given a set S with 2f elements, how to compute its 2-partitions?
Let n be a positive integer. A 2n-tuple (v_1,…,v_2n) of
integers is ordered if it satisfies the following two properties:
* The sub-tuple (v_1,v_3,…,v_2n-1) made of the odd
entries is ordered.
* For any odd index i, 1 ≤ i ≤ 2n-1, v_i < v_i+1.
Similarly, we say that an element of _2f is ordered if so is the
2f-tuple it represents.
Let S = {1,…,2f} and let P denote the set of ordered 2f-tuples made of elements of S. In particular, the coordinates of elements of P are all different. Let Ψ : P → {2-partitions} be the map given by
Ψ(v_1,…,v_{2f}) = {v_1,v_2} ∪ ⋯ ∪ {v_{2f-1},v_{2f}}.
With the previous notation, the map Ψ is bijective.
Injectivity: let (v_1,…,v_2f) and (w_1,…,w_2f) be
elements of such that
Ψ(v_1,…,v_2f) = Ψ(w_1,…,w_2f).
Then
{{v_1,v_2},…,{v_2f-1,v_2f}} =
{{w_1,w_2},…,{w_2f-1,w_2f}}, say
{v_1,v_2} = {w_i,w_i+1} for some odd index i. The second
condition of an ordered tuple implies that v_1<v_2 and
w_i < w_i+1 so v_1 = w_i and v_2 = w_i+1. Repeating this
argument it follows that the set of odd entries of the first
tuple must equal the set of odd entries of the second one, and since
both sets are ordered, the tuples must be the same.
Surjectivity: let S = ⋃_{i=1}^{f} {v_{2i-1}, v_{2i}} be a 2-partition. Clearly we can order each pair so that v_{2i-1} < v_{2i} for all values of i, and we can also order the pairs so that v_{2i-1} < v_{2i+1} for all i = 1,…,f-1. By definition, the tuple (v_1,…,v_{2f}) is ordered (so it is an element of P).
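The bijection suggests the obvious recursive enumeration: always pair the smallest remaining element, so that the odd entries come out sorted. A short Python sketch (ours):

def two_partitions(S):
    S = sorted(S)
    if not S:
        yield []
        return
    for i in range(1, len(S)):
        rest = S[1:i] + S[i + 1:]
        for tail in two_partitions(rest):
            yield [(S[0], S[i])] + tail

assert len(list(two_partitions(range(6)))) == 15    # (2f-1)!! with f = 3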
Let T∈(m,3) be a bi-valent tree with an orientation, and let
(b,r,f) be non-negative integers satisfying that b+r+2f=m+2. The
following algorithm computes all possible colorings on V_e with
parameters (b,r,f):
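A sketch of this enumeration in Python, with two_partitions as above (again, all names are ours):

from itertools import combinations

def colorings(V_e, b, r):
    V_e = list(V_e)
    for B in combinations(V_e, b):
        rest = [v for v in V_e if v not in B]
        for R0 in combinations(rest, r):
            S = [v for v in rest if v not in R0]
            for free_pairs in two_partitions(S):
                yield B, R0, free_pairs     # blue, fixed red, paired red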
Consider a tree diagram coming from a subgroup Γ of PSL_2(ℤ). Then
[PSL_2(ℤ) : Γ] = 3m + b,
where m is the number of internal vertices of the tree diagram and b is the number of blue external vertices (see for example the proposition on page 1078 of <cit.>). This implies that there are finitely many triples (b,r,f) corresponding to index-d subgroups of PSL_2(ℤ).
Let d be a positive integer. The following algorithm computes all possible triples (b,r,f) attached to index-d subgroups of PSL_2(ℤ):
Let Γ be a subgroup of PSL_2(ℤ) of index d, and let m+2 be the number of cusps belonging to a special polygon. The formula d = 3m + b, together with the fact that the number of odd (blue) vertices cannot exceed the number of cusps, gives the restriction
3m ≤ d ≤ 4m + 2.
This completes the proof.
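In code, the proof translates directly into a loop over the admissible values of m. A Python sketch (ours), which only imposes the necessary conditions derived above:

import math

def bcf_triples(d):
    out = []
    for m in range(max(0, math.ceil((d - 2) / 4)), d // 3 + 1):
        b = d - 3 * m                              # from d = 3m + b
        for f in range((m + 2 - b) // 2 + 1):
            out.append((b, m + 2 - b - 2 * f, f))  # r = m + 2 - b - 2f
    return out

print(bcf_triples(6))    # [(3, 0, 0), (0, 4, 0), (0, 2, 1), (0, 0, 2)]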
Combining the different algorithms described in the present section, we can compute, given a positive integer d, the set of non-equivalent tree diagrams corresponding to subgroups of index d in PSL_2(ℤ).
§ TREE DIAGRAMS, GENERALIZED FAREY SYMBOLS AND SUBGROUPS
As explained in the introduction, our tree diagram is obtained as the tree of f-edges of a special polygon. It is a natural question how to recover information about the special polygon that is missing from the tree diagram. For example, how can we compute its set of cusps? (Note that since the special polygon attached to a subgroup is not unique, the answer will also not be unique.)
As explained in <cit.>, the cusps of a special polygon
form what is called a generalized Farey symbol, since if
a/b and c/d are two consecutive cusps of a special
polygon then |ad-bc|=1.
A generalized Farey symbol (g.F.s. for short) is an expression
of the form
{-∞,c_2,c_3,…,c_n,∞}
where
* c_2 and c_n are integers, and some c_i = 0,
* c_i = a_i/b_i are rational numbers in their reduced
forms and ordered according to their magnitudes, such that
|a_i b_{i+1} - b_i a_{i+1}| = 1, for i = 2,3,…,n-1.
Our definition is a little different from Kulkarni's. The
elements -∞ and ∞ are identified as points on the
polygon, so they correspond to the same point at infinity, but for
algorithmic purposes it is better to represent them as different
elements. In particular, we will represent -∞ by the fraction
-1/0 and ∞ by 1/0. This is consistent
with the package developed in <cit.> for computing with
generalized Farey symbols.
Any path between two cusps (members of a g.F.s.) will necessarily go
through an end point of a tree diagram. As suggested in
<cit.>, one can add the information of the vertex color
to the g.F.s. forming what is called a Farey symbol.
A Farey symbol
is an expression
of the form:
where
* {-∞,c_2,…,c_n,∞} is a g.F.s.,
* each symbol p_i is one of ∘, ∙ or a number label,
depending on whether the line on the boundary that joins c_i
with c_i+1 is even, odd or a free side (in which case the same
label is used for its matching line).
Let a tree diagram be given, and let v,w be two external vertices
next to each other (i.e. one is to the left or right of the
other). Define the bipartite distance between v and w
(denoted by (v,w)) as the distance between v and w in the
tree obtained from the tree diagram while constructing the bipartite cuboid graph
before identifying the glued vertices.
Let a tree diagram be given, and let v,w be external vertices next to each other. Let d denote the distance between v and w on the tree. Then
(v,w) = 2d -2 +
0, if v and w are both even (red),
2, if v and w are both odd (blue),
1, otherwise.
Since is a tree, there is a unique reduced path between v and w, and
since the distance between v and w is d, the path goes through
d-1 internal vertices. The bipartite graph is obtained by adding
an extra (red) vertex between each pair of internal vertices, giving
an extra d-2 steps. At last, if either ending point is odd
(i.e. blue), we need to add an extra vertex next to it.
Let a tree diagram be given, and let v,w be external vertices next to each other. Let x = a/b, y and z = c/d be three consecutive cusps of a special polygon attached to the tree diagram, such that v is between x and y and w is between y and z. Then
(v,w)= 2|ad-bc| +
0, if v and w are both even (red),
2, if v and w are both odd (blue),
1, otherwise.
Since everything is invariant under the action of PSL_2(ℤ), we
can assume that y is the infinity cusp. Then x and
z are both integers (i.e. b=d=1) with c<a. The
tessellation of the upper part of our special polygon then is one of
the following:
* If v and w are both even (red) then it looks like
Figure <ref>. Note that
2(a-c) = (v,w)
(since there are two
f-edges between consecutive red vertical lines).
* If v and w are both odd (blue) then the tessellation looks like
Figure <ref>. Note that 2(a-c)+2 = (v,w) (since there are two
f-edges between consecutive blue vertical lines).
* If v and w have different colors (say v is blue and w
is red) then the tessellation looks like Figure <ref>. Note that
2(a-c)+1 = (v,w).
From equations (<ref>) and (<ref>) it follows that if x = a/b, y and z = c/d are three consecutive cusps with v between x and y and w between y and z, then
|ad - bc| = d(v,w) - 1,
where d(v,w) denotes the distance between v and w on the tree. These relations determine the g.F.s. uniquely (up to PSL_2(ℤ)-equivalence).
Let a tree diagram be given, and let V_e = {v_1,…,v_m} denote the set of external colored vertices. Assume that the set V_e is ordered, i.e. v_{i+1} is to the right of v_i for 1 ≤ i ≤ m (with the convention that v_{m+1} = v_1). The following algorithm computes a g.F.s. attached to the tree diagram:
Up to PSL_2(ℤ)-equivalence, we can always assume that the first
cusp (to the left of v_1) equals ∞ and the next one equals
0. Since the number of cusps equals the number of external
vertices, if there are only two such cusps, we are done. Suppose
that there are at least three cusps, say {-1/0, 0/1, a/b}, for some positive integers a,b. The integers a,b satisfy the relations
|-b + 0·a| = d(v_1,v_2) - 1,
0·b - a = -1,
where the second condition comes from (<ref>). Then a = 1 and b = d(v_1,v_2) - 1. If we have constructed the first elements of our g.F.s., say {-1/0, 0/1, x_1, …, x_i}, with x_{i-1} = α/β and x_i = γ/δ, the next element x_{i+1} = X/Y is a solution of the system
|αY - βX| = d(v_i,v_{i-1}) - 1,
γY - δX = -1.
The fact that x_{i+1} > x_{i-1}, together with the fact that both the numerators and denominators of x_{i-1} and x_{i+1} are positive integers (because i ≥ 2), implies that |αY - βX| = -αY + βX.
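Solving the system explicitly gives X = γw - α and Y = δw - β, where w denotes the corresponding value d(·,·) - 1, so the whole algorithm is a short loop. A Python sketch (ours), taking the list of successive values of w and representing each cusp as a pair (numerator, denominator):

def gfs(ws):
    cusps = [(-1, 0), (0, 1)]                  # -infinity and 0
    for w in ws:
        (alpha, beta), (gamma, delta) = cusps[-2], cusps[-1]
        cusps.append((gamma * w - alpha, delta * w - beta))
    return cusps

print(gfs([1, 1]))    # [(-1, 0), (0, 1), (1, 1), (1, 0)], i.e. {-inf, 0, 1, inf}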
A Kulkarni diagram is an object consisting of a tree diagram,
together with a choice of a g.F.s. (denoted by G) attached to it.
The particular choice of the g.F.s. is not important; a different choice will correspond to another subgroup of PSL_2(ℤ) which is conjugate (by a matrix in PSL_2(ℤ)) to it. The elements of G will be called cusps.
§.§ Some important algorithms
The purpose of the present section is to gather together some algorithms to compute with Kulkarni diagrams as well as with their attached subgroup Γ. The most important algorithms are that of computing generators for Γ, and an algorithm to determine whether an element g ∈ PSL_2(ℤ) belongs to Γ or not (crucial to compute the passport attached to a Kulkarni diagram).
To make the statements clearer, we will use the following notation: 𝒦 denotes a Kulkarni diagram, G = {-∞, c_2, …, c_{m+2}, ∞} denotes its g.F.s., and Γ the subgroup attached to it. As proven in <cit.>, the group Γ has a set of generators {α_1, …, α_{m+2}} indexed by the elements of G, representing how the paths in the boundary of a special polygon are glued together.
[Generator of a cusp] Let 𝒦 be a Kulkarni diagram, let V_e = {v_1,…,v_{m+2}} be its set of (external) colored and ordered vertices, and let G be its underlying g.F.s. Given a cusp c_k ∈ G, the following algorithm computes the generator corresponding to the cusp c_k:
See <cit.>.
[Cusp representatives]
Let 𝒦 be a Kulkarni diagram and let V_e = {v_1,…,v_{m+2}} be its set of (external) colored and ordered vertices. The following algorithm computes the subgroup of 𝕊_{m+2} whose orbits correspond to equivalent cusps:
Clear from the gluing of the border paths.
Let C_1,…,C_t be the cusp representatives of the Kulkarni diagram 𝒦, i.e. each set C_i is made of the elements of G which are in the i-th orbit under the action of the group S computed using Algorithm <ref>. How do we compute W(C_i), the width of C_i?
As before, let G = {c_1 = -∞, c_2, c_3, …, c_{m+2}, c_{m+3} = ∞}. Let d : G → ℤ_{≥0} be the function
d(c_i) = |a_{i-1} b_{i+1} - a_{i+1} b_{i-1}|,
where a_i (resp. b_i) is the numerator (resp. denominator) of the cusp c_i, and the indices are taken in cyclic order (i.e. c_{m+3} = c_1).
[Width of a cusp]
Let 𝒦 be a Kulkarni diagram, let V_e = {v_1,…,v_{m+2}} be its colored and ordered external vertices, and let c_{i_1},…,c_{i_r} be the cusps belonging to the class C. The following algorithm computes the width of C:
See the proposition of page 1079 in <cit.>.
§.§ The word problem
The classical way to determine whether an element g ∈ PSL_2(ℤ) belongs to Γ (the subgroup attached to the Kulkarni diagram 𝒦) is to produce a “reduction algorithm” modulo elements of Γ. Based on the article <cit.>, the authors of <cit.> (Algorithm, page 13) give such an algorithm. The problem with the stated method is that it has some “border” conditions which are not clearly stated, so we take the opportunity to add to it the missing details. If a and b are
integers, we denote by (a,b) the cusp a/b, with the
usual convention that -∞ = -1/0 and
∞ = 1/0.
Let 𝒦 be a Kulkarni diagram, let Γ be its attached group, let G be its g.F.s. and let C denote the vector consisting of the coloring on G. Let g ∈ PSL_2(ℤ) be any element. The following algorithm (Reduction) gives a reduced element in the coset Γg:
Let
g = ([ a b; c d ]), and let α = (a,c),
β=(b,d) be the cusps corresponding to the first and the
second column. The boundary issues appear when either α or
β are the infinity cusp (depending also on whether they
correspond to the -∞ or the ∞ cusp). The first case
corresponds to α = ∞, the second case to
β = ∞ and the remaining ones to the “generic” case. In
<cit.> the authors start assuming that β < α
(following their main reference <cit.>). Multiplying the matrix g on the right by S = ([ 0 1; -1 0 ]) has the effect of interchanging α ↔ β, so in case α < β we reduce the matrix gS and multiply the reduced matrix by S on the right. Multiplication by S sends
[ a b; c d ]→[ -b a; -d c ].
This justifies the steps 10-12 of the algorithm. In the non-generic
cases, this idea might not be enough to get the “right” reduction
step (as explained in <cit.>, the idea behind the algorithm
is to shorten a distance). Suppose that |α|=∞ (so we are in the first case), then the border condition is the following:
* If α = ∞ and β is positive, we reduce g.
* If α = ∞ and β is negative, we reduce -gS (i.e. instead of considering the pair (∞,β) we work with (β,-∞)).
* If α = -∞ and β is not larger than all elements of G, we reduce gS (i.e. (β,-∞)).
* If α = -∞ and β is larger than all elements of G, we reduce -g (i.e. (∞,β)).
Similarly, when |β| = ∞ (the second case), the border
condition is the following:
* If β = -∞ and α is no larger than all elements of G, we reduce g.
* If β = -∞ and α is larger than all elements of G, we reduce -gS (i.e. we replace the pair (α,-∞) by (∞,α)).
* If β = ∞ and α<0, we reduce -g (replacing (α,∞) by (α,-∞)).
* If β = ∞ and α >0, we reduce gS (replacing (α,∞) by (∞,α)).
In all the above cases, we end up in a situation where
β < α. The rest of the algorithm mimics the one presented
in <cit.>.
Let g ∈_2() and let
g'=([ a b; c d ]) be
its reduction (i.e. the output of the last algorithm).
With the previous notation, an element g ∈ PSL_2(ℤ) belongs to Γ if and only if one of the following is true:
* g' = ±([ 1 0; 0 1 ]),
* (b/d,a/c) is a free side paired with (0,∞),
* g'= ±([ 0 -1; 1 0 ]), and 0 and ∞ are adjacent
vertices with an even pairing between them.
See <cit.>.
The last two results provide a solution to the word problem, namely determining whether an element α ∈ PSL_2(ℤ) belongs to Γ or not. We end this section with an algorithm to compute coset representatives for Γ\PSL_2(ℤ).
Let 𝒦 be a Kulkarni diagram, let Γ be the attached group and let G be its g.F.s. Let C be the vector made of the coloring of G. The following algorithm gives a complete set of representatives for Γ\PSL_2(ℤ):
The result is stated in 5.3 of <cit.>, although there is a minor mistake in it (which is why we present a proof). Let 𝒟 be the hyperbolic triangle with vertices ρ, i, ∞ as in Figure <ref>, and let P be the special polygon attached to Γ with cusps the g.F.s. of 𝒦. If c_i = a_i/b_i and c_{i+1} = a_{i+1}/b_{i+1} are two consecutive vertices of
the special polygon (so they are two consecutive cusps), then the matrix
A = ([ -a_i a_{i+1}; -b_i b_{i+1} ])
sends the line joining ∞ and 0 to the line joining c_i to c_{i+1} (note that the sign in the first column is missing in <cit.>). The image of P under A^{-1} then contains the cusp A^{-1}·c_{i-1} (a positive integer) next to ∞, which is next to the cusp 0. If c_{i-1} = a_{i-1}/b_{i-1}, then
A^{-1}·c_{i-1} = w = |a_{i-1}b_{i+1} - b_{i-1}a_{i+1}|.
Suppose that the line joining c_i and c_{i+1} is even; then the translates of 𝒟 appearing in A^{-1}P are either of the form
* T^j(𝒟 ∪ Tι(𝒟)), where 0 ≤ j ≤ w-1, if the line joining c_i to c_{i-1} is also even (so they are translates of a fundamental domain for PSL_2(ℤ)) (see Figure <ref>),
* T^j(𝒟 ∪ Tι(𝒟)), where 0 ≤ j ≤ w-1, together with T^w(𝒟) (see Figure <ref>) if the line joining c_i to c_{i-1} is odd. Note that the “other” part of the fundamental domain will appear while considering the cusp c_{i-1}, so we do not add it to the list, to avoid repetitions.
In this case it is clear that the hyperbolic triangles in the special polygon P containing the infinity point are the translates by A·T^j of 𝒟 (together with Tι(𝒟)) for 0 ≤ j ≤ w-1.
Suppose otherwise that the line joining c_i and c_{i+1} is odd. Then the translates of 𝒟 appearing in A^{-1}P belong to one of the following two cases:
* T^{-1}(Tι(𝒟)) together with T^j(𝒟 ∪ Tι(𝒟)), where 0 ≤ j ≤ w-1 (see Figure <ref>), if the line joining c_i to c_{i-1} is even. Note that we must add the first element, as it was excluded from the opposite situation (namely left side even and right one odd).
* T^{-1}(Tι(𝒟)) together with T^w(𝒟) and T^j(𝒟 ∪ Tι(𝒟)) for 0 ≤ j ≤ w-1 (see Figure <ref>). Once again, to avoid repetitions, we do not add the representative T^w(𝒟).
In this second case the hyperbolic triangles in the special polygon P containing the infinity point are the translates by A·T^j of 𝒟 (together with Tι(𝒟)) for -1 ≤ j ≤ w-1.
§.§ Relation with geometry
From a finite index subgroup Γ of PSL_2(ℤ) one can construct the algebraic curve given by the quotient X_Γ := Γ\ℍ^∗. Most of the geometric invariants of X_Γ can be read off from the Kulkarni diagram 𝒦. Define the following quantities:
* e_2 = |{“red” vertices in R fixed by the involution}| = |R_0|.
* e_3 = |{“blue” vertices in V_e}|.
* t = the number of orbits of the group obtained as the output of Algorithm <ref>.
* d = [PSL_2(ℤ) : Γ].
Then in <cit.> it is proven that the curve X_Γ satisfies the following properties:
* the number of branch points of ℍ^∗ → X_Γ of order 2 equals e_2,
* the number of branch points of ℍ^∗ → X_Γ of order 3 equals e_3,
* the number of inequivalent cusps of X_Γ equals t, and
* the following formula holds:
2g(X_Γ) + t - 1 = (1/2)|{“red” vertices in R not fixed by the involution}| = f,
where g(X_Γ) is the genus of X_Γ.
In particular, the colored set V_e contains enough information to compute g(X_Γ).
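For instance, a small Python sketch (ours) recovering the index and the genus from these quantities; the assert cross-checks the relation above against the classical genus formula 12g = 12 + d - 3e_2 - 4e_3 - 6t, which is equivalent to it whenever e_3 + e_2 + 2f = m + 2:

def index_and_genus(m, e2, e3, f, t):
    d = 3 * m + e3                  # the index, since b = e3
    g = (f + 1 - t) // 2            # from 2g + t - 1 = f
    assert 12 * g == 12 + d - 3 * e2 - 4 * e3 - 6 * t
    return d, g

# e.g. m = 1, e2 = 1, e3 = 0, f = 1, t = 2 gives (d, g) = (3, 0),
# matching the classical data of Gamma_0(2)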
§ PASSPORTS AND EQUIVALENCE CLASSES OF SUBGROUPS
Let G be a group and H a subgroup of index d. Let X = {g_1,…,g_d} be a set of left coset representatives for G/H. Without loss of generality, we can assume that g_1 ∈ H. Then
G = ⊔_{i=1}^{d} g_i H.
There is a natural, well known homomorphism of groups
θ_H : G ⟶ 𝕊_X ≃ 𝕊_d,  g ⟼ σ_g,
where σ_g is the permutation of X satisfying g·g_iH = σ_g(g_i)·H for all i. The morphism
θ_H has important properties (see for example <cit.>), namely:
* It determines the subgroup H (with our choice it is precisely the group of elements in G fixing the coset g_1H).
* Any conjugate of H by an element of G equals the group of elements in G fixing g_iH for some 1 ≤ i ≤ d.
* If we denote by H^N the biggest normal subgroup of G
contained in H (also known as the normal core of H in G),
then it coincides with Ker(θ_H).
* If we denote by Σ = Im(θ_H) ≤ 𝕊_d (which is isomorphic to G/H^N), then Σ acts transitively on the d cosets.
Recall the following well known result on groups.
Let G be a group, and let H, K be two index-d subgroups of G. Then H is conjugate to K by an element of G if and only if there exists σ ∈ 𝕊_d such that
θ_H = σθ_Kσ^{-1}.
The proof is similar to that of <cit.>, although in that statement the author considers only faithful representations. Suppose first that there exists g ∈ G such that K = gHg^{-1}. If {h_1,…,h_d} is a set of left coset representatives for H, then {gh_1g^{-1},…,gh_dg^{-1}} is a set of left coset representatives for K. Let t ∈ G be such that t(h_i·H) = h_j·H; then
gtg^{-1}(gh_ig^{-1})·K = (gh_jg^{-1})·K.
Thus if σ denotes the element of 𝕊_d given by left multiplication by g, we get that σθ_Kσ^{-1} = θ_H for the chosen set of representatives.
Conversely, suppose that there exists σ ∈ 𝕊_d such that θ_H = σθ_Kσ^{-1}. Without loss of generality (after reordering the left coset representatives of K), we can furthermore assume that θ_H = θ_K and that the first coset representative for H lies in H. In particular, if h ∈ H, then θ_H(h)(1) = 1. Then θ_K(h)(1) = 1, so if k_1 is the first left coset representative for K, h(k_1·K) = k_1·K, so h ∈ k_1Kk_1^{-1} and H ⊆ k_1Kk_1^{-1}. Since both groups have the same index in G, they must be equal.
For a general group G and a subgroup H, it is hard to describe the
map θ_H, but if G has a small (and known) number of
generators, then it is enough to compute the permutation corresponding
to each generator. This is indeed the case for G = _2(),
which is generated by the elements
S = [ 0 -1; 1 0 ], T = [ 1 1; 0 1 ].
Then the pair (θ_H(S),θ_H(T)) ∈ 𝕊_d × 𝕊_d up to simultaneous
conjugation determines the PSL_2(ℤ)-equivalence class of H. It
is also useful to compute the permutation corresponding to the element
R = ST (which is the composition of the two other permutations), as
it is an element of order 3 in PSL_2(ℤ) (which together with S
generates the group).
A passport is a tuple (σ_S,σ_R,σ_T) in
𝕊_d^3 up to simultaneous conjugation satisfying that
* σ_S^2=1,
* σ_R^3=1,
* σ_S σ_T = σ_R,
* Σ = ⟨σ_S,σ_R⟩ is transitive.
An important result of Millington (<cit.>)
implies that the passport contains a lot of arithmetic information
about the curve H\ℍ.
There is a one-to-one correspondence between conjugacy classes of subgroups H of index
d in PSL_2(ℤ) and equivalence classes of passports
(σ_S,σ_R,σ_T). Furthermore, keeping the previous
notation,
* e_2 and e_3 are the number of elements fixed by the permutations σ_S
and σ_R respectively,
* σ_T has t disjoint cycles (where t equals the
number of cusps).
* Let d_1,…,d_t be the lengths of the disjoint cycle
decomposition of σ_T (so d=∑_i=1^t d_i). Then
{d_1,…,d_t} equals the set (with repetitions)
{w(C_1),…,w(C_t)} of cusp widths.
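Millington's dictionary is easy to implement. The following minimal Python sketch (our own illustration; the helper names are ours) reads e_2, e_3 and t off a passport and computes the genus via the standard index formula g = 1 + d/12 - e_2/4 - e_3/3 - t/2; as a test, it recovers genus 2 from the first passport of the index-18 list given in the next section.

# Invariants of a passport (sigma_S, sigma_R, sigma_T) acting on d points.
def perm_from_cycles(cycles, d):
    """Permutation as a 0-based list from 1-based cycles."""
    p = list(range(d))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a - 1] = b - 1
    return p

def cycle_count(p):
    seen, t = set(), 0
    for i in range(len(p)):
        if i not in seen:
            t += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return t

def invariants(sS, sR, sT):
    d = len(sS)
    e2 = sum(1 for i in range(d) if sS[i] == i)   # fixed points of sigma_S
    e3 = sum(1 for i in range(d) if sR[i] == i)   # fixed points of sigma_R
    t = cycle_count(sT)                           # number of cusps
    g = 1 + d / 12 - e2 / 4 - e3 / 3 - t / 2      # genus of the curve
    return e2, e3, t, g

d = 18
S = perm_from_cycles([(1,5),(2,8),(3,15),(4,9),(6,12),(7,16),(10,14),(11,17),(13,18)], d)
R = perm_from_cycles([(1,8,4),(2,15,7),(3,18,14),(5,12,9),(6,16,11),(10,17,13)], d)
T = perm_from_cycles([(1,2,3,13,14,15,16,17,18,10,11,12,4,5,6,7,8,9)], d)
print(invariants(S, R, T))   # (0, 0, 1, 2.0): no elliptic points, one cusp, genus 2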
Using the algorithms described in the previous section, it is easy to
give an algorithm that, given a Kulkarni diagram 𝒦 together with an
element κ ∈ PSL_2(ℤ), computes the permutation
σ_κ: use the algorithm of
Proposition <ref> to compute a set
{g_1,…,g_d} of left coset representatives for
Γ\PSL_2(ℤ). For each 1 ≤ i ≤ d, use
Theorem <ref> to determine the unique j in
{1,…,d} such that g_j^-1κ g_i belongs to
Γ. Then σ_κ(i)=j. This algorithm is implemented in
our package to compute the passport attached to a Kulkarni
diagram.
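For a concrete toy instance of this coset bookkeeping (independent of the actual implementation in our package), take Γ = Γ_0(N) with N prime: right cosets are classified by the bottom row (c : d) ∈ ℙ^1(ℤ/N) of a representative, and the permutations attached to S and T come from right multiplication. The cycle types, and hence e_2, e_3 and the cusp data, do not depend on the composition convention. A Python sketch:

# Coset action for Gamma_0(N), N prime, on P^1(Z/N).
def normalize(c, d, N):
    """Canonical representative of (c : d) in P^1(Z/N); assumes N prime."""
    c, d = c % N, d % N
    if c != 0:
        return (1, d * pow(c, -1, N) % N)
    return (0, 1)

def act(pt, M, N):
    """Right action of M on the row vector pt = (c, d)."""
    (a, b), (cc, dd) = M
    c, d = pt
    return normalize(c * a + d * cc, c * b + d * dd, N)

N = 2
S, T = ((0, -1), (1, 0)), ((1, 1), (0, 1))
points = sorted({normalize(c, d, N) for c in range(N) for d in range(N)
                 if (c, d) != (0, 0)})
sig_S = {p: act(p, S, N) for p in points}
sig_T = {p: act(p, T, N) for p in points}
sig_R = {p: act(sig_S[p], T, N) for p in points}   # image of R = ST
print(points)   # [(0, 1), (1, 0), (1, 1)]: index 3
print(sig_S)    # swaps (0:1) and (1:0), fixes (1:1): e_2 = 1
print(sig_T)    # fixes (0:1), swaps (1:0) and (1:1): two cusps, widths 1 and 2
print(sig_R)    # one 3-cycle: e_3 = 0; hence X_0(2) has genus 0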
§.§ Congruence subgroups
Recall the following well known definition.
A subgroup Γ of PSL_2(ℤ) is called a congruence subgroup if there exists a positive integer N such that the group
Γ(N) = {[ a b; c d ] ∈ PSL_2(ℤ) : b ≡ c ≡ 0 (mod N), a ≡ 1 (mod N), d ≡ 1 (mod N)},
is contained in Γ. If no such N exists, we say that
the subgroup is a non-congruence subgroup. The subgroup
Γ(N) is known as the principal congruence subgroup of
level N.
The main interest in congruence subgroups is that they have many
endomorphisms (given by the Hecke operators). In particular, if
Γ is a congruence subgroup, then the curve X_Γ is
defined over a cyclotomic extension, and its Jacobian is isogenous
to a product of abelian varieties having very special properties (they are
of so-called GL_2-type, see for example <cit.>).
Let 𝒦 be a Kulkarni diagram and {C_1,…,C_t} be the
set of inequivalent cusps. The generalized level of 𝒦 is
defined to be the least common multiple of {w(C_1),…,w(C_t)}.
If Γ is a congruence subgroup of generalized level N, then Γ(N) ⊆ Γ.
See <cit.>.
For
completeness, we include the following algorithm due to Hsu for
determining whether a group is a congruence group or not.
Let Γ be a subgroup of PSL_2(ℤ) and let
σ_L = θ_Γ([ 1 1; 0 1 ]), σ_R = θ_Γ([ 1 0; 1 1 ]).
Let N = e · m be the order of σ_L, where e is a power of 2 and
m is an odd positive integer. Then the following algorithm
determines whether Γ is a congruence subgroup or not:
see <cit.> and Section 3 of loc. cit. for the implementation.
§ SOME NUMERICAL DATA
We have systematically run our algorithm to compute Kulkarni
diagrams for subgroups up to index 20. For each of them, we computed
the number of conjugacy classes for _2 and _2-subgroups. The
results are presented in Table <ref>.
Note that the number
of Kulkarni diagrams grows much faster than the actual number of
equivalence classes of subgroups. To our knowledge this is a phenomenon
that was not observed before (since in the article published by
Kulkarni only subgroups of small index are computed). As mentioned
in the introduction, all this information can be downloaded from the GitHub
repository <https://github.com/vendramin/subgroups>.
In Table <ref> we give all the information obtained
from each PSL_2(ℤ)-equivalence class for subgroups up to index 7
(as it might be useful to the reader). Some
phenomena observed in our database:
* Among all subgroups (up to PSL_2(ℤ)-equivalence), there are
90 congruence ones (from a total of 16381 subgroups),
i.e. congruence subgroups are very rare even for small indices.
* Up to index 20, there are 2410 groups corresponding to curves of genus 1 (from a total of 16381 subgroups).
* There are only 9 non-conjugate subgroups whose curve has genus
2. They all correspond to subgroups of index 18 (as the example
found in <cit.>).
Let us state some properties regarding the genus 2 curves. Here is a list of their Kulkarni diagrams together with their passports:
* The tree diagram is shown in the corresponding figure (omitted); the
g.F.s. equals
{∞, 0,1,4/3,3/2,5/3,2,3,∞} and
its passport equals
* σ_S=(1,5)(2,8)(3,15)(4,9)(6,12)(7,16)(10,14)(11,17)(13,18),
* σ_R=(1,8,4)(2,15,7)(3,18,14)(5,12,9)(6,16,11)(10,17,13),
* σ_T=(1,2,3,13,14,15,16,17,18,10,11,12,4,5,6,7,8,9).
* The tree diagram is shown in the corresponding figure (omitted); the
g.F.s. equals
{∞, 0,1,4/3,3/2,5/3,2,3,∞} and
its passport equals
* σ_S=(1,5)(2,8)(3,15)(4,18)(6,12)(7,16)(9,13)(10,14)(11,17),
* σ_R=(1,8,4)(2,15,7)(3,18,14)(5,12,9)(6,16,11)(10,17,13),
* σ_T=(1,2,3,4,5,6,7,8,18,10,11,12,13,14,15,16,17,9).
* The tree diagram is shown in the corresponding figure (omitted); the
g.F.s. equals
{∞, 0,1,4/3,3/2,5/3,2,3,∞} and
its passport equals
* σ_S=(1,5)(2,8)(3,15)(4,10)(6,12)(7,16)(9,14)(11,17)(13,18),
* σ_R=(1,8,4)(2,15,7)(3,18,14)(5,12,9)(6,16,11)(10,17,13),
* σ_T=(1,2,3,13,4,5,6,7,8,10,11,12,14,15,16,17,18,9).
* The tree diagram is shown in the corresponding figure (omitted); the
g.F.s. equals
{∞, 0,1,4/3,3/2,5/3,2,3,∞} and
its passport equals
* σ_S=(1,5)(2,8)(3,15)(4,14)(6,12)(7,16)(9,13)(10,18)(11,17),
* σ_R=(1,8,4)(2,15,7)(3,18,14)(5,12,9)(6,16,11)(10,17,13),
* σ_T=(1,2,3,10,11,12,13,18,4,5,6,7,8,14,15,16,17,9).
* The tree diagram is shown in the corresponding figure (omitted); the
g.F.s. equals
{∞, 0,1,4/3,3/2,5/3,2,3,∞} and
its passport equals
* σ_S=(1,10)(2,8)(3,15)(4,18)(5,13)(6,12)(7,16)(9,14)(11,17),
* σ_R=(1,8,4)(2,15,7)(3,18,14)(5,12,9)(6,16,11)(10,17,13),
* σ_T=(1,2,3,4,10,11,12,14,15,16,17,5,6,7,8,18,9,13).
* The tree diagram is shown in the corresponding figure (omitted); the
g.F.s. equals
{∞, 0,1,4/3,3/2,5/3,2,3,∞} and
its passport equals
* σ_S=(1,10)(2,8)(3,15)(4,13)(5,14)(6,12)(7,16)(9,18)(11,17),
* σ_R=(1,8,4)(2,15,7)(3,18,14)(5,12,9)(6,16,11)(10,17,13),
* σ_T=(1,2,3,9,14,15,16,17,4,10,11,12,18,5,6,7,8,13).
* The tree diagram is shown in the corresponding figure (omitted); the
g.F.s. equals
{∞, 0,1,4/3,3/2,5/3,2,3,∞} and
its passport equals
* σ_S=(1,14)(2,8)(3,15)(4,18)(5,10)(6,12)(7,16)(9,13)(11,17),
* σ_R=(1,8,4)(2,15,7)(3,18,14)(5,12,9)(6,16,11)(10,17,13),
* σ_T= (1,2,3,4,14,15,16,17,9,10,11,12,13,5,6,7,8,18).
* The tree diagram is shown in the corresponding figure (omitted); the
g.F.s. equals
{∞, 0,1/2,1,3/2,2,3,4,∞} and
its passport equals
* σ_S=(1,7)(2,10)(3,14)(4,17)(5,18)(6,11)(8,13)(9,15)(12,16),
* σ_R=(1,10,6)(2,14,9)(3,17,13)(4,18,16)(5,11,7)(8,15,12),
* σ_T=(1,2,3,4,5,6,7,18,12,13,14,15,16,17,8,9,10,11).
* The tree diagram is shown in the corresponding figure (omitted); the
g.F.s. equals
{∞, 0,1/2,1,3/2,2,3,4,∞} and
its passport equals
* σ_S=(1,7)(2,10)(3,14)(4,17)(5,16)(6,11)(8,13)(9,15)(12,18),
* σ_R=(1,10,6)(2,14,9)(3,17,13)(4,18,16)(5,11,7)(8,15,12),
* σ_T=(1,2,3,4,12,13,14,15,18,5,6,7,16,17,8,9,10,11).
The order of the image of θ_Γ in each case equals
1008, 258048, 486, 4896, 258048, 648, 258048, 4896, 4896,
respectively. Furthermore, the fourth, the eighth and the ninth
ones are isomorphic to PGL_2(𝔽_17). Then these three groups must
be the ones found by Atkin and Swinnerton-Dyer in <cit.> (corresponding to a curve defined over the cubic field
with generating polynomial x^3-3x+1).
Recall that the modular curve attached to a congruence subgroup has
many endomorphisms (corresponding to the so-called Hecke operators),
while modular curves of non-congruence subgroups tend not to have
endomorphisms at all (there are no Hecke operators in the new part, as
proved in <cit.>). In particular, for each of the previous
genus 2 curves, if there is no contribution from smaller levels, one
expects the Jacobian to be absolutely simple when the subgroup is a
non-congruence one.
The third and the seventh groups are congruence ones, hence they
do have many endomorphisms. Let us study what happens for the remaining
seven. Here is an elementary result to determine whether a group
is contained in a larger (proper) subgroup of PSL_2(ℤ) in terms of
the passport representation.
Let Γ be a subgroup of PSL_2(ℤ) of index d, with coset
representatives {μ_1,…,μ_d}. Let σ_S, σ_T
be its permutation representation. Then there exists a subgroup
Γ̃ of PSL_2(ℤ) of index n (with n | d)
containing Γ if and only if the set {μ_1,…,μ_d} can be
written as a disjoint union of n sets, each of them with
d/n elements, such that σ_S and σ_T
preserve the partition.
If there exists such a subgroup Γ̃, let
{g_1,…,g_n} be coset representatives for
Γ̃\PSL_2(ℤ) and
{h_1,…,h_{d/n}} be coset representatives for
Γ\Γ̃, so that {h_j g_i} are
representatives for Γ\PSL_2(ℤ). In particular, the
set of representatives for Γ\PSL_2(ℤ) can be
written as the disjoint union
⋃_{i=1}^n {h_1g_i,…,h_{d/n}g_i}.
Let ν be any element of PSL_2(ℤ). If
Γ̃ g_iν = Γ̃ g_j, then the sets
{h_1g_iν,…,h_{d/n}g_iν} and {h_1g_j,…,h_{d/n}g_j}
must coincide as sets of Γ-cosets,
so σ_S and σ_T preserve the partition (<ref>).
Conversely, let {g_1,…,g_d} be coset representatives of
Γ\PSL_2(ℤ), and let
θ_Γ : PSL_2(ℤ) → 𝕊_d be as before (identifying the set
{g_1,…,g_d} with the set {1,…,d}). Assume that
g_1 ∈ Γ, so that Γ corresponds precisely to the elements
of PSL_2(ℤ) whose image under θ_Γ fixes 1.
By hypothesis (after relabeling the indices if necessary) the set
{1,…,d} can be written as the disjoint union of the n
sets
{1,…,d/n} ∪ … ∪ {d-d/n+1,…,d}, and our representation
θ_Γ induces a morphism θ̃ of
PSL_2(ℤ) to 𝕊_n (via the action on the previous sets). Let
Γ̃ be the set of elements of PSL_2(ℤ) that preserve the
set {1,…,d/n}. Clearly Γ̃ contains
Γ, and the index of Γ̃ in PSL_2(ℤ) is n
because the image of θ_Γ is a transitive group (so the
same holds for θ̃).
Let us compute for each of our genus 2 subgroups their
“old" part. If there exists a subgroup Γ̃ of index 9
in PSL_2(ℤ) containing our group, then the lemma implies that our set of 18 elements
can be split into 9 sets of size 2, which are preserved under the
action of PSL_2(ℤ). Note that σ_T is an 18-cycle in all
cases. Suppose that such a partition exists, and that {a,b} is a
pair of elements of it. Since σ_T acts transitively, there
exists i such that b = σ_T^i(a). Applying σ_T^i to the
pair {a,b} we obtain the pair
{σ_T^i(a), σ_T^i(b)} = {b, σ_T^{2i}(a)}, so
a = σ_T^{2i}(a), hence i=9. In particular, the partition must
be of the form {a, σ_T^9(a)} for a varying in the set
{1,…,18}. Then we are led to verify whether the permutation
σ_S preserves this partition or not (a computational check of
this compatibility is sketched right after the case list below).
Consider each of the nine cases separately.
* We obtain the partition:
{1,10}{2,11}{3,12}{4,13}{5,14}{6,15}{7,16}{8,17}{9,18}.
Clearly
the element σ_S preserves the partition, so this group is contained in a
subgroup of index 9 with passport σ_S=(1,5)(2,8)(3,6)(4,9)
and σ_R=(1,8,4)(2,6,7)(3,9,5). It matches the 12th element in
the GitHub repository of subgroups of index 9. It is easy to verify that it corresponds
to a genus 1 curve. In particular, the Jacobian of the curve is
isogenous to the product of two elliptic curves.
* A similar computation as the previous one proves that this
second group is contained in an index 9 subgroup with the same
passport as the previous case. Once again, the surface is isogenous
to the product of two elliptic curves.
* In this case, the partition obtained from σ_T is not
compatible with the permutation σ_S, hence the group is not
contained in a group of index 9 (recall that this group is a
congruence one).
* The group is not contained in an index 9 subgroup because once
again the partition is not compatible with the action of σ_S.
* The group is contained in a group of index 9 with the same
passport as the first two cases.
* In this case, the group is contained in a subgroup of index 9,
with passport σ_S=(2,8)(3,6),
σ_R=(1,8,4)(2,6,7)(3,9,5). Such a group corresponds to a
curve of genus zero.
* The group is contained in a group of index 9 with the same
passport as the first two cases.
* The group is not contained in an index 9 subgroup because the partition is not compatible with the action of σ_S.
* The group is not contained in an index 9 subgroup because the partition is not compatible with the action of σ_S.
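As announced before the case list, the compatibility check is purely mechanical. The following Python sketch (ours, for illustration) reproduces the first and the fourth cases:

# Is sigma_S compatible with the forced partition {a, sigma_T^9(a)}?
def perm_from_cycles(cycles, d=18):
    p = list(range(1, d + 1))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a - 1] = b
    return p

def power(p, k):
    q = list(range(1, len(p) + 1))
    for _ in range(k):
        q = [p[i - 1] for i in q]
    return q

def compatible(sS, sT):
    t9 = power(sT, 9)
    pairs = {frozenset((a, t9[a - 1])) for a in range(1, 19)}
    return all(frozenset((sS[a - 1], sS[b - 1])) in pairs
               for a, b in map(tuple, pairs))

S1 = perm_from_cycles([(1,5),(2,8),(3,15),(4,9),(6,12),(7,16),(10,14),(11,17),(13,18)])
T1 = perm_from_cycles([(1,2,3,13,14,15,16,17,18,10,11,12,4,5,6,7,8,9)])
S4 = perm_from_cycles([(1,5),(2,8),(3,15),(4,14),(6,12),(7,16),(9,13),(10,18),(11,17)])
T4 = perm_from_cycles([(1,2,3,10,11,12,13,18,4,5,6,7,8,14,15,16,17,9)])
print(compatible(S1, T1), compatible(S4, T4))   # True False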
Although the unique subgroup of index 2 has genus zero, a similar
computation shows that the groups 1, 3, 6 are contained in the
subgroup of index 2, while the other ones are not.
Verifying which groups are contained in subgroups of index 6, we see
that the only ones are the first one (contained in the subgroup with
passport σ_S=(1,5)(2,3)(4,6) and σ_R=(1,3,4)) and the sixth
one, contained in the subgroup with passport σ_S=(1,4)(2,5)(3,6)
and σ_R=(1,5,3)(2,6,4), corresponding to the last subgroup of
index 6 in the GitHub repository (see also
Table <ref>). Such a group corresponds to a curve of
genus one, hence the surface attached to it is isogenous to the
product of two elliptic curves as well. We deduce that the only
possible absolutely simple surfaces from the list of index 18
subgroups are the ones corresponding to the groups 4, 8 and 9,
which match the ones found by Atkin and Swinnerton-Dyer (defined over
a cubic field), while all the other ones are isogenous to the product of
two elliptic curves.
|
http://arxiv.org/abs/2307.03220v1
|
20230706180000
|
Mass Deformations of Brane Brick Models
|
[
"Sebastian Franco",
"Dongwook Ghim",
"Georgios P. Goulas",
"Rak-Kyeong Seong"
] |
hep-th
|
[
"hep-th",
"math-ph",
"math.MP"
] |
|
http://arxiv.org/abs/2307.01991v1
|
20230705024722
|
Geodesic Equations on asymptotically locally Euclidean Kähler manifolds
|
[
"Qi Yao"
] |
math.DG
|
[
"math.DG",
"math.AP",
"53C55, 35B40, 35B45"
] |
Geodesic Equations on asymptotically locally Euclidean Kähler manifolds
Qi Yao
=======================================================================
We solve the geodesic equation in the space of Kähler metrics in the setting of asymptotically locally Euclidean (ALE) Kähler manifolds, and we prove global 𝒞^1,1 regularity of the solution. Then, we relate the solution of the geodesic equation to the uniqueness of scalar-flat ALE metrics. To this end, we study the asymptotic behavior
of ε-geodesics at spatial infinity. Under the assumption that the Ricci
curvature of a reference ALE Kähler metric is non-positive, we prove convexity of the Mabuchi K-energy along ε-geodesics. However, we will also prove that on the line bundle 𝒪(-k) over ℂℙ^{n-1} with n ≥ 2 and k ≠ n, no ALE Kähler metric can have non-positive (or non-negative) Ricci curvature.
§ INTRODUCTION
In this paper, we study the geodesic equation in the setting of ALE Kähler manifolds, assuming relatively weak fall-off conditions. Let (X, J, g) be a complete non-compact Kähler manifold of complex dimension n (n ≥ 2). We say that (X, J, g) is ALE if there is a compact subset K ⊆ X such that ψ: X∖K → (ℂ^n∖B_R)/Γ is a diffeomorphism, where B_R is a closed ball in ℂ^n with radius R and Γ is a finite subgroup of U(n) (any ALE Kähler manifold has only one end according to <cit.>), and the metric g satisfies the following condition on the end X∖K:
∙ The metric g is asymptotic to the Euclidean metric δ_ij at the end with decay rate -τ for some τ > n-1, i.e., for every i ≥ 0,
((ψ^-1)^*g)_jk = δ_jk + O(r^-τ), |∇^i ((ψ^-1)^* g)|_g_0 = O(r^-τ-i).
The fall-off condition τ > n-1 is the weakest decay rate that makes the ADM mass coordinate-invariant in general; we refer to Bartnik <cit.> and Chruściel <cit.>.
One of the difficulties in building up a general theory of scalar-flat Kähler metrics in the ALE setting is that the decay rate of such metrics to their asymptotic models is not as good as in the Ricci-flat case. For instance, consider the family of scalar-flat Kähler metrics constructed on 𝒪_{ℂℙ^1}(-k) by LeBrun <cit.>,
g = ds²/(1 + A/s² + B/s⁴) + s²[σ_1² + σ_2² + (1 + A/s² + B/s⁴)σ_3²],
where A, B are constants, σ_1, σ_2, σ_3 are the standard invariant 1-forms on the 3-sphere, and s is a radial function on 𝒪(-k). It can be checked that g - g_euc = O(r^-2), where r denotes the geodesic distance from a fixed basepoint, indicating that the Kähler potential function should be of log growth. In Arezzo-Pacard <cit.>, an expansion theorem is proved for scalar-flat Kähler metrics on the complement of B_Γ = {z ∈ ℂ^n/Γ : |z| ≤ 1} in ℂ^n/Γ, where Γ is a finite subgroup of U(n), assuming that the dd^c-lemma holds in this situation. In <cit.>, the author proved a dd^c-lemma and an expansion theorem in the setting of asymptotically conical (AC) Kähler manifolds. Here, we only need a weaker version of the theorem, in the setting of ALE Kähler manifolds.
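The decay rate g - g_euc = O(r^-2) just mentioned can be read off from the coefficient 1 + A/s² + B/s⁴ alone; a one-line symbolic expansion (sympy, purely illustrative):

# Deviation of LeBrun's metric coefficient from its flat value 1:
import sympy as sp
s, A, B = sp.symbols('s A B', positive=True)
print(sp.series(A / s**2 + B / s**4, s, sp.oo, 6))   # A/s**2 + B/s**4 + O(s**-6)
# Integrating a deviation of size A/s^2 twice in s produces the
# stated logarithmic growth of the Kaehler potential.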
(Yao 2022)
Let (X, J) be an ALE Kähler manifold asymptotic to ℂ^n/Γ. Let ω_1, ω_2 be Kähler forms in the same Kähler class of (X,J) with the corresponding metrics satisfying (<ref>) and such that the scalar curvatures of ω_1 and ω_2 are equal, R_1 ≡ R_2.
Then
ω_2 = ω_1 + dd^c φ, with the potential φ ∈ 𝒞^∞_{2-2τ̃}
for some τ̃ > n-1 depending on (n, τ).
Let ω be the corresponding Kähler form of g.
According to Theorem <ref>, given two Kähler forms ω_1, ω_2 ∈ [ω], if the corresponding ALE Kähler metrics g_1, g_2 satisfy the decay condition (<ref>) and the scalar curvatures of g_1 and g_2 are identically equal, R(g_1) ≡ R(g_2), then ω_1 - ω_2 = dd^c f and f decays at infinity with the higher rate -γ, where γ = 2τ̃-2, for some τ̃ > n-1.
Hence, for the prescribed scalar curvature problem, we consider the following restricted weighted Kähler potential space,
ℋ_{-γ}(ω) = {φ ∈ 𝒞̃^∞_{-γ} : ω_φ = ω + dd^cφ > 0} (γ > 2n-4 ≥ 0),
where the classes of functions 𝒞^∞_s and 𝒞̃^∞_s are defined as follows:
𝒞^∞_s = {f ∈ 𝒞^∞(X) : |∇_g_0^j f|_g_0 = O(r^{s-j}) for all j ≥ 0},
𝒞̃^∞_s = {f̂ ∈ 𝒞^∞(X) : f̂ = f + c, for f ∈ 𝒞^∞_s and c a constant}.
Define ω_0 = ω + dd^c ψ_0, ω_1 = ω + dd^c ψ_1, for any two boundary data ψ_0, ψ_1 ∈ ℋ_{-γ}(ω). Also introduce the linear reference path ψ(t) = (1-t)ψ_0 + tψ_1 in ℋ_{-γ}(ω). Another path φ(t) in ℋ_{-γ}(ω) with the same endpoints ψ_0, ψ_1 is called a geodesic in ℋ_{-γ}(ω) if
φ̈(t) - 1/2 |∇_{ω_φ(t)}φ̇(t)|²_{ω_φ(t)} = 0.
As observed by Donaldson <cit.> and Semmes <cit.>, the geodesic equation is equivalent to a homogeneous complex Monge-Ampère equation on the product space X×Σ, where Σ ≅ [0,1]×S^1 can be embedded as an annulus in ℂ. Notice that any path φ(t) of functions on X can be viewed as a function Φ on X×Σ via Φ(·, t, e^{is}) = φ(t). Let Ω_Φ = p^*ω + dd^cΦ, where p is the projection from X×Σ to X and dd^cΦ is computed on X×Σ. Then the equation (<ref>) can be rewritten as follows:
Ω_Φ^n+1 = 0,
Ω_Φ≥ 0,
Φ|_t=0,1 = ψ_0,1.
In <cit.>, Donaldson proposed a program to attack the existence and uniqueness problems regarding canonical metrics by studying the geometric structure of the potential space ℋ, where the geodesic equation plays a central role. In the case of compact Kähler manifolds, Chen <cit.> showed that for any ψ_0, ψ_1 ∈ ℋ, the geodesic equation has a unique solution up to dd^c-regularity. Blocki <cit.> and He <cit.> gave direct calculations proving the gradient estimate and the Laplacian estimate. The full 𝒞^1,1 estimate was proved by Chu-Tosatti-Weinkove in <cit.>. In the other direction, Lempert-Vivas <cit.> and Darvas-Lempert <cit.> constructed counterexamples showing that dd^cΦ is not continuous in general; hence the 𝒞^1,1 regularity is optimal in general. In <cit.>, Auvray generalized the dd^c-regularity to singular cases (precisely, where there exist cusp singularities along simple normal crossings). The main theorem of sections <ref>-<ref> generalizes the full 𝒞^1,1 estimate to ALE Kähler manifolds.
Let X be an ALE Kähler manifold and ψ_0, ψ_1 ∈ ℋ_{-γ}(ω) (γ > 0). Then ψ_0 and ψ_1 can be connected by a 𝒞^1,1 geodesic Φ solving (<ref>), (<ref>), (<ref>). Moreover, there is a uniform constant C depending only on ‖ψ_0‖_𝒞^1,1(X,ω), ‖ψ_1‖_𝒞^1,1(X,ω) and on the geometry of (X, ω) such that
sup_{X×Σ}(|Φ| + |∇_{Θ_Ψ}Φ|_{Θ_Ψ} + |∇²_{Θ_Ψ}Φ|_{Θ_Ψ}) ≤ C.
Here, Θ_Ψ is a Kähler form on X×Σ given by Θ_Ψ = Θ + dd^cΨ with Ψ(·,t,e^{is}) = ψ(t) = (1-t)ψ_0 + tψ_1 the linear path introduced above, and with Θ = p^*ω + A dd^c t(t-1), where A > 0 is fixed depending only on ‖ψ_0‖_𝒞^1,1(X,ω), ‖ψ_1‖_𝒞^1,1(X,ω) such that Θ_Ψ > 0.
Then, we relate the solution of geodesic equation to the uniqueness of scalar-flat ALE Kähler metrics in each Kähler class. The main idea is to follow the framework of Chen <cit.> in the compact case, under the assumption that the Ricci curvature of the reference metric is non-positive. This was extended to the noncompact case with Poincaré cusp ends by Auvray <cit.>. In the ALE case, it is first necessary to prove sufficient decay at infinity of solutions to the ε-geodesic equation.
In Section <ref>, we discuss the asymptotic behavior of ε-geodesics. Given any two functions
ψ_0, ψ_1 ∈ ℋ_{-γ}(ω) = {φ ∈ 𝒞̃^∞_{-γ} : ω_φ = ω + dd^cφ > 0} (γ > 0),
we set ψ(t) = (1-t)ψ_0 + tψ_1 and let Ψ denote the corresponding function on X×Σ. We fix A large depending on ‖ψ_0‖_𝒞^1,1(X,ω), ‖ψ_1‖_𝒞^1,1(X,ω) such that Θ_Ψ := Θ + dd^cΨ is positive on X×Σ, where Θ := p^*ω + A dd^c t(t-1) with p: X×Σ → X the projection. Then, we introduce the following ε-geodesic equations
(E_ε)
(Θ + dd^c Φ_ε )^n+1 = υ(ε) Θ_Ψ^n+1, in X ×Σ,
Θ + dd^c Φ_ε >0, in X ×Σ,
Φ_ε|_t =0,1 = ϕ_0,1, on X ×∂Σ,
where υ(ε) is a smooth nonnegative function defined on (X×Σ) × [0,1] that satisfies the following conditions
υ(0) ≡ 0, υ(1) ≡ 1;
υ(ε) > 0, for ε∈ (0,1];
C^-1ε≤υ(ε) ≤min ( C ε , 1 ), for ε∈ [0,1];
|∇^k υ(y, ε)| ≤ C ε r(y)^- ς -k, for (y, ε) ∈ X ×Σ× [0,1], k≥ 1
where ς is an real number with ς≥γ.
In particular, we are interested in the following case: take
υ(ε) = ε((1-χ(ε))f + χ(ε)),
where χ is a smooth increasing function on [0,1], equal to 0 (resp. 1) in a neighborhood of 0 (resp. 1), and f is defined by
f = A^-1 Θ^{n+1}/Θ_Ψ^{n+1} ∈ 𝒞^∞(X×Σ);
in this case, |∇^k f| ≤ C r^{-γ-2-k} and ς = γ+2.
For ε small enough (so that χ(ε) = 0), (E_ε) can be written as
(φ̈ - 1/2 |∇_{ω_φ}φ̇|²_{ω_φ}) ω_φ^n = ε ω^n.
Due to the positivity of the right hand side of (E_ε), it is well known that for every ε∈ (0,1] there exists a solution Φ_ε∈⋂_k,α𝒞^k,α. We now prove:
Let Φ_ε be the ε-geodesic constructed above. Then, there exists a constant C(k,ε^-1) depending on k ≥ 1 and on an upper bound for ε^-1 such that
(|∇^k_X,ωΦ_ε|_ω + |∇^k_X,ωΦ̇_ε|_ω + |∇^k_X,ωΦ̈_ε|_ω) ≤ C(k,ε^-1) r^-γ -k for all k ≥ 1,
where ∇_X,ω denotes the Levi-Civita connection of the ALE Kähler metric ω on X, acting as a differential operator in the X directions on X ×Σ. And
|Φ_ε - c(t)| ≤ C(ε^-1)r^-γ,
where c(t) is a function only depending on t.
Hence, for any two potentials ψ_0, ψ_1 in _-γ (ω), there exist ε-geodesics in _-γ(ω) connecting ψ_0 and ψ_1.
In section <ref>, we actually prove a stronger statement: letting φ_ε = Φ_ε - Ψ, we have φ_ε ∈ 𝒞̃^∞_{max{-2γ-2, -ς}}, due to the fact that Ψ was chosen to be linear in t (see section <ref> for details).
Hence, while we still cannot define the Mabuchi K-energy along geodesics, the Mabuchi K-energy is now actually well-defined along ε-geodesics assuming γ = 2τ̃-2 > 2n-4.
In Section <ref>, the second derivative of the Mabuchi K-energy will be calculated. Throughout section <ref>, we assume γ = 2τ̃-2 > 2n-4. (It turns out that if ψ_0, ψ_1 are only in ℋ_{4-2n}(ω), there would be boundary terms at infinity breaking the positivity of the second derivative; this is a new phenomenon compared to Chen <cit.> and Auvray <cit.>.) However, under the assumption that the Ricci curvature of some reference ALE Kähler metric ω is non-positive, we can then prove the convexity of the Mabuchi K-energy:
Assume that ω is an ALE Kähler metric on X such that the Ricci curvature of ω is non-positive, Ric(ω) ≤ 0. Then, along each ε-geodesic φ(t) in ℋ_{-γ}(ω) with γ > 2n-4, the Mabuchi K-energy is convex.
A quick corollary of Theorem <ref> is that, assuming Ric(ω) ≤ 0, the scalar-flat Kähler metric, if it exists, is unique in ℋ_{-γ}(ω). However, if there exists a scalar-flat Kähler metric ω_0 in ℋ_{-γ}(ω), the condition Ric(ω) ≤ 0 implies Ric(ω) = 0. Hence, the uniqueness of scalar-flat ALE metrics can be reduced to the uniqueness of Ricci-flat ALE Kähler metrics, which can be found in the reference <cit.>. The point is that ω_0 = ω + O(r^{-γ-2}) implies by definition that the ADM masses of ω and ω_0 are equal, 𝔪(ω) = 𝔪(ω_0). According to the mass formula of Hein-LeBrun <cit.>, it follows that ∫R(ω) = ∫R(ω_0) = 0. The assumption Ric(ω) ≤ 0 then implies that Ric(ω) = 0 (see Remark <ref> for details). In fact, in Section <ref>, we will prove that many ALE Kähler manifolds do not admit any ALE Kähler metric with Ric ≤ 0 (or Ric ≥ 0) at all:
Let 𝒪(-k) be the standard negative line bundle over ℂℙ^{n-1} with n ≥ 2, k ≠ n,
and let ω be an ALE Kähler metric on 𝒪(-k) with decay rate -τ, τ > 0. Then the Ricci form of ω is of mixed type, i.e., neither Ric(ω) ≥ 0 nor Ric(ω) ≤ 0 holds.
In Riemannian geometry, AE metrics of negative Ricci curvature are well known to exist on ℝ^n by the explicit construction of Lohkamp <cit.>. Theorem <ref> gives a negative answer to the analogous question in the setting of ALE Kähler metrics.
An interesting question in this context is to ask whether some version of the Nonexistence Theorem <ref> holds in general ALE Kähler manifolds or even AC Kähler manifolds.
Is it true in any ALE Kähler manifold that the Ricci curvature form of an ALE Kähler metric can only be identically zero or of mixed type?
This paper is a part of the Ph.D. thesis of the author. The author would like to express his gratitude to Professor Hans-Joachim Hein and Professor Bianca Santoro for suggesting the problem, and for their constant support, many helpful comments, as well as much enlightening conversation. The author is also thankful to Professor Gustav Holzegel for providing financial support during the last semester at the University of Münster. The whole project is funded by the DFG under Germany's Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure, and by the CRC 1442, Geometry: Deformations and Rigidity, of the DFG.
§ Ε-GEODESIC EQUATIONS AND OPENNESS
Recall that the ε-geodesic equation can be written as follows,
(E_ε)
(Θ + dd^c Φ )^n+1 = υ(ε) (Θ+ dd^c Ψ)^n+1, in X ×Σ,
λΘ < Θ + dd^c Φ < ΛΘ, in X ×Σ,
Φ|_t =0,1 = ψ_0,1, on X ×∂Σ,
where ε∈ (0,1] and 0 < λ < Λ are constants depending on ε. The family of equations (E_ε) is called the ε-geodesic equations.
The idea for solving the equation (E_ε) is the following. First, we apply the continuity method to show that there exists a solution of (E_ε) in 𝒞^k,α. In particular, consider the family of equations (E_s), s ∈ [ε,1]. Obviously, there is a trivial solution of (E_1). Then, we shall prove the openness and closedness of (E_s) in suitable regularity. In the current section, we deal with the openness of (E_s).
Assuming that there exists a solution of (E_{s_0}) in 𝒞^k,α for some s_0 ∈ [ε,1], we will show in this section that (E_s) can be solved for all s in a small open neighborhood of s_0. For simplicity, we write Θ_Ψ = Θ + dd^cΨ as in Theorem <ref> and φ = Φ - Ψ. Then, the equation (E_s) can be written as (Θ_Ψ + dd^cφ)^{n+1} = υ(s)Θ_Ψ^{n+1} in X×Σ,
with the boundary condition φ = 0 on X×∂Σ. Then, the Monge-Ampère operator is defined to be
ℳ(χ) = (Θ_Ψ + dd^cχ)^{n+1}/Θ_Ψ^{n+1}.
Let φ be a solution of (E_{s_0}) for some s_0 ∈ [ε, 1]. By assumption, φ is Θ_Ψ-plurisubharmonic satisfying cΘ ≤ Θ_Ψ + dd^cφ ≤ CΘ. Then, the linearization of the Monge-Ampère operator at φ is uniformly elliptic and is given by
ℒ_φ(χ) = (Δ_φχ) · (Θ_Ψ + dd^cφ)^{n+1}/Θ_Ψ^{n+1} = υ(s_0) Δ_φχ,
where Δ_φ denotes the Laplacian with respect to Θ_Ψ + dd^cφ. Let (𝒞^k,α)_0 be the space of functions in 𝒞^k,α vanishing on the boundary X×∂Σ. Then, we have the following property of ℒ_φ, from which the desired openness is clear by the implicit function theorem.
Let φ be the solution of (E_{s_0}); then the linearized operator ℒ_φ : (𝒞^k,α)_0 → 𝒞^{k-2,α} is an isomorphism for all integers k ≥ 2 and α ∈ (0,1).
Let us first prove the surjectivity. Fixing f ∈ 𝒞^{k-2,α}, an exhaustion argument will be applied to solve the equation ℒ_φ u = f. Take an exhausting sequence of pre-compact sets Ω_k ⊆ X×Σ with smooth boundary. In particular, by taking a sequence of subsets B_{r_k}×Σ, where B_{r_k} = {x ∈ X : r(x) ≤ r_k}, and smoothing the corners, we can obtain the exhausting sequence {Ω_k}. Then, we can solve the following Dirichlet problems,
(L_k) ℒ_φ u_k = f in Ω_k,
u_k = 0 on ∂Ω_k,
where f ∈ 𝒞^{k-2,α}.
The existence of the solution of (L_k) is a classical result on the Dirichlet problem on compact Riemannian manifolds with boundary. The key to completing the proof is to give uniform estimates on u_k. The main idea for the uniform 𝒞^0 estimate is to construct barrier functions. Consider the function At(1-t). The fact that λΘ ≤ Θ_Ψ + dd^cφ ≤ ΛΘ implies Δ_φ(At(1-t)) ≤ -λA. If we suppose that ‖f‖_{L^∞} ≤ C_0 and take A = C_0/λ, then we have Δ_φ(At(1-t)) ≤ f = Δ_φ u_k. Combining this with the fact that At(1-t) ≥ 0 on the boundary ∂Ω_k, the maximum principle implies that
|u_k| ≤ (C_0/λ) t(1-t) ≤ C_0/(4λ).
The uniform 𝒞^k,α estimates follow directly from the standard Schauder estimates. Precisely, for interior points p ∈ Ω_k away from the boundary, we pick a pair of balls centered at p, B_{1/4}(p) ⊆ B_{1/2}(p) ⊂ Ω_k. Then, the interior Schauder estimate implies that ‖u_k‖_{k,α; B_{1/16}(p)} ≤ C(‖u_k‖_{L^∞(B_{1/8}(p))} + ‖f‖_{k-2,α; B_{1/8}(p)}). If p ∈ Ω_k is close to the boundary, we can apply the boundary Schauder estimate. After straightening the boundary in case the boundary portion on ∂Ω_k is not flat, we can pick half balls p ∈ B^+_{1/4}(q) ⊆ B^+_{1/2}(q) for some q ∈ ∂Ω_k. Together with the interior estimates, we have
‖u_k‖_{k,α; Ω_k} ≤ C(‖u_k‖_{L^∞(X×Σ)} + ‖f‖_{k-2,α; X×Σ}),
where C depends only on n, k, α, λ, Λ. After passing to a subsequence, we conclude that the limit function u satisfies ℒ_φ u = f in X×Σ and u ≡ 0 on X×∂Σ. The uniqueness follows directly from the maximum principle below, Lemma <ref>.
The following lemma comes from Yau's generalized maximum principle; see <cit.>. To describe the model metric on X×Σ, we introduce asymptotic coordinates on X×Σ. Let {z_1,…,z_n} be asymptotic coordinates of the end of X and let w = t+is be the complex coordinate of Σ. Real asymptotic coordinates are given by {x_1,…,x_2n, x_2n+1 = t, x_2n+2 = s}, where the complex coordinates are written as z_i = x_2i-1 + ix_2i. The asymptotic coordinate system will be applied to describe the asymptotic behavior of the prescribed Kähler metrics on X×Σ.
Let (X×Σ, Θ_Φ) be the noncompact Kähler manifold as above with the Kähler metric g associated with Θ_Φ satisfying, for some uniform constants 0 < λ < Λ,
λδ_ij ≤ g_ij ≤ Λδ_ij
in the asymptotic coordinates of X×Σ. Let u be a 𝒞²_loc function bounded from above on X×Σ. Suppose that sup_X×Σ u > sup_X×∂Σ u; then there exists a sequence {x_k} in X×Σ^∘ such that
lim_k →∞ u(x_k) = sup_X×Σ u, lim_k →∞|du(x_k)|_g = 0, lim sup_k→∞Δ_g u (x_k) ≤ 0.
Let r be the radial function inherited from the asymptotic chart of X, for instance r = (∑_{i=1}^n |z_i|²)^{1/2}. The radial function can be extended to a non-negative smooth function on the whole space X×Σ satisfying the estimates
|∇_g r|_g ≤ C, |Δ_g r| ≤ C,
for some uniform constant C.
At x_𝐞, the function u_𝐞 satisfies
0= d u_𝐞 (x_𝐞) = d u (x_𝐞) - 𝐞 d r (x_𝐞),
0 ≥Δ_g u_𝐞 (x_𝐞) = Δ_g u(x_𝐞) - 𝐞Δ_g r (x_𝐞)
and
u_𝐞 (x_𝐞) ≥ u(x) - 𝐞 r(x), for all x ∈ X ×Σ.
Choosing {x_k} to be points achieving the maximum of u_1/k, then combining with (<ref>) and letting k go to infinity, we complete the proof of (<ref>).
The following lemma is a strengthened version of the above maximum principle, based on solving the Dirichlet problem in X ×Σ.
Let (X×Σ, g) be the same as in Lemma <ref>. Suppose that u is a function in ^2_loc (X×Σ) and bounded from above. Suppose that u satisfies Δ_g u ≥ 0 in X ×Σ and u ≤ 0 on X ×∂Σ. Then u ≤ 0 in X×Σ.
Assume that u satisfies sup_X×Σ u ≥ δ > 0 for some δ. According to the surjectivity part of the proof of Proposition <ref>, there exists a function v satisfying
Δ_g v =-1, in X ×Σ,
v = 0, on X ×∂Σ,
and ‖v‖_{L^∞} ≤ C(n, λ, Λ). Consider the function u_𝐞 = u - 𝐞v for 𝐞 = δ/(2C). Then sup_X×Σ u_𝐞 ≥ δ/2 > 0 and Δ_g u_𝐞 ≥ 𝐞. According to Lemma <ref>, there exists a sequence {x_k} in X×Σ^∘ such that lim_{k→∞} u_𝐞(x_k) = sup_X×Σ u_𝐞, lim_{k→∞} |du_𝐞(x_k)|_g = 0, limsup_{k→∞} Δ_g u_𝐞(x_k) ≤ 0. However, Δ_g u_𝐞 ≥ 𝐞 > 0, which leads to a contradiction.
§ A PRIORI ESTIMATE UP TO 𝒞^0
From section <ref> to <ref>, we complete the proof of Theorem <ref>. The key ingredient is to prove uniform a priori estimates up to order 𝒞^1,1 for the solution φ= Φ - Ψ of the ε-geodesic equation (E_ε). These estimates will be uniform with respect to ε∈ (0,1] and with respect to the distance from a fixed point in X. (In section <ref>, we will also see that for a fixed ε > 0 it can be proved that φ is decaying at spatial infinity. However, we are currently unable to make these decay estimates uniform with respect to ε.)
These uniform 𝒞^1,1 estimates are then used in two ways:
∙ First, they allow us to solve (E_ε) for any fixed ε∈ (0,1] via the continuity method in (𝒞^k,α)_0 for any k ≥ 2. Recall this is done by considering the family of equations (E_s) with s ∈ [ε,1], where openness in (𝒞^k,α)_0 follows from Proposition <ref>. The uniform 𝒞^1,1 estimates that we will prove, together with general regularity theory of the Monge-Ampère equation, then imply closedness. Here, it is not yet important that the 𝒞^1,1 estimates are uniform in ε, and the higher 𝒞^k,α estimates will depend on ε because the ellipticity of the equation does. Also note that these higher-order estimates follow from standard local regularity in the interior and from <cit.> near the boundary because we already have a true 𝒞^1,1 bound.
∙ Once (E_ε) is actually solved, we can then let ε go to zero and use the uniformity of the 𝒞^1,1 estimates of the ε-geodesic solution φ to extract a subsequential limit φ∈𝒞^1,1 such that Φ = Ψ + φ solves the geodesic equation (<ref>), (<ref>), (<ref>).
We omit these standard arguments and instead focus on the proof of the uniform 𝒞^1,1 a priori estimates of the ε-geodesic solution φ. For this we follow the outline of <cit.> in the compact case. However, we provide all the necessary details that are required in order to generalize this theory to the ALE case. In addition, we also make use of the recent advance <cit.> in order to obtain a 𝒞^1,1 estimate which is uniform in ε.
In this section, we only deal with the uniform 𝒞^0 estimate. We begin with a standard comparison principle <cit.>.
Let D be a bounded connected domain in ℂ^n with smooth boundary and u, v ∈ 𝒞²(D) plurisubharmonic functions in D. If u = v on ∂D and u ≥ v, then we have
∫_D (dd^c u)^n ≤ ∫_D (dd^c v)^n.
Then we can prove the following maximum principle for Monge-Ampère operators.
Let Θ be a fixed reference Kähler form and Ω the pull-back of a semipositive (1,1)-form on X. Assume that u, v ∈ 𝒞²(X×Σ) are bounded functions with Ω + dd^cv, Ω + dd^cu ≥ 0. If for some positive constants λ, Λ, we have the following properties:
(Ω + dd^c v)^n+1≤ (Ω + dd^c u)^n+1 in X ×Σ,
λΘ≤Ω + dd^c u ≤ΛΘ in X ×Σ,
u ≤ v on X ×∂Σ,
then u ≤ v in X ×Σ.
Assume u (z_0) > v (z_0) at some point z_0 ∈ X ×Σ. Let 2h = u(z_0) - v(z_0). Then, we can modify u,v to be ũ,ṽ as follows:
ṽ = v + h,
ũ = u + h/2 |τ|^2.
It can be checked that ũ, ṽ are bounded functions satisfying that ũ≤ṽ on X ×∂Σ and ũ (z_0) ≥ṽ (z_0) + h. By Wu-Yau's generalized maximum principle, there exists a sequence {p_k} in X ×Σ such that
lim_k →∞ (ũ - ṽ)(p_k) = sup_X ×Σ (ũ- ṽ) ≥ h, lim sup_k→∞ dd^c (ũ- ṽ) (p_k) ≤ 0.
For a sufficiently small constant δ > 0, there exists a point p ∈ X×Σ such that dd^cũ(p) - dd^cṽ(p) ≤ δΘ and η_0 = ũ(p) - ṽ(p) ≥ sup_X×Σ(ũ - ṽ) - δ. Fix a local holomorphic chart around p, {U, z^i : i = 1,…,n+1}, with z^{n+1} = τ. Without loss of generality, we assume U contains the unit ball in ℂ^{n+1} and that for any local vector field V ∈ T^{1,0}U,
C^{-1}|V|² ≤ Θ(V, V̄) ≤ C|V|²,
where the constant C only depends on the geometry of X and the reference metric Θ. Let 𝐞 = 2Cδ and η = η_0 - Cδ/2. To derive a contradiction, we construct the following local functions on U,
û = ũ - 𝐞|z|²,
v̂ = ṽ + η.
If we denote by B_1(p) the unit ball contained in the coordinate chart U, we have û(p) - v̂(p) = Cδ/2 > 0 and û ≤ v̂ on ∂B_1(p). Consider the following subset of B_1(p),
D = {z ∈ B_1(p) : û(z) > v̂(z)}.
Let ρ be the local potential of Ω in U, Ω = dd^cρ. According to Lemma <ref>,
∫_D [dd^c(ρ + v̂)]^{n+1} ≥ ∫_D [dd^c(ρ + û)]^{n+1}.
Taking 𝐞 ≤ λ/(4C), we have
dd^c(ρ + ũ - 𝐞|z|²) ≥ 1/2 dd^c(ρ + ũ).
Together with the construction of û and v̂ in (<ref>), (<ref>),
∫_D [dd^c(ρ + v̂)]^{n+1} ≥ ∫_D [dd^c(ρ + u + (h/2)|τ|² - 𝐞|z|²)]^{n+1}
≥ (hλ^n/2^{n+1}) ∫_D Θ^{n+1} + ∫_D [dd^c(ρ + u)]^{n+1} - 2𝐞Λ^n ∫_D Θ^{n+1}.
By picking 𝐞 smaller, 𝐞 ≤ hλ^n/(2^{n+4}Λ^n), and combining with (<ref>), we have
∫_D [dd^c(ρ + v̂)]^{n+1} ≥ ∫_D [dd^c(ρ + u)]^{n+1} + (hλ^n/2^{n+4}) ∫_D Θ^{n+1}
≥ ∫_D [dd^c(ρ + v̂)]^{n+1} + (hλ^n/2^{n+4}) ∫_D Θ^{n+1}.
Since the last added term is strictly positive, this is a contradiction, and the proof is complete.
Let φ = Φ - Ψ, where Φ is the solution of (E_ε). According to Theorem <ref>, we have a uniform lower bound φ ≥ 0; hence Φ ≥ Ψ. The upper bound is easy to obtain. Consider the function H = 2t(1-t) defined on X×Σ. Restricting to each section Σ_{x_0} = {x_0}×Σ, i_{x_0} : Σ_{x_0} ↪ X×Σ, we have
i_{x_0}^*(Θ_Ψ + dd^cH) ≤ 0 < i_{x_0}^*(Θ_Ψ + dd^cφ).
Hence, Δ_Σ H ≤ Δ_Σ φ in Σ_{x_0}, and H = φ = 0 on its boundary ∂Σ_{x_0}. The maximum principle on compact manifolds with boundary implies that φ ≤ H on each section. Hence, we get the desired uniform 𝒞^0 estimate,
Ψ ≤ Φ ≤ Ψ + H.
§ A PRIORI ESTIMATE UP TO 𝒞^1
For the 𝒞^1 bound, Blocki gives an explicit estimate in the compact setting in <cit.>. We generalize this estimate to the noncompact case. The 𝒞^1 boundary estimate follows directly from the fact that Ψ ≤ Φ ≤ Ψ + H in X×Σ and that Ψ, Φ, Ψ + H agree along X×∂Σ. Let ∇ be the Levi-Civita connection of Θ_Ψ on X×Σ. Then we have
|∇Φ|_{Θ_Ψ} ≤ max{|∇Ψ|_{Θ_Ψ}, |∇(Ψ + H)|_{Θ_Ψ}} on X×∂Σ.
Hence, sup_X×∂Σ |∇Φ|_{Θ_Ψ} ≤ C, where C is a uniform constant.
Let φ = Φ - Ψ ∈ 𝒞³_loc(X×Σ) be a solution of (E_ε) and let ∇ be the Levi-Civita connection of the Kähler metric Θ_Ψ on X×Σ. Assume that φ lies in the space 𝒞^1(X×Σ, Θ_Ψ). Then,
sup_X ×Σ|∇φ|_Θ_Ψ≤ C,
where C is a positive constant depending only on upper bounds for |φ|, on lower bounds for the bisectional curvature of Θ_Ψ, and on n, but not on ε.
Suppose that inf_X×Σ φ = A and sup_X×Σ φ = B. Consider the following function,
α = logβ - γ∘φ,
where β = |∇φ|²_{Θ_Ψ} and γ : [A, B] → ℝ is a smooth function to be determined later. Since φ lies in the space 𝒞^1, Yau's maximum principle can be applied here. In particular, there exists a sequence {x_k} in X×Σ^∘ such that,
lim_k →∞α (x_k) = sup_X ×Σα, lim_k →∞ |∇α (x_k)|_Θ_Ψ =0, lim sup_k →∞Δα (x_k) ≤ 0,
where Δ = Δ_Θ_Ψ. Then, for a sufficiently small 𝐞 >0 to be determined later and all k ≫ 1, we have
α(x_k) ≥sup_X ×Σα -𝐞, |∇α(x_k)|_Θ_Ψ≤𝐞, Δα (x_k) ≤𝐞.
Fixing O = x_k satisfying (<ref>), we can pick normal coordinates around O. Let g and g̃ denote the metric tensors corresponding to Θ_Ψ and Θ_Φ = Θ_Ψ + dd^cφ. Then there exist local holomorphic coordinates near O such that
g_ij̄(O) = δ_ij, g_ij̄,k(O) = 0, and g̃_ij̄(O) is diagonal.
Taking a derivative of α,
α_p = β_p/β - (γ'∘φ)·φ_p.
Combining with condition (<ref>), |α_p(O)| ≤ 𝐞. Then, at the point O, we have
α_pp̄ ≥ β_pp̄/β - [(γ')² + γ''] |φ_p|² - γ'φ_pp̄ - 𝐞|γ'||φ_p| - 𝐞.
If we write the local potential of g̃_ij̄ as u near O, then the ε-geodesic equation is locally given by det(u_ij̄) = υ(ε) det(g_ij̄). Differentiating the equation at O gives
∑_p u_pp̄j/u_pp̄ = (logυ(ε))_j.
Also notice that
β_pp̄ ≥ -Dβ + 2Re∑_j u_pp̄jφ_j̄ + ∑_j |φ_jp̄|² + φ_pp̄²,
where -D is a negative lower bound for the bisectional curvature of Θ_Ψ. Recall that we have the assumption C^{-1}g_ij̄ ≤ u_ij̄ ≤ Cg_ij̄ and |φ_p| < C, where C is the constant from our assumption at the beginning of this section; we will get rid of this constant in the end. Together with (<ref>) and (<ref>), we have
C𝐞 ≥ ∑_p α_pp̄/u_pp̄ ≥ (γ' - D)∑_p 1/u_pp̄ + (1/β)∑_{j,p} |φ_jp̄|²/u_pp̄
- (2/β) ∑_j (logυ(ε))_j φ_j̄
- [(γ')² + γ''] ∑_p |φ_p|²/u_pp̄ - nγ' - C(|γ'|+1)𝐞.
According to Blocki's key observation in <cit.>, adapted to our case, at the point O we have
(1/β)∑_{j,p} |φ_jp̄|²/u_pp̄ ≥ (γ')² ∑_p |φ_p|²/u_pp̄ - 2γ' - (2 + C𝐞)/β - C(1+|γ'|)𝐞,
and,
assuming that β ≥ 1, we have
(2/β)∑_j (logυ)_j φ_j̄ ≥ -2|∇logυ(ε)|/√β ≥ -2(n+1)|∇(υ(ε)^{1/(n+1)})|/υ(ε)^{1/(n+1)} ≥ -V ∑_p 1/u_pp̄,
where V is a uniform constant satisfying
V ≥ 2(n+1) |∇(υ(ε)^{1/(n+1)})|.
Combining with (<ref>),
C(1+|γ'|)𝐞 ≥ (γ' - D - V)∑_p 1/u_pp̄ - γ'' ∑_p |φ_p|²/u_pp̄ - (n+2)γ' - 2.
Now, we choose the function γ and the small number 𝐞 > 0 in (<ref>) as follows. Let γ = (D+V+3)(t-A) - (B-A)^{-1}(t-A)² and 𝐞 ≤ C^{-1}(D+V+3)^{-1}; then we have
∑_p 1/u_pp̄ + (2/(B-A)) ∑_p |φ_p|²/u_pp̄ ≤ 3 + (n+2)(D+V+3).
It is then straightforward to conclude that
β(O) ≤ max{[(n+3)(D+V+3)]^{n+1} n(B-A), 1}. Noting that β ≤ exp{𝐞 + logβ(O) - γ∘φ(O) + γ∘φ}, β is controlled by a uniform constant depending only on ‖φ‖_{L^∞}, D, V and n.
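The elementary properties of the auxiliary function γ used above, namely γ' - D - V ∈ [1, 3] on [A, B] and -γ'' = 2/(B-A), can be checked symbolically (a sympy sketch, purely illustrative):

import sympy as sp
t, A, B, D, V = sp.symbols('t A B D V', real=True)
gamma = (D + V + 3) * (t - A) - (t - A)**2 / (B - A)
print(sp.diff(gamma, t).subs(t, A))                 # D + V + 3
print(sp.simplify(sp.diff(gamma, t).subs(t, B)))    # D + V + 1
print(sp.diff(gamma, t, 2))                         # -2/(B - A)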
§ A PRIORI ESTIMATE UP TO 𝒞^1,1
First we deal with the uniform 𝒞² boundary estimate on X×∂Σ. The technique is to construct local barrier functions near the boundary, completely parallel to <cit.>. The statement is the following:
Let the data (X ×Σ, Θ_Ψ, φ) be the same as in Proposition <ref>. Let ∇ denote the Levi-Civita connection of Θ_Ψ on X ×Σ. Then
sup_X×∂Σ|∇^2 φ|_Θ_Ψ≤ C,
where the constant C only depends on sup_X×Σ |∇φ|_Θ_Ψ and on (X ×Σ, Θ_Ψ).
Fixing a point p ∈ X×∂Σ, we pick local holomorphic coordinates around p such that the coordinate system is normal in X; in the Σ direction, we keep the standard coordinate functions of the annulus, denoted by {x_1,…,x_2n, x_2n+1 = t, x_2n+2 = s}, with the corresponding holomorphic coordinates z_i = x_2i-1 + ix_2i. Throughout the proof, we assume the metric tensor g associated with Θ_Ψ satisfies mδ_ij ≤ g_ij ≤ Mδ_ij. In general, we need to prove the boundary 𝒞² estimate at p in the tangential-tangential, tangential-normal and normal-normal directions respectively. However, the tangential-tangential estimate is trivial in our case, and the normal-normal estimate follows directly from the tangential-normal estimate. Here, we briefly summarize the proof of the tangential-normal estimate by explicitly constructing barrier functions.
Consider a small neighborhood near p, B'_δ(p) = (X ×Σ) ∩ B_δ (p), where the small constant δ will be determined later. Firstly, we construct the following auxiliary function in B'_δ (p),
v = φ+ N t(1-t),
where N is a large constant to be determined. Then, it can be checked easily that
Δ̃v ≤ n+1 - m∑_i g̃^iī - Ng̃^{(n+1)(n+1)},
where g̃ denotes the metric tensor associated with Θ_Φ = Θ_Ψ + dd^cφ and Δ̃ denotes the corresponding Laplacian. Notice that
-(m/2)∑_i g̃^iī - Ng̃^{(n+1)(n+1)} ≤ -(m/2)N^{1/(n+1)} (det g̃)^{-1/(n+1)} = -(m/2)N^{1/(n+1)} υ(ε)^{-1/(n+1)} (det g)^{-1/(n+1)}.
By taking N = [(n+1)(2/m)]^{n+1} max_{B'_δ(p)} (det g), we have Δ̃v ≤ -(m/2)∑_i g̃^iī. Noting that φ = Φ - Ψ ≥ 0, we have v ≥ 0 on ∂B'_δ(p).
w = A v + B |z|^2±∂/∂ x_kφ, for 1≤ k ≤ 2n or k =2n+2.
By differentiating the Monge-Ampère equation (E_ε) in the local coordinates, we obtain
±Δ̃(∂φ/∂x_k) = ±(g̃^ij̄ g̃_ij̄,k - g̃^ij̄ g_ij̄,k) ≤ C(1 + ∑_i g̃^iī),
where A and B are large positive constants to be determined. According to the 𝒞^1 estimate of φ, we may assume that |∂_kφ| ≤ C. By picking a constant B so large that B|z|² ± ∂_kφ ≥ 0 on ∂B'_δ(p), we have w ≥ 0 on ∂B'_δ(p). Then, we choose a large constant A such that Δ̃w ≤ 0 in B'_δ(p). Then, by the maximum principle, w ≥ 0 in B'_δ(p). Together with the fact that w(p) = 0, we have ∂_t w ≥ 0 at p, which implies the tangential-normal estimate on the boundary.
Lemma <ref> together with Yau's standard calculation for the Laplacian estimate implies the following interior Laplacian estimate; we refer to <cit.>.
Let φ be the solution of (E_ε) and Δ, Δ̃ the Laplacian operators of g = Θ_Ψ and g̃ = Θ_Φ = Θ_Ψ + dd^cφ respectively. Then, for any constant C,
Δ̃(e^{-Cφ}(n+1+Δφ)) ≥ e^{-Cφ}(Δlogυ(ε) - (n+1)² inf_{i,l}(R_iīll̄))
- Ce^{-Cφ}(n+1)(n+1+Δφ)
+ (C + inf_{i,l}(R_iīll̄)) e^{-Cφ}(n+1+Δφ)^{1+1/n} υ(ε)^{-1/n},
where R denotes the curvature tensor of g. From this, we can deduce the estimate
sup_X×Σ |Δφ| ≤ C(1 + sup_X×∂Σ |Δφ|),
where C only depends on sup_X×Σ φ and on a negative lower bound for inf_{i,l}(R_iīll̄).
Lemma <ref>, together with Lemma <ref>, implies that there exists a uniform constant C, depending only on sup_X×Σ Δφ, such that εC^{-1}g_ij̄ ≤ g̃_ij̄ ≤ Cg_ij̄. This is already enough to apply the standard local regularity theory of the Monge-Ampère equation to prove 𝒞^k,α estimates for any k ≥ 2 that depend on a positive lower bound for ε. In this way the equation (E_ε) can be solved using the continuity path (E_s), s ∈ [ε,1]. However, in order to construct an honest geodesic by letting ε → 0, we require a full 𝒞^1,1 estimate which is uniform in ε. In <cit.>, 𝒞^1,1 regularity is proved in the compact case. The method can also be applied in the ALE Kähler setting.
Let the data (X×Σ, Θ_Ψ, φ) be the same as in Proposition <ref>. If φ lies in the space 𝒞²(X×Σ, Θ_Ψ), then there exists a constant C such that
|∇²φ|_{Θ_Ψ} ≤ C,
where ∇ again denotes the Levi-Civita connection of the metric Θ_Ψ and C depends only on (X×Σ, Θ_Ψ) and on sup_X×Σ |φ|, sup_X×Σ |∇φ|_{Θ_Ψ}, sup_X×Σ |Δφ|, sup_X×∂Σ |∇²φ|_{Θ_Ψ}.
We again write g for the metric tensor associated with Θ_Ψ. Let λ_1(∇^2 φ) be the largest eigenvalue of the real Hessian ∇^2 φ. By observing that there exists a uniform constant C such that λ_1 (∇^2 φ) ≤ |∇^2 φ|_g ≤ C λ_1 (∇^2 φ) +C, it suffices to prove that λ_1 (∇^2 φ) has a uniform upper bound. Consider the following quantity,
Q = logλ_1 (∇^2 φ) + h (|∇φ|_g^2) -A φ,
where h is defined to be h(s) = -1/2log (1 + sup_X ×Σ |∇φ|_g^2 -s ) and A is a uniform large positive constant to be determined later. We can further modify this quantity to Q_𝐞 = Q - 𝐞 r, where 𝐞 is a small positive constant to be determined later. According to the assumption that |∇^2 φ| is bounded and hence so is Q, the modified quantity Q_𝐞 attains its maximum at some point x_𝐞∈ X ×Σ. The same argument as in Lemma <ref> implies that lim_𝐞→ 0 Q(x_𝐞) = sup_X ×Σ Q. In the following, we assume 𝐞 is small enough such that |Q(x_𝐞)- sup_X ×Σ Q| < 1 and always write p=x_𝐞. Since Q_𝐞 might not be smooth at p if the eigenspace of λ_1(∇^2 φ) (p) has dimension greater than one, a perturbation argument used in <cit.> can be applied to the quantity Q_𝐞 here.
Fix normal coordinates (z_1,…,z_{n+1}) with respect to g at p such that (φ_ij̄) is diagonal at p. Define the corresponding real coordinates (x_1,…,x_2n+2) by z_i = x_2i-1 + ix_2i. Let λ_1 ≥ λ_2 ≥ … ≥ λ_2n+2 be the eigenvalues of ∇²φ at p and V_1,…,V_2n+2 the corresponding unit eigenvectors at p. The eigenvectors can be extended to vector fields with constant coefficients in a small neighborhood of p, also denoted by V_1,…,V_2n+2, and can be represented as V_α = V^β_α ∂_x_β in the local coordinates. The perturbation argument is to perturb ∇²φ locally around p so as to ensure that λ_1 > λ_2 near p. Precisely, consider the following locally defined tensor field,
P = ∑_{α,β} (δ_αβ - V^α_1 V^β_1) dx_α ⊗ dx_β.
Let λ_i' = λ_i(∇²φ - P). Then, one can easily check that λ'_1(p) = λ_1(p) and λ'_i(p) = λ_i(p) - 1 for i ≥ 2. Hence, there exists a neighborhood of p on which λ'_1 > λ'_2 ≥ … ≥ λ'_2n+2 and λ'_1 ≤ λ_1. Consider the following perturbed quantities,
Q̂ = logλ'_1 + h(|∇φ|^2_g) -A φ, Q̂_𝐞 = Q̂ - 𝐞 r.
Therefore, Q̂_𝐞 is a smooth quantity with a local maximum at p. Then, we have
|dQ̂(p)|_g ≤ C𝐞, Δ̃Q̂(p) ≤ C𝐞.
The following inequality follows directly from the calculation in <cit.>. The only information we need in the calculation is the second derivative of the Monge-Ampère equation at p. We will not repeat the details here.
By assuming λ_1' ≥ 1 at p, and again writing g for the metric tensor associated with Θ_Φ=Θ_Ψ+dd^cφ, we have
Δ̃Q̂ ≥ 2∑_{α>1} g̃^iī |∂_i(φ_V_αV_1)|²/(λ_1(λ_1-λ_α)) + g̃^iī g̃^jj̄ |V_1(g̃_ij̄)|²/λ_1 - g̃^iī |∂_i(φ_V_1V_1)|²/λ_1²
+ h' ∑_k g̃^iī (|φ_ik|² + |φ_ik̄|²) + h'' g̃^iī |∂_i |∇φ|²_g|²
+ (A-B) ∑_i g̃^iī - An,
where the constant B only depends on (X×Σ, g) and sup_X×Σ |∇φ|_g. To cancel the bad terms, we deal with the third term in (<ref>), λ_1^{-2} g̃^iī |∂_i(φ_V_1V_1)|². To estimate this term, we split it into the following two parts,
I_1 = (1-2δ) g̃^iī |∂_i(φ_V_1V_1)|²/λ_1²,
I_2 = 2δ g̃^iī |∂_i(φ_V_1V_1)|²/λ_1²,
where 0 < δ < 1/4 is to be determined later. For I_1, referring to <cit.> and assuming that λ_1' ≥ D/δ, where D only depends on (X×Σ, g) and sup_X×Σ Δφ, we have
I_1 ≤ ∑_{i,j} g̃^iī g̃^jj̄ |V_1(g̃_ij̄)|²/λ_1 + 2∑_{α>1} ∑_i g̃^iī |∂_i(φ_V_αV_1)|²/(λ_1(λ_1-λ_α)) + ∑_i g̃^iī.
To estimate I_2, recall that dQ̂_𝐞(p) = 0 and apply the formula for the derivative of eigenvalues, referring to <cit.>. Then, we have
I_2 = 2δ ∑_i g̃^iī |Aφ_i + h' ∂_i|∇φ|²_g - 𝐞r_i|²
≤ 8δA² ∑_i g̃^iī |φ_i|² + 2(h')² ∑_i g̃^iī |∂_i |∇φ|²_g|² + C𝐞 ∑_i g̃^iī.
Combining (<ref>), (<ref>), (<ref>) and Δ̃Q̂ ≤ C𝐞, and assuming λ_1' ≥ D/δ, we have
C𝐞 ≥ h' ∑_k g̃^iī (|φ_ik|² + |φ_ik̄|²) + (h'' - 2(h')²) g̃^iī |∂_i |∇φ|²_g|²
- 8δA² g̃^iī |φ_i|² + (A - B - C𝐞) ∑_i g̃^iī - An.
Notice that h'' = 2(h')². Picking 𝐞 ≤ 1/C, A = B+2 and δ = (8A²(sup_X×Σ |∇φ|² + 1))^{-1}, we then have
h' ∑_k g̃^iī (|φ_ik|² + |φ_ik̄|²) + ∑_i g̃^iī ≤ An + 1.
Recall that g̃_ij̄ ≤ Cg_ij̄, where C only depends on sup_X×Σ Δφ. Hence, at p, g̃^iī ≥ C^{-1}. Then,
λ_1(p) ≤ max{D/δ, ((An+1)C - n)(1 + sup_X×Σ |∇φ|_g²)}.
Together with the fact that sup_X ×Σ Q ≤ Q(p) +1, we prove that sup_X ×Σλ_1 is bounded by some uniform constant.
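The identity h'' = 2(h')² that makes the gradient terms cancel above is a one-line symbolic check (sympy; K stands for sup_X×Σ |∇φ|²_g):

import sympy as sp
s, K = sp.symbols('s K', positive=True)
h = -sp.Rational(1, 2) * sp.log(1 + K - s)
print(sp.simplify(sp.diff(h, s, 2) - 2 * sp.diff(h, s)**2))   # 0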
§ THE ASYMPTOTIC BEHAVIOR OF Ε-GEODESICS
In this section we prove Theorem <ref> on the asymptotic behavior of ε-geodesics for a fixed ε>0.
We use the notation introduced before Theorem <ref> and we assume ψ_0, ψ_1 ∈ ℋ_{-γ}(ω) (γ > 0).
Actually, we are mainly interested in the case -γ = 2 - 2τ̃, due to Theorem <ref>. In the ε-geodesic equation (E_ε), the derivatives of the function υ(ε) decay at infinity with order -ς, |∇^k υ(ε)| = O(r^{-ς-k}) with ς ≥ γ for k ≥ 1.
Without loss of generality, we assume ς ≥ γ > τ; otherwise Theorem <ref> can be proved more easily, without the iteration in Step 3.
We also write φ_ε = Φ_ε -Ψ, so that the solution is given by Θ + dd^cΦ_ε = Θ_Ψ + dd^cφ_ε with φ_ε = 0 on X ×∂Σ. In Aleyasin <cit.>, a rough idea is given to prove the asymptotic behavior of ε-geodesics by constructing barrier functions in the (strictly easier) special case where the asymptotic coordinates are J-holomorphic and the decay rate of the ALE Kähler metric to the Euclidean metric is high enough. However, even in this special case, the details are actually more involved than what is suggested in <cit.>. Here we give a complete proof in the general setting.
§.§.§ Step 1: Differentiating the Monge-Ampère equation.
The Monge-Ampère equation can be written explicitly in the asymptotic coordinates of X×Σ. As the complex structure J of X does not coincide with the Euclidean complex structure J_0 of the asymptotic coordinates in general, we will use real coordinates for clarity. By passing to the universal covering of the end, we are able to work with global coordinates. Precisely, let {z_1,…,z_n} be the asymptotic complex coordinates of ℂ^n∖B_R and w = t+is the complex coordinate of Σ. The corresponding real coordinates are {x_1,…,x_2n, x_2n+1 = t, x_2n+2 = s}, where z_k = x_2k-1 + ix_2k for k = 1,…,n. From now on:
∙ Latin indices i,j, … will denote the real coordinates from 1 to 2n+2.
∙ Greek indices α, β, … will denote the real coordinates from 1 to 2n.
∙ The bold Greek indices μ,ν will denote the real coordinates from 2n+1 to 2n+2.
In these coordinates, we write the Riemannian metric tensors corresponding to Θ_Ψ and Θ_Ψ + dd^c φ_ε as g_ij and (g_φ_ε)_ij, respectively.
Throughout this section, we work in the asymptotic chart of X. This allows us to use the Euclidean metric on (ℝ^2n∖B_R)×Σ as a reference metric to measure derivatives. This is helpful because it enables us to write down equations with a good structure. Let |·|_0 denote the Euclidean length, ∇_0 the Euclidean Levi-Civita connection and ∇_0,X (∇_0,Σ) the component of ∇_0 acting only in the space (time) directions on (ℝ^2n∖B_R)×Σ.
Then, the equation (E_ε) can be written as
√(det((g_φ_ε)_ij)) = υ(ε) √(det(g_ij)).
Recall that υ satisfies conditions in (<ref>).
By differentiating the log of both sides by D_α = ∂ / ∂_x_α, we have
g_φ_ε^ij D_α (g_φ_ε)_ij = g^ij D_α g_ij + D_αlogυ(ε).
The first goal is to rewrite the equation (<ref>) to be an elliptic equation in terms of D_αφ_ε. Let e_1, …, e_2n+2 represent the real coordinate vector fields of x_1, …, x_2n+2. Notice that (g_φ_ε)_ij = g_ij + dd^c φ_ε (e_i, J e_j). We compute D_α of the second term:
D_α [dd^c φ_ε (e_i, Je_j )] = -d ∘ J ∘ d ( D_αφ_ε) (e_i , Je_j) - d ∘ (D_α J) ∘ d φ_ε (e_i, J e_j)
- d ∘ J ∘ d φ_ε (e_i, (D_α J) e_j).
Observe that D_α J is completely horizontal because J preserves the product structure of the tangent bundle T((^2n∖ B_R) ×Σ) and J|_TΣ is constant. Thus,
D_α J = (D_α J)^β_ξ (e_ξ^* ⊗ e_β), (D_α J)_ξ^μ = 0, (D_α J)^β_ν = 0, (D_α J)^μ_ν = 0,
where the coefficients (D_α J)^β_ξ depend only on x_1,…,x_2n and not on x_2n+1,x_2n+2.
In the same way, we can also see that
|∇_0,X^m (D_α J)|_0 = O(r^-τ-1-m) (all m ≥ 0), ∇_0,Σ^m (D_α J) = 0 (all m ≥ 1).
Moreover, it is obvious that
Δ_g_φ_ε (D_αφ_ε) = tr_g_φ_ε (dd^c(D_αφ_ε)(·, J·)) = g^ij_φ_ε dd^c(D_αφ_ε)(e_i, Je_j).
Then, (<ref>)–(<ref>) imply that
g_φ_ε^i j D_α [dd^c φ_ε (e_i, J e_j)] = Δ_g_φ_ε (D_αφ_ε) + 𝐎(r^-τ-1) ⊛ g_φ_ε^-1⊛∇_0 ∇_0,X φ_ε
+ 𝐎(r^-τ-2) ⊛ g_φ_ε^-1⊛∇_0,Xφ_ε,
where ⊛ denotes a contraction and 𝐎 denotes the following behavior of a tensor T:
T = 𝐎(r^-ρ) :⟺ |∇_0,X^mT|_0 = O(r^-ρ-m) (all m ≥ 0), ∇_0,Σ^m T = 0 (all m ≥ 1).
Then, abbreviating the estimates
|∇_0,X^m (D_α g_ij)|_0 = O(r^{-τ-1-m}) (all m ≥ 0),
|∇_0,X^m ∇_0,Σ (D_α g_ij)|_0 = O(r^{-2τ-1-m}) (all m ≥ 0), ∇_0,Σ^m (D_α g_ij) = 0 (all m ≥ 2),
by D_α g_ij = 𝐎(r^{-τ-1}), and
|∇^m_0,X ∇^k_0,Σ D_α logυ(ε)|_0 = O(r^{-ς-1-m}) (all m, k ≥ 0)
by D_α logυ(ε) = 𝐎(r^{-ς-1}),
the equation (<ref>) can be rewritten as
Δ_g_φ_ε (D_αφ_ε) = 𝐎(r^-τ-1) ⊛ g_φ_ε^-1⊛∇_0 ∇_0,X φ_ε
+ 𝐎(r^-τ-2) ⊛ g_φ_ε^-1⊛∇_0,Xφ_ε
+ D_α g_ij· (g_φ_ε^ij - g^ij), D_α g_ij = 𝐎(r^-τ-1),
+ 𝐎 (r^-ς-1)
We will later use this formula in full but for now it is enough to take absolute values. Using the fact that Λ^{-1}εδ_ij ≤ (g_φ_ε)_ij ≤ Λδ_ij, and according to the uniform estimates of |∇_0 φ_ε|_0 and |∇_0² φ_ε|_0 from Theorem <ref>, the formula (<ref>) implies that
|Δ_g_φ_ε (D_αφ_ε)| ≤ Cε^{-1} r^{-τ-1}
for some constant C = C(‖φ_ε‖_𝒞²(X×Σ,Θ_Ψ), Λ, g, J) bounded above independently of ε.
§.§.§ Step 2: Barrier estimate of the first derivatives.
The next target is to construct the upper barrier and lower barrier functions to control |D_αφ_ε|. Consider a smooth cutoff function χ: _≥ 0→ satisfying χ(x) = 0 for x≤ 1, χ(x) =1 for x ≥ 2 and |χ'(x)| ≤ 4, |χ” (x)| ≤ 4 for 1 ≤ x ≤ 2. The function D_αφ_ε can be extended smoothly to X ×Σ by defining
h =
χ_R_0· D_αφ_ε,
where R_0 is a large positive constant to be determined later such that {r(p) ≥ R_0 /2} is contained in the asymptotic chart of X and χ_R_0 (p) = χ (r(p )/ R_0). From (<ref>),
|Δ_g_φ_ε h| ≤
Cε^-1 r^-τ-1, for r ≥ 2R_0,
4 Λε^-1( R_0^-2 |∇_0,Xφ_ε|_0 + R_0^-1 |∇_0,X^2 φ_ε|_0 ) + Cε^-1 R_0^-τ-1, for R_0 ≤ r ≤ 2 R_0,
0, for r ≤ R_0.
Then, we can pick a barrier function as follows:
u_1 = E {(1-χ_R_0/2) (R_0/2)^-τ-1 t(t-1) + χ_R_0/2 t(t-1) r^-τ-1}≤ 0,
where the constant E is to be determined later. The barrier function u_1 is defined in X×Σ with u_1=0 on X ×∂Σ. We also have
Δ_g_φ_ε u_1 = 1/2 tr_g_φ_ε (dd^c u_1(·,J·)) = 1/2 {∑_{1≤α,β≤2n} g^αβ_φ_ε (u_1,αβ + u_1,Jα,Jβ)
+ ∑_{1≤α≤2n,
2n+1≤μ≤2n+2} g^μα_φ_ε (u_1,αμ + u_1,Jα,Jμ)
+ ∑_{2n+1≤μ,ν≤2n+2} g^μν_φ_ε (u_1,μν + u_1,Jμ,Jν)}.
Using the estimate Λ^-1εδ_i j≤ (g_φ_ε)_i j≤Λδ_i j, we obtain that
Δ_g_φ_ε u_1 ≥
E ( Λ^-1 r^-τ-1 - Λε^-1 r^-τ-2 - Λε^-1 r^-τ-3) for r ≥ R_0/2,
E Λ^-1 R_0^-τ-1 for r ≤ R_0/2.
By taking
R_0 ≥ 4Λ²ε^{-1}, E = 8R_0^τ C with C = C(‖φ_ε‖_𝒞²(X×Σ,Θ_Ψ), Λ, g, J),
and comparing with the inequality (<ref>), we have Δ_g_φ_ε u_1 ≥Δ_g_φ_ε h. Together with the fact that u_1= h =0 on X ×∂Σ, Lemma <ref> implies that h ≥ u_1 in X ×Σ. The same method shows the upper bound h ≤ -u_1, which, together with the lower bound, implies that for each spatial index 1≤α≤ 2n,
|D_αφ_ε| ≤ C(‖φ_ε‖_C^2(X ×Σ,Θ_Ψ), Λ, g, J, ε^-1) t(1-t) r^-τ-1 on {r ≥ 2R_0}×Σ.
§.§.§ Step 3: Barrier estimate of the second derivatives.
We now deal with the asymptotic behavior of the second derivatives.
For a preliminary estimate, we go back to the full formula (<ref>) for Δ_g_φ_ε(D_αφ_ε). For every 𝐚∈ (0,1), the Euclidean 𝒞^0,𝐚 norm of the right-hand side on a restricted unit ball B̂_1(p) = B_1(p) ∩ ((ℝ^2n∖ B_R) ×Σ) with r(p) = r ≥ 2R is still bounded by C(‖φ_ε‖_C^2(X ×Σ,Θ_Ψ), Λ, g, J, ε^-1) r^-τ-1 thanks to the Evans-Krylov estimates applied to φ_ε in the interior and the estimates of <cit.> at the boundary. (The precise dependence of this constant on the ellipticity, and hence on ε^-1, is not clear but also not needed.) Likewise, the 𝒞^0,𝐚 norm of the coefficient tensor of the PDE, g_φ_ε^-1, is bounded by C(‖φ_ε‖_C^2(X ×Σ,Θ_Ψ), Λ, g, J, ε^-1). Applying the classic interior and boundary Schauder estimates to (<ref>), we thus obtain from (<ref>) that
‖D_αφ_ε‖_𝒞^2,𝐚(B̂_1(p))≤ C(‖φ_ε‖_C^2(X ×Σ, Θ_Ψ), Λ, g, J, ε^-1) r^-τ-1.
These estimates will now be used to start a bootstrap to obtain some decay for D_β D_αφ_ε using the same barrier method as in Step 2. Differentiate the equation (<ref>) again by D_β = ∂ /∂ x_β for 1≤β≤ 2n. This yields
Δ_g_φ_ε (D_β D_αφ_ε) = g_φ_ε^-1⊛ D_β g_φ_ε⊛ g_φ_ε^-1⊛∇_0^2 ∇_0,Xφ_ε
+𝐎(r^-τ-2) ⊛ g_φ_ε^-1⊛∇_0 ∇_0,Xφ_ε
+ 𝐎(r^-τ-1) ⊛ g_φ_ε^-1⊛ D_β g_φ_ε⊛ g_φ_ε^-1⊛∇_0 ∇_0,Xφ_ε
+ 𝐎(r^-τ-1) ⊛ g_φ_ε^-1⊛∇_0 ∇_0,X^2φ_ε
+ 𝐎(r^-τ-3) ⊛ g_φ_ε^-1⊛∇_0,Xφ_ε
+ 𝐎(r^-τ-2) ⊛ g_φ_ε^-1⊛ D_β g_φ_ε⊛ g_φ_ε^-1⊛∇_0,Xφ_ε
+ 𝐎(r^-τ-2) ⊛ g_φ_ε^-1⊛∇_0,X^2φ_ε
+D_β D_α g_ij· (g_φ_ε^ij - g^ij), D_β D_α g_ij = 𝐎(r^-τ-2),
+𝐎(r^-τ-1) ⊛ (g_φ_ε^-1⊛ D_β g_φ_ε⊛ g_φ_ε^-1 - g^-1⊛ D_β g ⊛ g^-1)
+ 𝐎 (r^-ς-2).
As before, we have that Λ^-1g^-1≤ g_φ_ε^-1≤ε^-1Λ g^-1, and we also have
|D_β g_φ_ε|_0 ≤ |D_β g|_0 + |∇_0,X∇_0^2 φ_ε|_0 = O(r^-τ-1)
thanks to the preliminary estimate (<ref>). Similarly, all derivatives of φ_ε on the right-hand side of (<ref>) are at worst of order 3, with at least one purely spatial derivative, and hence can be bounded by O(r^-τ-1) thanks to (<ref>). In this way, we obtain that
Δ_g_φ_ε (D_β D_αφ_ε) = 𝐎(r^-τ-2) ⊛ (g_φ_ε^-1 - g^-1) + O(r^-2τ-2).
The majority of terms on the right-hand side actually decay faster than O(r^-2τ-2), and the only term that might decay more slowly is 𝐎(r^-τ-2) ⊛ (g_φ_ε^-1 - g^-1). So far, we can only bound this by O(r^-τ-2). However, by applying the same method as in the weighted estimate of the first derivative in Step 2, we can then construct the following barrier function for D_β D_αφ_ε:
u_2 = E' {( 1- χ_R_0/2) (R_0/2)^-τ -2 t(t-1) + χ_R_0/2 t(t-1)r^-τ-2},
where R_0 is the same constant as in (<ref>) and E' is another uniform constant depending on R_0, ‖φ_ε‖_C^2(X ×Σ,Θ_Ψ), Λ, g, J and on the constant of (<ref>). Hence, we get the weighted estimate for D_β D_αφ_ε:
|D_β D_αφ_ε| ≤ C(‖φ_ε‖_C^2(X ×Σ, Θ_Ψ), Λ, g, J, ε^-1) r^-τ-2.
According to the full formula (<ref>) for Δ_g_φ_ε (D_β D_αφ_ε) and (<ref>), in the restricted unit ball B̂_1(p), the 𝒞^0,𝐚 norms of all terms on the right-hand side of (<ref>) are bounded by C(‖φ_ε‖_C^2, Λ, g, J, ε^-1) r^-τ-2. Applying the classic interior and boundary Schauder estimates to (<ref>), we thus obtain from (<ref>) that
‖D_α D_βφ_ε‖_𝒞^2,𝐚(B̂_1(p))≤ C(‖φ_ε‖_C^2(X ×Σ, Θ_Ψ), Λ, g, J, ε^-1) r^-τ-2.
§.§.§ Step 4: Iterative improvement of the barrier estimates.
In this step, we improve the decay order of the estimates we obtain in (<ref>) and (<ref>) by an iteration argument. Recall that from Steps 2–3 we have the following weighted estimates to start the iteration process (see (<ref>) and (<ref>)):
‖∇_0,Xφ_ε‖_𝒞^0,𝐚(B̂_1 (p)) + ‖∇_0∇_0,Xφ_ε‖_𝒞^0,𝐚(B̂_1 (p))
+ ‖∇_0^2∇_0,Xφ_ε‖_𝒞^0,𝐚(B̂_1 (p)) = O(r^-τ-1),
‖∇^2_0,Xφ_ε‖_𝒞^0,𝐚(B̂_1 (p)) + ‖∇_0 ∇_0,X^2φ_ε‖_𝒞^0,𝐚(B̂_1 (p)) = O(r^-τ-2).
To complete the iteration argument, we need to improve the decay of the term g_φ_ε^-1 - g^-1. More precisely, this term occurs in the combination (g_φ_ε^ij - g^ij) D_α g_ij in the first derivative estimate (Step 2), and in the combinations (g_φ_ε^ij - g^ij) D_β D_α g_ij and [g_φ_ε^ik D_β (g_φ_ε)_kl g_φ_ε^jl - g^ik D_β g_kl g^jl] D_α g_ij (to get the optimal decay rate of D_α D_βφ_ε, we need to analyze this term) in the second derivative estimate (Step 3). We will now analyze these combinations more carefully. All constants in this step may depend on ‖φ_ε‖_C^2(X ×Σ, Θ_Ψ), Λ, g, J, ε^-1. Let φ be a continuous function defined in (X∖ B_R) ×Σ with at most polynomial growth rate at infinity. For brevity, we introduce the notation (φ)^♯ for the decay rate of φ, and (D_X φ)^♯, (D_X^2 φ)^♯ for the decay rates of ‖∇_0,Xφ‖_𝒞^0,𝐚(B̂_1 (p)) + ‖∇_0∇_0,Xφ‖_𝒞^0,𝐚(B̂_1 (p)) + ‖∇_0^2∇_0,Xφ‖_𝒞^0,𝐚(B̂_1 (p)) and ‖∇^2_0,Xφ‖_𝒞^0,𝐚(B̂_1 (p)) + ‖∇_0 ∇_0,X^2φ‖_𝒞^0,𝐚(B̂_1 (p)), respectively.
The metric tensor (g_φ_ε)_ij and its inverse can be written as (2n+2) × (2n+2)-matrices
𝐏 =
[ P η^t; η 𝔭 ], 𝐏^-1 =
[ Q ξ^t; ξ 𝔮 ],
where P, Q are 2n × 2n-matrices, 𝔭, 𝔮 are 2 × 2-matrices and η, ξ are 2 × 2n-matrices. By direct calculation, we have
Q = P^-1 - P^-1η^tξ, ξ=- 𝔭^-1η Q, 𝔮 = (I_2 - ξη^t)𝔭^-1.
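These identities follow from multiplying out the blocks; we record the quick verification for the reader's convenience. From 𝐏𝐏^-1 = I_2n+2,
PQ + η^t ξ = I_2n, η Q + 𝔭ξ = 0,
which give Q = P^-1 - P^-1η^tξ and ξ = -𝔭^-1η Q, while the bottom-right block of 𝐏^-1𝐏 = I_2n+2 reads ξη^t + 𝔮𝔭 = I_2, i.e. 𝔮 = (I_2 - ξη^t)𝔭^-1.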
The fact that Λ^-1ε I_2n+2≤𝐏≤Λ I_2n+2 implies that |ξ| ≤ C |η|. The weighted estimate (<ref>), together with the fall-off condition of the metric g, implies that |η| = O(r^(D_X φ_ε)^♯). Then, from (<ref>), we have
Q= P^-1 + O(|η|^2), ξ = O(|η|), 𝔮 = 𝔭^-1 + O(|η|^2).
Similarly, let 𝐏' denote the matrix of g in asymptotic coordinates. If we write
𝐏' =
[ P' (η')^t; η' 𝔭' ], (𝐏')^-1 =
[ Q' (ξ')^t; ξ' 𝔮' ],
then we have
Q'= (P')^-1 + O(|η'|^2), ξ' = O(|η'|), 𝔮' = (𝔭')^-1 + O(|η'|^2),
where |η'| = O(r^-γ-1). According to the estimate (<ref>), |P - P'| = O(r^(D_X^2 φ_ε)^♯) and hence | P^-1 - (P')^-1| = O(r^(D_X^2 φ_ε)^♯) as well because P,P' are uniformly bounded. Moreover, 𝔭, 𝔮, 𝔭', 𝔮' are all uniformly equivalent to I_2 but there is no reason for 𝔭 - 𝔭' to decay. Then (<ref>) and (<ref>) imply that
|Q- Q'| = O(r^max{ (D_X^2 φ_ε)^♯, 2 (D_X φ_ε)^♯, -2γ-2 } ),
|ξ-ξ'| = O(r^max{ (D_X φ_ε)^♯, -γ -1 }),
|𝔭- 𝔭' | = O(1).
Then, by calculating blockwise and using that D_αg_μν = 0, we have
( g_φ_ε^i j - g^i j ) D_α g_i j = |Q- Q'| O( r^-τ-1 ) + |ξ -ξ'| O(r^-γ-2),
( g_φ_ε^i j - g^i j ) D_β D_α g_i j = |Q- Q'| O( r^-τ-2 ) + |ξ -ξ'| O(r^-γ-3).
By inserting (<ref>), (<ref>) into (<ref>), we have
|D_αφ_ε| = O(r^max{(D_X^2 φ_ε)^♯-τ-1, (D_X φ_ε)^♯ -τ-1, -2γ-3, -ς-1}).
For the last but one term of (<ref>),
D_α g_ij(g_φ_ε^ik g^jl_φ_ε D_β (g_φ_ε)_kl - g^ik g^jl D_β g_kl) = D_α g_ij g^ik_φ_ε g^jl_φ_ε (D_β (g_φ_ε)_kl -D_β g_kl )
+ D_α g_ij D_β g_kl g_φ_ε^jl (g^ik_φ_ε- g^ik)
+ D_α g_ij D_β g_kl g^ik (g^jl_φ_ε- g^jl).
For the first term on the right-hand side of (<ref>), by using |D_β (g_φ_ε)_kl - D_β g_kl| ≤ |∇_0^2 ∇_0,Xφ_ε|, we obtain that its decay rate is (D_X φ_ε)^♯ -τ -1. For the second and third terms, we need to analyze (g^ik_φ_ε - g^ik). Similar to (<ref>), we have
D_α g_ij D_β g_kl g^ik (g^jl_φ_ε- g^jl) = |Q-Q'| O(r^-2τ -2) + |ξ-ξ'| O(r^-τ-γ-3) + O(r^-2γ-4).
By inserting (<ref>) into (<ref>), we have
D_α g_ij( g_φ_ε^ik g^jl_φ_ε D_β (g_φ_ε)_kl - g^ik g^jl D_β g_kl) = O(r^max{(D_X φ_ε)^♯ -τ-1, (D_X^2 φ_ε)^♯ -2τ -2, -2γ-4}).
Then, inserting (<ref>), (<ref>) into (<ref>), we have
|D_α D_βφ_ε| = O (r^max{ (D_X^2 φ_ε)^♯-τ -1, (D_X φ_ε)^♯ -τ-1, -2γ-4, -ς-2 }).
We can go one step further by applying Schauder estimates to (<ref>) and (<ref>) to obtain 𝒞^2,𝐚 estimates for D_αφ_ε and D_α D_βφ_ε in B̂_1 (p). Indeed, those terms on the right-hand side of the PDEs (<ref>), (<ref>) that were known to decay pointwise with rate max{(D_X^2 φ_ε)^♯, (D_X φ_ε)^♯} -τ-1 already after Step 3 are actually also decaying at rate max{(D_X^2 φ_ε)^♯, (D_X φ_ε)^♯} -τ-1 in 𝒞^0,𝐚 (B̂_1 (p)) norm.
This is clear from (<ref>). So we just need to find the decay rates of the most difficult terms, (g_φ_ε^ij - g^ij) D_α g_ij in (<ref>) and (g_φ_ε^ij - g^ij) D_β D_α g_ij, D_α g_ij (g_φ_ε^ik g^jl_φ_ε D_β (g_φ_ε)_kl - g^ik g^jl D_β g_kl) in (<ref>), in 𝒞^0,𝐚(B̂_1(p)) norm as well. For this we need to go back and also estimate the 𝒞^0,𝐚-norms of Q-Q' and ξ-ξ' in B̂_1(p), as follows. By using (<ref>), we have that
[P^-1- (P')^-1]_𝒞^0,𝐚(B̂_1(p)) = O(r^(D_X^2 φ_ε)^♯ ), [ξ]_𝒞^0,𝐚(B̂_1(p)) = O(r^(D_X φ_ε)^♯).
Then, based on (<ref>), we have that
[Q-Q']_𝒞^0,𝐚(B̂_1(p)) = O(r^(D_X^2 φ_ε)^♯, 2 (D_X φ_ε)^♯, -2γ-2),
[ξ- ξ']_𝒞^0,𝐚(B̂_1(p)) = O(r^max{ (D_X φ_ε)^♯, -γ-1 }).
Then we can proceed as in (<ref>) and (<ref>), obtaining that the decay rates of [Δ_φ_ε D_αφ_ε]_^0,𝐚(B̂_1 (p)) and [Δ_φ_ε D_α D_βφ_ε]_^0,𝐚(B̂(p)) are max{(D_X^2 φ_ε)^♯-τ-1, (D_X φ_ε)^♯ -τ-1, -2γ-3, -ς-2 } and max{(D_X^2 φ_ε)^♯-τ-1, (D_X φ_ε)^♯ -τ-1, -2γ-4, -ς-2} respectively. According to the classic interior and boundary Schauder estimates, we improve (<ref>) to ^2,𝐚 (B̂_1 (p)) norm,
D_αφ_ε_^2,𝐚 (B̂_1 (p)) = O(r^max{(D_X^2 φ_ε)^♯-τ-1, (D_X φ_ε)^♯ -τ-1, -2γ-3, -ς-1 }),
D_α D_βφ_ε_^2,𝐚 (B̂_1 (p)) = O (r^max{ (D_X^2 φ_ε)^♯-τ -1, (D_X φ_ε)^♯ -τ-1, -2γ-4, -ς- 2}).
Inserting (<ref>) into (<ref>), and using (<ref>) again to improve (<ref>), we finally obtain the following estimates:
D_αφ_ε_^2,𝐚 (B̂_1 (p)) = O(r^max{-2γ-3, -ς-1}), D_α D_βφ_ε_^2,𝐚 (B̂_1 (p)) = O (r^max{ -2γ-4, -ς-2 }).
Note that according to (<ref>), because Ψ was chosen to be linear in t, the decay rate of φ_ε is faster than the decay rate of the boundary data ψ_0,ψ_1.
§.§.§ Step 5: Proof of Theorem <ref>
In Step 4, we have obtained the optimal decay rates in the cases of k=1,2 (even though it is not required in the proof of Theorem <ref>). In this step, we give optimal estimates for k≥ 3 and complete the proof of Theorem <ref>.
For the higher order derivatives, differentiate the Monge-Ampère equation (<ref>) m times and write D_K = D_κ_1⋯ D_κ_m (1 ≤κ_i ≤ 2n for i =1,…, m). Instead of a full formula like (<ref>) or (<ref>), we record a simplified bound for Δ_φ_ε D_K φ_ε:
|Δ_φ_ε D_K φ_ε| ≤∑_i=1^m O(r^-τ -2 -m+i ) |∇_0,X^i φ_ε| + ∑_i=1^m O(r^-τ -1 -m +i) |∇_0∇_0,X^iφ_ε |
+ ∑_i=1^m-1 O(r^-τ -m +i) |∇_0^2 ∇_0,X^i φ_ε| + ∑_i=1^m |∇_0,X^i g_jl| |∇_0,X^m-i (g^jl_φ_ε - g^jl)|
+ O(r^-ς-m).
Applying induction on m, and using the iteration process of Step 4, we may assume that for k ≤ m-1
||∇^k_0,Xφ_ε||_B̂_1(p)= O(r^max{-2γ-2, -ς}-k).
To find the optimal decay rates, the most difficult term is ∑_i=1^m |∇_0,X^i g_jl| |∇_0,X^m-i (g^jl_φ_ε - g^jl)|. Notice that
by (<ref>) and (<ref>), we have
|D_K_1g_jl D_K_2 g_ik (g^ij_φ_ε- g^ij)| = O (r^-2γ-2- k_1-k_2),
where K_1, K_2 are k_1-, k_2-multi-indices respectively. Then, we apply induction on k to find the decay rate of |D_K_1g_jl D_K_2 g_ik D_K (g^ij_φ_ε- g^ij)|, where K is a k-multi-index. Applying one derivative to (g^ij_φ_ε-g^ij), by (<ref>), we can prove that
|D_K_1g_jl D_K_2 g_ik D_K (g^ij_φ_ε- g^ij)| = O(r^-2γ-2-k_1-k_2-k).
Then, by (<ref>) and (<ref>), we have
|∇_0,X^i g_jl| |∇_0,X^m-i (g^jl_φ_ε - g^jl)| ≤ |∇^i_0,X g_jl| {|∇_0,X^m-i-1[g^jk_φ_ε g^sl_φ_ε (∇_0,X (g_φ_ε)_ks -∇_0,X g_ks) ] |
+ 2 | ∇^m-i-1[ g_φ_ε^jk (g^ls_φ_ε- g^ls)∇_0,X g_ks] | }
= O(r^-2γ-2-m).
Combining with (<ref>), we have that the right-hand side of (<ref>) is O(r^-2τ+2-m). Using the construction of barrier functions in Steps 2–3, we obtain that |D_K φ_ε| ≤ C r^-2τ+2-m. To apply Schauder estimates to the m-th derivative of the Monge-Ampère equation, we also need to know the decay rate of [Δ_φ_ε D_K φ_ε]_𝒞^0,𝐚(B̂_1(p)):
[Δ_φ_ε D_K φ_ε]_^0,𝐚 (B̂(p)) ≤∑_i=1^m O(r^-τ -2 -m+i ) ∇_0,X^i φ_ε_^0,𝐚 (B̂(p))
+ ∑_i=1^m O(r^-τ -1 -m +i) ∇_0∇_0,X^iφ_ε_^0,𝐚 (B̂(p))
+ ∑_i=1^m-1 O(r^-τ -m +i) ∇_0^2 ∇_0,X^i φ_ε_^0,𝐚 (B̂(p))
+ ∑_i=1^m|∇_0,X^i g_jl| |∇_0,X^m-i (g^jl_φ_ε - g^jl)|_^0,𝐚 (B̂(p))+ O (r^-ς-m)
= O(r^max{-2γ-2, -ς}-m)
Hence, we have D_K φ_ε_^2,𝐚(B̂_1(p))≤ C r^max{-2γ-2, -ς}-m, for m ≥ 1.
To prove that φ_ε is in 𝒞_-γ, by integrating (φ_ε)_r = O(r^max{-2γ-2, -ς}-1) in the radial direction from infinity to r=R, we obtain a function φ̂_ε defined in X∖ B_R with decay rate max{-2γ-2, -ς}. Then,
φ_ε - φ̂_ε = c(θ, t),
where c(θ, t) is a function in X ∖ B_R independent of the radius r, and θ is viewed as a variable on the link. It suffices to prove that c(θ, t) is independent of θ. By taking the derivative of (<ref>), we have |∇_0, X c(θ, t)| = O(r^max{-2γ-2, -ς}-1). If c(θ, t) were not constant with respect to θ, then |∇_0,X c(θ, t)| ∼ r^-1, which contradicts the fact that max{-2γ-2, -ς} <-1. Hence we have proved that φ_ε = c(t) + O(r^max{-2γ-2, -ς}). We conclude that, for Φ_ε = φ_ε + Ψ,
sup_(ℝ^2n∖ B_R) ×Σ(|∇^k_0,XΦ_ε| + |∇^k_0,XΦ̇_ε| + |∇^k_0,XΦ̈_ε|) ≤ C(k,ε^-1) r^-γ-k for all k ≥ 1.
In conclusion, we have proved Theorem <ref>.
§ CONVEXITY OF THE MABUCHI K-ENERGY
According to Theorem <ref> (assuming τ = τ̃), we can restrict ourselves to the space
ℋ_-2τ+2 = {φ∈𝒞^∞_-2τ+2: ω_φ = ω + dd^c φ >0 }, τ > n-1,
and the function υ(ε) is constructed by (<ref>) and (<ref>).
In the previous section, we proved that for any two given boundary data ψ_0,ψ_1 ∈ℋ_-2τ+2, there exists a solution of the ε-geodesic equation (E_ε) in the same space ℋ_-2τ+2.
The derivative of the Mabuchi K-energy 𝒦 can be defined as follows: for ψ∈ T_φℋ_-2τ+2,
δ_ψ𝒦(φ) = -∫_Xψ R(ω_φ) ω_φ^n/n!.
The integral converges because -2-2τ < -2n, equivalently, τ > n-1. In the following proposition, the second derivative of the Mabuchi K-energy will be computed on M = X_R = {x ∈ X: r(x) ≤ R }, keeping track of the boundary terms, and it will be clear that these boundary terms go to zero as R →∞. Precisely, we consider the Mabuchi K-energy restricted to M,
δ_ψ𝒦_M (φ) = -∫_Mψ R(ω_φ) ω_φ^n.
The calculation of the second variation of 𝒦_M is due to my advisor Bianca Santoro in one of her unpublished notes, several years before I started this project. The limiting case R →∞ was previously stated by Aleyasin <cit.> without details concerning the vanishing of boundary terms.
To simplify the notation, in the following proposition, we write R_φ = R(ω_φ), ρ_φ = ρ(ω_φ) for the Ricci form, Δ = g^ik_φ∂_i ∂_k, |·|= |·|_ω_φ, ∇= ∇_ω_φ and 𝒟 f = ∇_i∇_k f dz^i dz^k, where ∇_i ∇_k f = f_, ik is a covariant derivative of f with respect to ω_φ. Recall that 𝒟 is called the Lichnerowicz operator, and 𝒟 f = 0 if and only if grad^1,0f is a holomorphic type (1,0) vector field.
[Santoro]
Along a path of potentials φ(t) ∈ℋ_-2τ+2,
d^2 𝒦_M/dt^2 =
- ∫_M [φ̈- 1/2 |∇φ̇|^2] R_φω_φ^n
+ ∫_M |𝒟φ̇|^2 ω_φ^n
- n(n-1)/2∫_∂ Mφ̇d^c φ̇∧ρ_φ∧ω_φ^n-2
+ ni ∫_∂ Mφ̇g_φ^kl (ρ_φ)_ilφ̇_k d z^i ∧ω_φ^n-1
- ni ∫_∂ Mφ̇g_φ^ijφ̇_, ikj d z^k ∧ω_φ^n-1
- ni ∫_∂ M g_φ^k lφ̇_lφ̇_,ki dz^i ∧ω_φ^n-1.
Furthermore, by taking R →∞ in (<ref>), we have
d^2 𝒦/dt^2 = -∫_X [φ̈ - 1/2 |∇φ̇|^2 ] R_φω_φ^n/n! + ∫_X |𝒟φ̇|^2 ω_φ^n/n!.
By taking the second derivative of Mabuchi K-energy in M, we have
d^2𝒦_M/dt^2 = d/dt[- ∫_M R_φφ̇ ω_φ^n]
= - ∫_M φ̈ R_φ ω_φ^n
-n ∫_M φ̇d/dt(ρ_φ) ∧ω_φ^n-1
- n (n-1) ∫_M φ̇ρ_φ∧ω_φ^n-2∧( i∂∂φ̇).
The second term of (<ref>) needs one integration by parts, and we get
-n ∫_M φ̇d/dt(ρ_φ) ∧ω_φ^n-1 = -n ∫_M φ̇[ - i ∂∂( d/dt(logω_φ^n) )∧ω_φ^n-1]
=
n ∫_M φ̇[ i ∂∂( n ω_φ^n-1∧ i ∂∂φ̇/ω_φ^n)∧ω_φ^n-1]
=
∫_M φ̇(Δ^2 φ̇)ω_φ^n.
Now we turn to the term ∫_M φ̇ρ_φ∧ω_φ^n-2∧ i∂∂φ̇. For simplicity, write φ̇= u:
∫_M φ̇ρ_φ∧ω_φ^n-2∧ i∂∂φ̇
=
-i∫_M ∂ u ∧∂ u ∧ρ_φ∧ω_φ^n-2 + 1/2∫_∂ M u d^c u ∧ρ_φ∧ω_φ^n-2
= -i∫_M ∂ u ∧∂ u ∧ρ̊_φ∧ω^n-2_φ
- i/n∫_M ∂ u ∧∂ u ∧ R_φω^n-1_φ
+ 1/2∫_∂ M u d^c u ∧ρ_φ∧ω_φ^n-2
= - i∫_M ∂ u ∧∂ u ∧ρ̊_φ∧ω^n-2_φ
- 1/2n^2∫_M |∇ u|^2 R_φω^n_φ
+ 1/2∫_∂ M u d^c u ∧ρ_φ∧ω_φ^n-2,
where ρ̊_φ is the traceless part of the Ricci form ρ_φ. If ψ is any primitive (1,1)-form, then
*ψ = -1/(n-2)!ψ∧ω^n-2, and hence n(n-1) ρ̊_φ∧ω_φ^n-2 =
-n!(*ρ̊_φ).
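As a quick sanity check of this classical identity in the lowest dimension n = 2 (where (n-2)! = 1 and ω^n-2 = 1), it reduces to *ψ = -ψ: a primitive (1,1)-form on a Kähler surface is anti-self-dual, in accordance with the decomposition Λ^2 = Λ^+ ⊕Λ^- in real dimension four.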
Hence,
n(n-1) ∫_M i∂ u ∧∂ u ∧ρ̊_φ∧ω_φ^n-2
=
-∫_M n! (*ρ̊_φ) ∧ (i ∂ u ∧∂ u)
=
-∫_M ⟨ρ̊_φ, i∂ u ∧∂ u ⟩ ω_φ^n
=
-∫_M ⟨ρ_φ, i∂ u ∧∂ u ⟩ ω_φ^n +
∫_M ⟨1/nR_φ ω_φ ,i∂ u ∧∂ u ⟩ ω_φ^n.
Note that
∫_M ⟨1/nR_φ ω_φ , i ∂ u ∧∂ u ⟩ ω_φ^n
= ∫_M ⟨ (n-1)! R_φ ω_φ , i ∂ u ∧∂ u⟩ ω_φ^n/n!
=
∫_M (i ∂ u ∧∂ u)∧ *[(n-1)! R_φω_φ]
=
∫_M R_φ (i ∂ u ∧∂ u) ∧ω_φ^n-1
=
1/2n∫_M |∇ u|^2R_φ ω_φ^n.
Thus, we get that
d^2 𝒦_M/dt^2
=
- ∫_M [φ̈- 1/2 |∇φ̇|^2] R_φ ω_φ^n - ∫_M ⟨ρ_φ,i ∂ u ∧∂ u⟩ ω_φ^n
+ ∫_M u (Δ^2 u) ω_φ^n - n(n-1)/2∫_∂ M u d^c u ∧ρ_φ∧ω_φ^n-2.
Let f be a smooth function defined on M. Then we have that
Δ^2 f = 𝒟^* 𝒟 f - g_φ^ik g_φ^jl (ρ_φ)_il f_j k - g_φ^ik g_φ^jl (∇_k (ρ_φ)_il) f_j.
Hence,
∫_M u(Δ^2 u) ω_φ^n - ∫_M⟨ρ_φ, i∂ u ∧∂ u ⟩ω_φ^n
= ∫_M |𝒟 u|^2 ω_φ^n
+ ni ∫_∂ M u g_φ^kl (ρ_φ)_il u_k d z^i ∧ω_φ^n-1
- ni ∫_∂ M u g_φ^ij∇_j∇_k∇_i u d z^k ∧ω_φ^n-1
- ni ∫_∂ M g_φ^k l u_l u_,ki dz^i ∧ω_φ^n-1.
Notice that
∇_j∇_k∇_i f = ∇_k∇_j∇_i f - R_i^m_jk f_m .
Then, we have
Δ^2 f = g^ij_φ g^k l_φ∇_l∇_k∇_j∇_i f
=
g^ij_φ
g^kl_φ∇_l(∇_j∇_k∇_i
f - R_k^m_ij f_m )
= 𝒟^* 𝒟 f - g_φ^kl g_φ^mj (ρ_φ)_kj f_m l - g_φ^kl g_φ^mj (∇_l (ρ_φ)_kj) f_m.
Here 𝒟^* 𝒟 = g^ij g^kl∇_l∇_j∇_k ∇_i. Then, we have
∫_M u 𝒟^* 𝒟 u ω_φ^n = ∫_M g_φ^kl( u g_φ^ij∇_j∇_k∇_i u )_lω_φ^n
- ∫_M ( g_φ^kl u_l g_φ^ij∇_j∇_k ∇_i u ) ω_φ^n.
The Stokes' theorem can be applied to the first term in the above formula by observing that if we write 𝔥 = ih_k dz^k = i(u g_φ^ij∇_j∇_k∇_i u) d z^k, then g_φ^kl (h_k)_lω_φ^n = n ∂𝔥∧ω_φ^n-1. Hence,
∫_M g_φ^kl( u g_φ^ij∇_j∇_k∇_i u )_lω_φ^n = ∫_∂ M n 𝔥∧ω_φ^n-1.
Similarly,
-∫_M( g_φ^kl u_l g_φ^ij∇_j∇_k ∇_i u ) ω_φ^n
= - ∫_M g_φ^ij(g_φ^kl u_l∇_k ∇_i u)_jω_φ^n
+ ∫_M |𝒟 u|^2 ω_φ^n
= ∫_M |𝒟 u|^2 ω_φ^n
- ni ∫_∂ M g_φ^kl u_l u_, ki dz^i ∧ω_φ^n-1.
We have
∫_M u Δ^2 u
= ∫_M |𝒟 u|^2
- ∫_M u g_φ^kl g_φ^mj (ρ_φ)_kj u_m l
- ∫_M u g_φ^kl g_φ^mj(∇_l (ρ_φ)_kj) u_m
-ni ∫_∂ M u g_φ^ij∇_j∇_k∇_i u d z^k ∧ω_φ^n-1 - ni ∫_∂ M g_φ^k l u_l u_,ki dz^i ∧ω_φ^n-1.
Notice that
-∫_M ⟨ρ_φ, i ∂ u ∧∂ u ⟩ω_φ^n
= -∫_M g_φ^ij g_φ^k l (ρ_φ)_il u_k u_jω_φ^n,
and integrating by parts,
-∫_M g_φ^ij g^kl_φ (ρ_φ)_il u_k u_jω_φ^n
= ∫_M g^ij_φ( g^kl_φ (ρ_φ)_il u_k u)_jω_φ^n
+ ∫_M u g^ij_φ g^kl_φ (ρ_φ)_i l u_kjω_φ^n
+ ∫_M u g_φ^ij g_φ^kl(∇_j (ρ_φ)_il) u_k ω_φ^n
= ni ∫_∂ M u g_φ^kl (ρ_φ)_il u_k d z^i ∧ω_φ^n-1
+ ∫_M u g^ij_φ g^kl_φ (ρ_φ)_i l u_kjω_φ^n
+ ∫_M u g_φ^ij g_φ^kl(∇_j (ρ_φ)_il) u_k ω_φ^n.
Hence, we proved that
∫_M u(Δ^2 u) ω_φ^n - ∫_M⟨ρ_φ, i∂ u ∧∂ u ⟩ω_φ^n
= ∫_M |𝒟 u|^2 ω_φ^n
+ ni ∫_∂ M u g_φ^kl (ρ_φ)_il u_k d z^i ∧ω_φ^n-1
- ni ∫_∂ M u g_φ^ij∇_j∇_k∇_i u d z^k ∧ω_φ^n-1
- ni ∫_∂ M g_φ^k l u_l u_,ki dz^i ∧ω_φ^n-1,
which completes the proof of the lemma.
The integration formula (<ref>) in this lemma, together with (<ref>), completes the proof of (<ref>).
It suffices to show that all boundary terms in this formula vanish as R→∞. According to Theorem <ref>, we can check that the decay rates of the integrands integrated on ∂ M are at most -2τ-1 < -2n +1. This completes the proof.
Assume that ω is an ALE Kähler metric on X such that the Ricci curvature of ω is non-positive, ρ(ω) ≤ 0. Then, along each ε-geodesic φ(t) in ℋ_-2τ+2(ω), the Mabuchi K-energy is convex.
The proof is parallel to Chen <cit.>. Here, we just do the calculation in the ALE setting. Define f = φ̈ - 1/2 |∇φ̇|_ω_φ^2. Then the ε-geodesic equation can be written as
εω^n/ω^n_φ = f.
According to (<ref>), together with the observation ρ(ω_φ) = ρ(ω) + dd^c log f, we have
d^2𝒦/dt^2 = ∫_X|𝒟φ̇(t)|^2_ω_φω_φ^n - ∫_X f R(ω_φ) ω_φ^n
= ∫_X|𝒟φ̇(t)|^2_ω_φω_φ^n - ∫_X f tr_ω_φρ(ω) ω_φ^n - ∫_X f Δ_ω_φlog f ω^n_φ
= ∫_X|𝒟φ̇(t)|^2_ω_φω_φ^n - ∫_X f tr_ω_φρ(ω) ω_φ^n + ∫_X |∇ f|^2_ω_φ/f ω_φ^n≥ 0.
We have the last equality because f|∇log f|_ω_φ = O(r^-2τ-1) and -2τ-1 < -(2n-1), so that the relevant boundary integral vanishes. Hence, we have proved the convexity of the Mabuchi K-energy.
A quick corollary of Theorem <ref> is that, assuming ρ(ω) ≤ 0, the scalar-flat Kähler metric, if it exists, is unique in ℋ_-2τ+2(ω). The proof is also parallel to Chen <cit.>. However, if there exists a scalar-flat Kähler metric in ℋ_-2τ+2(ω), the condition ρ(ω) ≤ 0 implies ρ(ω) = 0. Hence, the uniqueness of scalar-flat ALE Kähler metrics can be reduced to the uniqueness result for Ricci-flat ALE Kähler metrics (which can be found in many references <cit.>). A short proof is given as follows. Let ω_0 be a scalar-flat Kähler metric in ℋ_-2τ+2(ω). The fact ω_0 = ω + O(r^-2τ) implies that the ADM masses of ω and ω_0 are equal, 𝔪(ω) = 𝔪(ω_0). According to the mass formula of Hein-LeBrun <cit.>,
𝔪(ω) = A(n, c_1(X), [ω]) + B(n) ∫_X R(ω) ω^n/n!,
where A(n, c_1(X), [ω]) is a constant only determined by the dimension n, the first Chern class of X and the cohomology class of ω, and B(n) only depends on the dimension n. The fact 𝔪(ω) = 𝔪(ω_0), together with the mass formula, implies that
∫_X R(ω) = ∫_X R(ω_0) = 0.
The assumption ρ(ω) ≤ 0 then implies that R(ω) ≡ 0. Then, by a simple argument, we can prove that all scalar-flat ALE Kähler metrics in [ω] are actually Ricci-flat. The expansion of scalar-flat Kähler metrics (Theorem <ref>) implies that the Ricci form ρ(ω_0) decays to zero at infinity with decay rate faster than -2n. The ∂∂-lemma implies that there exists f ∈𝒞^∞_2-2n such that
ρ(ω_0) = dd^c f.
Taking the trace with respect to g, we have that Δ f = 0. By solving the Laplacian equation (for instance, see <cit.>), there is a unique solution in the space 𝒞^∞_-δ (for -δ∈ (-∞, 0)∖ D). Hence f≡ 0, which implies that ω_0 is Ricci-flat.
§ NONEXISTENCE OF NON-POSITIVE (OR NON-NEGATIVE) RICCI CURVATURE
Consider the standard family of negative line bundles 𝒪(-k) over ℙ^n-1 together with their natural projections π : 𝒪(-k) →ℙ^n-1. The total spaces of 𝒪(-k) are fundamental examples of ALE Kähler manifolds, viewing 𝒪(-k) as a resolution space of ℂ^n/ℤ_k.
Let ω be any ALE Kähler metric on 𝒪(-k) asymptotic to the Euclidean metric with decay rate -τ (τ> 0). In the following, we shall prove that the Ricci curvature of ω cannot have a sign in the case k ≠ n. When k = n, there always exists a Ricci-flat ALE Kähler metric in each compactly supported ALE Kähler class, see <cit.>.
Let 𝒪(-k) be the standard negative line bundle over ℙ^n-1 with n ≥ 2 and k ≠ n. Let
ω be an ALE Kähler metric on 𝒪(-k) with decay rate -τ (τ > 0). Then, the Ricci form ρ of ω is of mixed type, i.e., neither ρ≥ 0 nor ρ≤ 0 holds.
Notice that for each integer k ≥ 1, there is a compactification of 𝒪(-k) by adding a divisor at infinity, D_∞≅ℙ^n-1. We denote the compactified manifold by M_k; the natural embedding j: 𝒪(-k) → M_k is holomorphic, and M_k is a ℙ^1-bundle over ℙ^n-1. Denote by D_0 the divisor corresponding to the base manifold, ℙ^n-1⊂𝒪(-k)↪ M_k. Then, the normal line bundles of D_0 and D_∞ are given by
N_D_0/M_k = 𝒪(-k), N_D_∞/M_k = 𝒪(k).
The following facts on the geometry of M_k can be checked by viewing M_k as a smooth toric variety. M_k can be described by 2n coordinate charts with coordinates {U_i; u_i^1, … , u_i^n-1, u_i }, {V_i; v_i^1, …, v_i^n-1, v_i } (0 ≤ i ≤ n-1), where the coordinates are related by
(u^1_i, …, u^n-1_i, u_i) = (1/u^i_0, u_0^1/u^i_0, …, u^i_0/u^i_0, …, u^n-1_0/u^i_0, (u^i_0)^k u_0 ), 1 ≤ i ≤ n-1,
(v^1_i, …, v^n-1_i, v_i) = (u^1_i, …, u^n-1_i, 1/u_i), 0 ≤ i ≤ n-1.
The divisor classes of M_k are generated by the class of D_0 ≅^n-1, the zero section of (-k) ⊂ M_k, and the class of D_f, the total space of the restriction of the ^1-bundle M_k → D_0 to a linear subspace of D_0. Restricting D_0 and D_f to U_0, we can write
D_0 = (u_0 =0), D_f = (u_0^1 =0).
The divisor at infinity, D_∞, can be represented by (u_0 =∞) = (v_0 =0) and D_∞ can be represented in terms of D_0 and D_f as follows,
D_∞ = D_0 + k D_f.
By viewing D_0, D_f and D_∞ as smooth complex hypersurfaces of M_k, the Poincaré duals of D_0, D_f and D_∞ have natural explicit representatives denoted by ρ_0, ρ_f, ρ_∞ respectively. For instance, in U_0,
ρ_0|_U_0 = 1/nπ i ∂∂log(1+ ∑_j|u_0^j|^2)^k |u_0|^2 +1/(1+∑_j|u_0^j|^2)^k|u_0|^2,
ρ_f |_U_0 = 1/nπ i ∂∂log(1 +∑_j |u_0^j|^2 ),
ρ_∞ |_U_0 = 1/nπ i ∂∂log[(1+ ∑_j |u_0^j|^2)^k |u_0|^2 + 1].
§.§.§ Step 1: Extension of the ALE Ricci form to M_k.
Recall that the diffeomorphism Φ: (ℂ^n)^* /ℤ_k →𝒪(-k) ∖ D_0 gives a holomorphic asymptotic chart of 𝒪(-k). The diffeomorphism Φ can be explicitly written as
Φ : (ℂ^n)^* →𝒪(-k)∖ D_0, Φ (z_1, …, z_n) |_U_0= ( z_2/z_1, …, z_n/z_1, z_1^k).
In the coordinate chart {U_0; u_0^1, …, u_0^n-1, u_0}, we have r^2k = (1+ ∑_j|u_0^j|^2)^k |u_0|^2.
By the asymptotic condition of ω, in an asymptotic chart of (-k), log (ω^n/ ω_0^n) can be viewed as a function of decay order O(r^-τ), where ω_0 is the standard Euclidean metric on the asymptotic chart. Thus, the Ricci form satisfies
ρ = -i∂∂logω^n/ω^n_0 = O(r^-τ-2).
The adjunction formula tells us that as line bundles over 𝒪(-k),
K_𝒪(-k) = n-k/k[D_0].
Since ρ is the curvature form of a Hermitian metric on K_(-k)^-1 and ρ_0 is the curvature form of a Hermitian metric on [D_0], it follows that
ρ + n-k/kρ_0 is globally i∂∂-exact.
By restricting ρ_0 in (<ref>) to the asymptotic chart of (-k), we have
ρ_0 = -i∂∂log (1+ r^-2k).
Hence, by Theorem <ref>, ρ can be written as
ρ = -n-k/kρ_0 + i∂∂ f for f ∈𝒞^∞_-τ'(𝒪(-k)), τ' = min{ 2k, τ} > 0.
Since ρ cannot be extended smoothly to M_k, we define a smooth cut-off function χ,
χ(t) =
1, 0 ≤ t ≤ 1,
0, t ≥ 2,
smooth, 1 < t <2,
and we define χ_R (t) = χ(t/R). Applying the cutoff function, we can extend ρ to be
ρ_R = -n-k/kρ_0 + i ∂∂( χ_R f), in M_k ∖ D_∞,
-n-k/kρ_0, on D_∞.
§.§.§ Step 2: Integral argument for n=2.
Recall that the intersection numbers between D_∞, D_0 and D_f are given by
(D_0)·(D_0) = -k, (D_0)· (D_f) = 1, (D_f)·(D_f) =0, (D_0)·(D_∞) =0.
In particular, if we integrate ρ over D_0, then
∫_D_0ρ = ∫_D_0ρ_R = ∫_M_kρ_R∧ρ_0 = -2-k/k∫_M_kρ_0^2 = 2-k.
On the other hand, we have ρ_R →ρ pointwise and ρ_R = O(r^-τ'-2) uniformly as R →∞. Hence, by the dominated convergence theorem,
∫_{u^1_0 =0 }ρ =lim_R→∞∫_{u^1_0 =0 }ρ_R = ∫_M_kρ_R∧ρ_f = -2-k/k∫_M_kρ_0 ∧ρ_f = k-2/k.
Now assume that ρ is seminegative (or semipositive). Then the left-hand sides of both (<ref>) and (<ref>) are non-positive (or non-negative). However, the right-hand sides have opposite signs because k ≠ 2. This is a contradiction.
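For a concrete illustration (the numbers follow directly from (<ref>) and (<ref>)): for k = 1, we get ∫_D_0ρ = 2-k = 1 > 0 while ∫_{u^1_0 = 0}ρ = (k-2)/k = -1 < 0; for k = 3 the signs are reversed, giving -1 and 1/3 respectively. In either case a semidefinite ρ is impossible.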
§.§.§ Step 3: Integral argument for n≥ 3.
In higher dimensions, the difficulty is to calculate the intersection numbers of divisors. However, in the case of M_k, we can apply the formula for intersection numbers on toric varieties <cit.>, or, more explicitly, integrate the formulas (<ref>)–(<ref>) for the Poincaré duals. Notice that
∫_D_0ρ_0^n-1 = (-k)^n-1.
Then, we have
∫_D_0ρ^n-1 =∫_D_0ρ_R^n-1 = ∫_M_kρ_R^n-1∧ρ_0 = (-n-k/k)^n-1 (-k)^n-1.
On the other hand, we have ρ_R →ρ pointwise and ρ_R = O(r^-τ'-2) uniformly as R →∞. Hence, by the dominated convergence theorem,
∫_{u^1_0 =0 }ρ^n-1 =lim_R→∞∫_{u^1_0 =0 }ρ_R^n-1
=lim_R →∞∫_D_fρ_R^n-1 = lim_R →∞∫_M_kρ_R^n-1∧ρ_f
= ( -n-k/k)^n-1 ∫_M_kρ_0^n-1∧ρ_f = (-n-k/k)^n-1 (-k)^n-2,
where the last equality can be observed from (<ref>) and (<ref>):
∫_M_kρ_0^n-1∧ρ_f = ∫_M_kρ_0^n-1∧1/k(ρ_∞ - ρ_0) = 0 - 1/k∫_M_kρ_0^n = (-k)^n-2
because ρ_0|_D_∞ =0. By the same argument as in dimension 2, we complete the proof.
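Explicitly, the sign bookkeeping is as follows: (<ref>) evaluates to [(-(n-k)/k)·(-k)]^n-1 = (n-k)^n-1, while (<ref>) equals (n-k)^n-1/(-k) = -(n-k)^n-1/k, so the two integrals are both nonzero (since k ≠ n) and their ratio is -1/k < 0. On the other hand, if ρ were semipositive, both integrals of ρ^n-1 over these analytic subvarieties would be nonnegative, and if ρ were seminegative they would both carry the sign (-1)^n-1; either way they would share a sign, a contradiction.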
http://arxiv.org/abs/2307.00565v1 | 20230702132024 | Gauss-Bonnet modification to Hawking evaporation of AdS black holes in massive gravity | Hao Xu, Yun Du | gr-qc | gr-qc, hep-th
http://arxiv.org/abs/2307.01564v1 | 20230704083512 | Central limit theorem under the Dedecker-Rio condition in some Banach spaces | Aurélie Bigot | math.PR | math.PR
Central limit theorem under the Dedecker-Rio condition in some Banach spaces
Aurélie Bigot (LAMA, Univ Gustave Eiffel, Univ Paris Est Créteil, UMR 8050 CNRS, F-77454 Marne-La-Vallée, France)
===================================================================================================================
We extend the central limit theorem under the Dedecker-Rio condition to adapted stationary and ergodic sequences of random variables taking values in a class of smooth Banach spaces. This result applies to the case of random variables taking values in L^p(μ), with 2 ≤ p < ∞ and μ a σ-finite real measure. As an application we give a sufficient condition for empirical processes indexed by Sobolev balls to satisfy the central limit theorem, and discuss about the optimality of these conditions.
§ INTRODUCTION
Let (Ω, ℱ, ℙ) be a probability space and (X_i)_i ∈ℤ be a strictly stationary sequence of centered real-valued random variables in 𝕃^2, adapted to a stationary filtration (ℱ_i)_i ∈ℤ. In 2000, Dedecker and Rio proved in <cit.> the central limit theorem (CLT for short) for (X_n)_n under the condition
(DR) X_0 𝔼(S_n | ℱ_0) converges in 𝕃^1,
where S_n = X_1 + ⋯ + X_n.
Let us look at the particular case where we consider a Markov sequence (ξ_n)_n ∈ℤ with stationary distribution ν and transition operator P. Let X_n = f(ξ _n) be a centered functional of our Markov sequence such that ν(f^2) < ∞, then the CLT holds under the condition of convergence in 𝕃^1(ν) of the series ∑_n = 0^∞ f P^n f.
For α-mixing random variables, condition (<ref>) leads to the following condition:
∑_n =0^+∞∫_0^α(ℱ_0, σ(X_n)) Q_X_0^2 (u) du < ∞,
where Q_X_0 is the upper tail quantile function of X_0, and (α(ℱ_0, σ(X_n)))_n ≥ 1 is the sequence of strong mixing coefficients associated with (X_n)_n ∈ℤ (see for instance [(2.1)]DR2000).
The CLT under condition (<ref>) has been extended in <cit.> to random variables taking values in a separable Hilbert space. In this paper we extend this CLT to the case of r.v.'s taking values in a 2-smooth and Schauder decomposable Banach space. As we shall see, the main ingredients to obtain such a result are a martingale blocks decomposition, Theorem 5 in []Ros1982 and Theorem 2.1 in <cit.>. Typically, the L^p spaces for p ≥ 2 fit into our framework of Banach spaces, and this particular case will lead to a CLT for the empirical process as will be shown in Section 2.
Recently, several authors have extended other projective criteria valid in the real spaces context to the case of smooth Banach spaces.
For instance, the Hannan condition (see <cit.>) has been extended in <cit.> for random variables taking values in a 2-smooth Banach space having a Schauder basis. Such a condition can be written in the Banach space setting: ∑_n ∈ℤ (𝔼‖P_0(X_n)‖_𝔹^2)^1/2 < ∞, where P_0 is the operator defined by P_0 = 𝔼(·|ℱ_0) - 𝔼(·|ℱ_-1), 𝔹 is the real and separable Banach space and ‖·‖_𝔹 its associated norm.
Very recently the condition of Maxwell-Woodroofe (see <cit.>) has been extended by <cit.> in the Banach space setting. In the case of 2-smooth Banach spaces, the condition becomes ∑_n = 1^∞ n^-3/2(𝔼[‖𝔼(S_n|ℱ_0)‖_𝔹^2])^1/2 < ∞.
Note that it has been shown in <cit.> that the conditions by Dedecker-Rio, Hannan and Maxwell-Woodroofe are independent.
The paper is organised as follows. In Section 1, we state an extension of the CLT under the condition (<ref>) to the case of 2-smooth and Schauder decomposable Banach spaces. As a consequence, in Section 2, we derive in Corollary <ref> a sufficient condition for empirical processes indexed by Sobolev balls to satisfy the CLT, and discuss about the optimality of this condition. The proofs of the main results are postponed to Section 3.
§ A CLT IN SOME SMOOTH BANACH SPACES
In all the paper, (𝔹, ‖·‖_𝔹) will be a real and separable Banach space. We shall consider the class of Banach spaces that are 2-smooth. This notion, introduced by Pisier in [Section 3]Pis1975, plays the same role with respect to vector martingales as spaces of type 2 do with respect to sums of independent random vectors. Let us consider the following definition of 2-smooth Banach spaces.
Let (𝔹, ‖·‖_𝔹) be a separable Banach space and define ψ_2 : x ↦‖x‖_𝔹^2. (𝔹, ‖·‖_𝔹) is said to be 2-smooth if there exists D > 0 such that for any x,u ∈𝔹,
D^2ψ_2(x)(u,u) ≤ D^2 ‖u‖_𝔹^2 .
Here D^2ψ_2(x) denotes the usual second order Fréchet derivative of ψ_2 at point x.
As quoted in <cit.>, this definition implies the (2,D)-smoothness in the sense of Pisier meaning that
‖x+y‖_𝔹^2 + ‖x - y‖_𝔹^2 ≤ 2 ‖x‖_𝔹^2 + 2D^2‖y‖_𝔹^2, ∀ x,y ∈𝔹
.
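A short sketch of why this implication holds (assuming, as in the definition above, that ψ_2 is twice Fréchet differentiable): by Taylor's formula with integral remainder,
ψ_2(x+y) + ψ_2(x-y) - 2ψ_2(x) = ∫_0^1 (1-s)[D^2ψ_2(x+sy)(y,y) + D^2ψ_2(x-sy)(y,y)] ds ≤ D^2‖y‖_𝔹^2,
which is even slightly stronger than the displayed inequality above.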
We shall also need the notion of Schauder decomposable Banach spaces.
A family {x_n : n ∈ℕ} of elements of 𝔹 is called a Schauder basis if for any vector x ∈𝔹 there exists a unique series
∑_n=0^∞ a_n x_n, with a_n = a_n(x) ∈ℝ,
which converges to x with respect to ‖·‖_𝔹.
One of the properties of Schauder decomposable spaces is the uniform boundedness of the family of operators (P_n)_n, where P_n is the projection on the space generated by {x_k : k ≤ n}. More precisely, there exists a constant c>0 such that for any n ∈ℕ, ‖P_n‖≤ c, where ‖·‖ is the operator norm.
Let p ≥ 2 and μ a σ-finite measure on ℝ. As a consequence of [Proposition 5.1]LPP2011, L^p(μ) is Schauder decomposable. In addition, equipped with its usual norm, it is also (2, √(p-1))-smooth, as shown in [Proposition 2.1]Pin1994.
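As a consistency check on the smoothness constant: for p = 2, √(p-1) = 1 and L^2(μ) is a Hilbert space, where the parallelogram identity ‖x+y‖^2 + ‖x-y‖^2 = 2‖x‖^2 + 2‖y‖^2 shows that (2,1)-smoothness holds with equality.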
The main result of this paper is Theorem <ref> below which extends [Theorem 1]DR2000 to the Banach space setting.
Let (𝔹, ‖·‖_𝔹) be a 2-smooth and Schauder decomposable Banach space. Let (X_i)_i ∈ℤ be an ergodic stationary sequence of centered 𝔹-valued random variables, adapted to a non-decreasing and stationary filtration (ℱ_i)_i ∈ℤ and such that 𝔼(‖X_0‖_𝔹^2) < ∞.
Set S_n = ∑_k = 1^n X_k and assume that
‖X_0‖_𝔹 𝔼(S_n | ℱ_0) converges in 𝕃^1_𝔹.
Then (1/√(n)S_n)_n ≥ 1 converges in distribution to G, where G is a Gaussian 𝔹-valued random variable whose covariance operator is given by: K_G(x^*, y^*) = ∑_k ∈ℤ cov(x^*(X_0), y^*(X_k)) for any x^*, y^* ∈𝔹^*.
In view of applications, we shall give in Corollary <ref> sufficient conditions for (<ref>) to be verified. With this aim, we first introduce useful notations.
Let X and Y be real-valued random variables. Denote by:
* Q_X the generalized inverse of the upper tail function t ↦ℙ(|X| >t)
* G_X the inverse of x ∈ [0, ℙ(|X| >0)] ↦∫_0^x Q_X(u) du
* H_X,Y the generalized inverse of x ↦𝔼(X 𝟙_Y>x).
Consider a stationary sequence of random variables (X_i)_i ∈ℤ adapted to a non-decreasing and stationary filtration (ℱ_i)_i ∈ℤ. We define for any nonnegative integer k:
β_1,X(k) = sup_‖f‖_∞≤ 1‖ P_X_k |ℱ_0(f) - P_X_k(f) ‖_1
.
If there is no confusion on the r.v. to which we refer, we shall write β_1(k) instead of β_1,X(k) for the sake of clarity.
Let (𝔹, ‖·‖_𝔹) be a 2-smooth and Schauder decomposable Banach space. Let (X_i)_i ∈ℤ be an ergodic stationary sequence of centered 𝔹-valued random variables, adapted to a non-decreasing and stationary filtration (ℱ_i)_i ∈ℤ and such that 𝔼(‖X_0‖_𝔹^2) < ∞.
Set γ_i = 𝔼(‖𝔼(X_i|ℱ_0)‖_𝔹). Consider the conditions
* ∑_n ≥ 1∫_0^β_1(n) Q_‖X_0‖_𝔹^2(u) du < ∞,
* ∑_n ≥ 1∫_0^γ_n Q_‖X_0‖_𝔹∘ G_‖X_0‖_𝔹 (u) du < ∞
.
We have the implications (i) ⇒ (ii)
⇒ (<ref>).
Proof of Corollary <ref>.
By Berbee's coupling lemma (see [Lemma 5.1]Rio2017), there exists a random variable X_n^* distributed as X_n, independent of ℱ_0 and such that ℙ(X_n ≠ X_n^*) = β_1(n).
Hence
γ_n
= 𝔼(‖𝔼(X_n - X_n^*|ℱ_0)‖_𝔹) ≤𝔼(‖X_n-X_n^*‖_𝔹𝟙_X_n ≠ X_n^*)
≤𝔼(‖X_n‖_𝔹𝟙_X_n ≠ X_n^*) + 𝔼(‖X_n^*‖_𝔹𝟙_X_n ≠ X_n^*).
Using the same argument as in the proof of Proposition 1 in <cit.>, we get
𝔼(‖X_n‖_𝔹𝟙_X_n ≠ X_n^*) ≤∫_0^ℙ(X_n ≠ X_n^*) Q_‖X_n‖_𝔹(u) du,
implying that
γ_n
≤ 2 ∫_0^β_1 (n) Q_‖X_n‖_𝔹(u) du
and then G_‖X_0‖_𝔹^-1(γ_n/2) ≤β_1(n).
Therefore, using a change of variables, it follows that
∫_0^γ_n Q_‖X_0‖_𝔹∘ G_‖X_0‖_𝔹 (u) du
≤
2∫_0^γ_n /2 Q_‖X_0‖_𝔹∘ G_‖X_0‖_𝔹 (u) du
≤
2∫_0^β_1(n) Q^2_‖X_0‖_𝔹(u) du
.
This ends the proof of the first implication. The second implication is an immediate consequence of the proof of Proposition 1 in <cit.>, replacing the absolute values by the norm ‖·‖_𝔹.
In view of applications, let us give the following result which specifies the rates of decrease of (β_1(k))_k > 0 and moments of X_0_𝔹 for (<ref>) to hold. Its proof follows directly from [Annex C with p = 2]Rio2017.
Let (𝔹, ‖·‖_𝔹) be a 2-smooth and Schauder decomposable Banach space. Let (X_k)_k ∈ℤ be an ergodic stationary sequence of centered 𝔹-valued random variables such that 𝔼(‖X_0‖_𝔹^2) < ∞, and adapted to a non-decreasing and stationary filtration (ℱ_k)_k. Assume that one of the following conditions holds:
* there exists r > 2 such that 𝔼(‖X_0‖_𝔹^r) < ∞ and ∑_n ≥ 0 (n+1)^2/(r-2)β_1(n) < ∞
* there exist r > 2 and c > 0 such that for any x, ℙ(‖X_0‖_𝔹 > x) ≤ c/x^r and ∑_n ≥ 0β_1(n)^1-2/r < ∞
* there exist a > 0 and τ > 0 such that 𝔼(‖X_0‖_𝔹^2 (log (1+‖X_0‖_𝔹))^a) < ∞ and β_1(n) = O(e^-τ n^1/a).
Then ∑_n ≥ 1∫_0^β_1(n) Q_‖X_0‖_𝔹^2(u) du < ∞ is verified and the conclusion of Theorem <ref> holds.
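To make the polynomial conditions concrete, suppose β_1(n) = O((n+1)^-b) for some b > 0 (a hypothetical rate, for illustration only). Then the series in the first condition converges as soon as b > 1 + 2/(r-2) = r/(r-2), and the series in the second as soon as b(1-2/r) > 1, that is, again b > r/(r-2): the two conditions ask for the same polynomial mixing rate, and trade a stronger moment assumption in the first against a weaker tail assumption in the second.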
§ APPLICATIONS TO THE EMPIRICAL PROCESSES OVER SOBOLEV BALLS FOR DEPENDENT SEQUENCES
Let us consider (Y_i)_i ∈ℤ a stationary and ergodic sequence of real random variables, whose cumulative distribution function is denoted F, and define F_n(t) = 1/n∑_k = 1^n 𝟙_Y_k ≤ t the empirical distribution function. We are interested in the asymptotic behavior of the centered empirical distribution function in L^p(μ), where μ is a σ-finite measure on ℝ.
We suppose that
∫_ℝ_- F(t)^p dμ(t) + ∫_ℝ_+ (1-F(t))^p dμ(t) < ∞
so that F_n - F is an element of L^p(μ).
In <cit.> the link between the convergence in distribution of √(n) (F_n - F) in 𝕃 ^p(μ) and Donsker classes has been clearly established. More precisely, let us denote
W_1,q(μ) = {f : ℝ→ℝ : f(x) = f(0) + 𝟙_x >0∫_[0,x[g dμ - 𝟙_x≤ 0∫_[x, 0[ g dμ , ‖g‖_q, μ≤ 1 }
,
where q is the conjugate exponent of p and ‖·‖_q,μ is the usual norm on L^q(μ). Then according to [Lemma 1]DM2007 the following convergences are equivalent:
* {√(n) (F_n - F)(t) }_t∈ℝ converges in distribution to { G(t)}_t∈ℝ in L^p(μ)
* {√(n)(1/n∑_i= 1^n f(Y_i) - 𝔼 f(Y_0))}_f∈ W_1,q(μ) converges in distribution to {G_1(f)}_f∈ W_1,q(μ) in ℓ^∞ (W_1,q(μ))
Hence, proving that W_1,q(μ) is a Donsker class for (Y_n)_n is equivalent to proving that √(n) (F_n - F) converges weakly in L^p(μ) to a Gaussian process.
To study the asymptotic behavior of √(n) (F_n - F), we then define the random process:
∀ k ∈ℤ, X_k = {𝟙_Y_k ≤ t - F(t) : t ∈ℝ}
which takes values in L^p(μ). With such a notation, the study of the asymptotic behavior of √(n) (F_n - F)_n is equivalent to the study of the asymptotic behavior of (S_n/√(n))_n in L^p(μ) where S_n = X_1 + ⋯ + X_n. Hence, since L^p(μ) is a 2-smooth and Schauder decomposable Banach space, the centered empirical distribution behavior in L^p(μ) will follow from an application of Theorem <ref> and in particular of Corollary <ref>.
To state the condition in terms of dependence conditions on the sequence (Y_k)_k ∈, we introduce the following weak dependence coefficients (see <cit.>).
Let (Y_i)_i ∈ℤ be a stationary sequence of real random variables adapted to a stationary filtration (ℳ_i)_i ∈ℤ.
For any nonnegative integer k, let
b_k
= sup_t ∈ℝ|𝔼(𝟙_Y_k ≤ t|ℳ_0) - ℙ(Y_k ≤ t)| and β_1,Y(k) = 𝔼(b_k)
.
.
Let us consider the same notations as in <cit.>:
Define the function F_μ by F_μ(x) = μ(]0,x]) if x ≥ 0 and F_μ(x) = -μ([x,0[) if x ≤ 0. Define
also the nonnegative random variable Y_p,μ = |F_μ(Y_0)|^1/p.
As an application of Corollary <ref>, we derive the following result.
Let p ∈ [2, +∞[ and assume that
∑_n ≥ 0∫_0^β_1,Y(n) Q_Y_p, μ^2 (u) du < ∞.
Then
√(n) (F_n - F) converges in distribution to G in L^p(μ),
where G is a Gaussian process whose covariance operator is given, for any x^*, y^* ∈ L^q(μ), by
K_G(x^*, y^*) = ∑_k ∈ℤ cov(x^* (X_0), y^*(X_k)).
In particular,
n^p/2∫|F_n - F|^p dμ converges in distribution to ∫|G|^p dμ
.
When μ is finite, (<ref>) simply reads as ∑_n ≥ 0β_1,Y(n) < ∞.
With the equivalence with Donsker classes in mind, it is interesting to study the envelope function of our class of functions.
Note that x ↦|F_μ(x)|^1/p is the smallest envelope function of W_1,q(μ), which means that
|F_μ(x)|^1/p = sup{|f(x) - f(0)| : f ∈ W_1,q(μ)}
.
Indeed, for any f ∈ W_1,q(μ) and any real x, by Hölder's inequality,
|f(x) - f(0)|≤|F_μ (x)|^1/p‖g‖_q, μ≤|F_μ (x)|^1/p
.
On the other hand, define for any x ∈ℝ:
f_x : y ↦𝟙_y >0∫_[0,y[g_x dμ - 𝟙_y≤ 0∫_[y, 0[ g_x dμ
with
g_x : t ↦{[ 𝟙_[0,x](t) |F_μ(x)|^-1/q if F_μ(x) ≠ 0 and x>0; -𝟙_[x,0](t) |F_μ(x)|^-1/q if F_μ(x) ≠ 0 and x<0; 0 otherwise ]. .
One has that f_x belongs to W_1,q(μ) for any real x and f_x(x) = |F_μ(x)|^1/p. This ends the proof of (<ref>).
Note that when (Y_i)_i ∈ℤ is a sequence of i.i.d. random variables, (<ref>) reads as
∫_0^1 Q_Y_p,μ^2(u) du < ∞, i.e. 𝔼[|F_μ(Y_0)|^2/p] < ∞.
It means that the smallest envelope function of W_1,q(μ) is square integrable.
This condition together with an entropy condition ensure the CLT (see Theorem 2.5.2 in <cit.>).
It is worth noting that, in the particular case where μ is the Lebesgue measure on denoted by λ, W_1,q(λ) can be rewritten
W_1,q(λ) = {f : ℝ→ℝ : f(x) = f(0) + 𝟙_x >0∫_[0,x[ g(t) dt - 𝟙_x≤ 0∫_[x, 0[ g(t) dt , ‖g‖_q≤ 1 }
which is the space of absolutely continuous functions f such that λ(f'^q) ≤ 1. In particular, it contains the unit ball of the Sobolev space of order 1 with respect to L^q(λ).
Moreover, since in this case F_μ is the identity function, condition
(<ref>) can be rewritten
∑_n ≥ 0∫_0^β_1,Y(n) Q_|Y_0|^2/p (u) du < ∞.
Comment on the optimality of condition (<ref>).
In the dependent case, condition (<ref>) of Corollary <ref> implies that
1/√(n)∑_i=1^n (|F_μ(Y_i)|^1/p-𝔼(|F_μ(Y_i)|^1/p))
converges in distribution to a Gaussian r.v.
The proposition below which is essentially due to []DMR1994, shows that condition (<ref>) is essentially optimal for the convergence in distribution of (<ref>) to a Gaussian r.v. to hold.
[]DMR1994
Let a > 1. Suppose that Y_0 is a real-valued r.v. whose distribution function F is continuous and such that ∫_0^1 u^-1/a Q_Y_p,μ^2(u) du = +∞. Then, there exists a stationary Markov chain (Z_i)_i ∈ with marginal distribution function F and such that
* 0 < lim inf_n →∞ n^a β_1(n) ≤lim sup_n →∞ n^a β_1(n) < ∞, here (β_1(n))_n denotes the sequence of strong β_1-mixing coefficients of (Z_i)_i ∈ℤ
* 1/√(n)∑_i=1^n (|F_μ(Z_i)|^1/p-𝔼(|F_μ(Z_i)|^1/p)) does not converge in distribution to a Gaussian law.
Let us see how Proposition <ref> can be deduced from Theorem 5 in <cit.>. Let F̃ = |F_μ∘ F^-1(·)|^1/p. Note that ∫_0^1 u^-1/a Q_Y_p,μ^2(u) du is convergent if and only if ∫_0^1/2 u^-1/a F̃^2(u) du and ∫_0^1/2 u^-1/a F̃^2(1-u) du are both convergent. In other words, in our setting one of the two previous integrals is infinite. Let us assume for instance that ∫_0^1/2 u^-1/a F̃^2(1-u) du = +∞. Theorem 5 in <cit.>, applied to the function f : x ↦ F̃(1-x), asserts that there exists a stationary Markov chain (U_i)_i ∈ℤ with uniform marginal distribution on [0,1] such that (β_1,U(n))_n ∈ℕ satisfies (i) and
1/√(n)∑_i=1^n (|F_μ∘ F^-1(1-U_i)|^1/p-𝔼(|F_μ∘ F^-1(1-U_i)|^1/p))
does not converge in distribution to a Gaussian distribution.
Furthermore, setting Z_i = F^-1(1-U_i), the Markov chain (Z_i)_i ∈ℤ admits F as marginal distribution function and the same mixing coefficients as (U_i)_i ∈ℤ so that (i) is verified.
Furthermore, analogously to the first section, we get sufficient conditions for (<ref>) to be satisfied.
Let (X_k)_k ∈ℤ and (Y_k)_k ∈ℤ be as defined in (<ref>) and (<ref>), and (ℱ _k)_k be a filtration to which (X_k)_k is adapted.
Then under one of these conditions:
* there exists r > 2 such that 𝔼(Y_p,μ^r) < ∞ and ∑_n ≥ 0 (n+1)^2/(r-2)β_1,Y(n) < ∞
* there exist r > 2 and c > 0 such that for any x, ℙ(Y_p,μ > x) ≤ c/x^r and ∑_n ≥ 0β_1,Y (n)^1-2/r < ∞
* there exist a > 0 and τ > 0 such that 𝔼(Y_p,μ^2 (log (1+Y_p,μ))^a) < ∞ and β_1,Y(n) = O(e^-τ n^1/a)
the condition (<ref>) is verified, so Corollary <ref> applies.
Application to the empirical process in L^p for intermittent maps.
For γ∈ ]0,1[, let T_γ: [0,1] → [0,1] be the intermittent map defined by <cit.> as follows:
T_γ(x) = {[ x(1+2^γ x^γ) if x ∈ [0, 1/2[; 2x - 1 if x ∈ [1/2, 1] ]. .
If there is no confusion we write T for the sake of clarity. As shown in <cit.>, for all γ∈ ]0,1[, there exists a unique absolutely continuous T_γ-invariant probability measure ν_γ (or simply ν) on [0,1] whose density h_γ satisfies: there exist two finite constants c_1,c_2 > 0 such that for all x ∈ [0,1], c_1 ≤ x^γ h_γ(x) ≤ c_2. Let us fix γ and consider K the Perron-Frobenius operator of T with respect to ν defined by
ν(f∘ T.g) = ν(f.Kg) , for any f,g ∈𝕃^2(ν).
Then, by considering (X_i)_i ∈ℤ a stationary Markov chain with invariant measure ν and transition kernel K, for any positive integer n, on the probability space ([0,1], ν), (T, T^2, ⋯, T^n) is distributed as (X_n, X_n-1, ⋯, X_1) (see for instance Lemma XI.3 in <cit.>). Consequently, the two following empirical processes have the same distribution
* {G_n(t) = 1/√(n)∑_k=1^n [𝟙_T^k ≤ t - F(t)] ; t ∈ [0,1]}
* {L_n(t) = 1/√(n)∑_k=1^n [𝟙_X_k ≤ t - F(t)] ; t ∈ [0,1]}
where F(t) = ν([0,t]).
Since ν is supported on [0,1], condition (<ref>) reads as ∑_n ≥ 0β_1,X(n) < ∞. Now, from [Proposition 6.2]DDT2015, we have the upper bound
β_1,X(n) ≤C/(n+1)^(1-γ)/γ.
Since ∑_n (n+1)^-(1-γ)/γ < ∞ exactly when (1-γ)/γ > 1, that is when γ < 1/2, applying Corollary <ref> we derive that for any γ∈ ]0,1/2[ and any p ≥ 2
{ G_n(t) : t ∈ [0,1]} converges in distribution to {G(t) : t ∈ [0,1]} in 𝕃^p([0,1], λ)
where G is a Gaussian process.
We could also consider unbounded but monotonic observables φ of the iterates, such as φ(x)=1/x^α or φ(x)=1/(1-x)^α, since for such functions the β_1-coefficients of (φ(X_k))_k ≥ 0 are of the same order as the initial ones. More precisely, taking into account the behavior of the density h_γ of ν, one can prove that condition (<ref>) reads as α < (p/2)(1-2γ) when φ(x)=1/x^α and α < (p/2)(1-2γ)/(1-γ) when φ(x)=1/(1-x)^α.
§ PROOFS
We start this section by a general CLT for random variables taking values in 2-smooth Banach spaces. It will be a building block in the proof of Theorem <ref> and has interest in itself.
§.§ A general result
Let 𝔹 be a 2-smooth Banach space and let (X_i)_i ∈ℤ be a stationary sequence of 𝔹-valued centered random variables, adapted to a non-decreasing and stationary filtration (ℱ_i)_i ∈ℤ, and such that 𝔼(‖X_0‖_𝔹^2) < ∞.
Set S_n = ∑_k = 1^n X_k and assume that
for any x^* ∈𝔹^*, ([x^* (S_n)]^2/n)_n is a uniformly integrable family,
1/n‖𝔼(S_n | ℱ_0)‖_𝔹^2 ⟶ 0 in 𝕃^1 ,
for any x^* ∈𝔹^*, there exists σ^2(x^*) such that 𝔼([x^* (S_n)]^2/n|ℱ_0) - σ^2(x^*) ⟶ 0 in 𝕃^1 ,
and there exists (F_l)_l a sequence of finite dimensional subspaces of 𝔹 such that
lim sup_n → +∞1/n𝔼[q_F_l^2(S_n)] ⟶ 0 as l → +∞,
where q_F (x) = inf{‖x-y‖_𝔹 : y ∈ F} for any x ∈𝔹
.
Then (1/√(n)S_n)_n ≥ 1 converges in distribution to G, where G is a 𝔹-valued Gaussian random variable whose law γ is such that for any x^* ∈𝔹^*, γ̂(x^*) = e^-σ^2(x^*)/2.
Conditions (<ref>) and (<ref>) can be replaced by the two following conditions:
(‖S_n‖_𝔹^2/n)_n is a uniformly integrable family ,
1/√(n)‖𝔼(S_n | ℱ_0)‖_𝔹⟶ 0 in 𝕃^1
.
Proof of Remark <ref>.
Let x^* be an element of 𝔹^*. Then, there exists c(x^*) > 0 such that for any x ∈𝔹, |x^*(x)|≤ c(x^*)‖x‖_𝔹. Hence,
∀ n ∈ℕ^*, [x^* (S_n)]^2/n≤ c(x^*)^2 ‖S_n‖_𝔹^2/n
.
So, condition (<ref>) implies that [x ^* (S_n)]^2/n_n is an uniformly integrable family.
On another hand, for any A>0,
1/n‖𝔼(S_n | ℱ_0)‖_𝔹^2
≤(1/√(n)‖𝔼(S_n | ℱ_0)‖_𝔹𝟙_‖𝔼(S_n/√(n)|ℱ_0)‖_𝔹≤ A)^2 + (1/√(n)‖𝔼(S_n | ℱ_0)‖_𝔹𝟙_‖𝔼(S_n/√(n)|ℱ_0)‖_𝔹 > A)^2
.
Clearly,
(1/√(n)‖𝔼(S_n | ℱ_0)‖_𝔹𝟙_‖𝔼(S_n/√(n)|ℱ_0)‖_𝔹≤ A)^2
≤A/√(n)‖𝔼(S_n | ℱ_0)‖_𝔹
,
which converges to 0 in 𝕃^1 by condition (<ref>).
On another hand, by Lemma 6.3 in <cit.>, we have for any 𝔹-valued r.v. Y, any ε>0 and any σ-algebra ℬ,
𝔼(‖Y‖_𝔹^2 𝟙_‖𝔼(Y|ℬ)‖_𝔹 > 2ε) ≤𝔼(‖Y‖_𝔹^2 𝟙_‖Y‖_𝔹 > ε)
.
Hence, applying Jensen's inequality and taking into account (<ref>), we get
𝔼[(1/√(n)‖𝔼(S_n | ℱ_0)‖_𝔹𝟙_‖𝔼(S_n/√(n)|ℱ_0)‖_𝔹 > A)^2] ≤𝔼[‖S_n‖_𝔹^2/n𝟙_‖𝔼(S_n/√(n)|ℱ_0)‖_𝔹 > A]
≤𝔼[‖S_n‖_𝔹^2/n𝟙_‖S_n‖_𝔹/√(n) > A/2]
,
which converges to zero by (<ref>) by first letting n tend to infinity and then A.
So, overall, (<ref>) together with (<ref>) imply (<ref>).
§.§ Proof of Theorem <ref>
To prove it, we follow the method used in the proof of [Theorem 4.31]MPU2019. This method consists in constructing super big blocks of random variables and then approximating them by a triangular array of martingales to which a CLT is applied. In the Banach space setting, to prove Theorem <ref> we shall rather use [Theorem 5]Ros1982 instead of [Theorem 2.4]MPU2019.
For any fixed positive integer m, set p = ⌊ n/m⌋ and let
X_n,j = ∑_k = p(j-1)+1^pj1/√(n) X_k
and ℱ_n,j = ℱ_jp.
Let us consider the conditionally centered random variables
X̃_n,j = X_n,j - 𝔼(X_n,j|ℱ_n,j-1),
so that (X̃_n,j)_1≤ j ≤ m is a triangular array of martingale differences adapted to the array of filtrations (ℱ_n,j)_1≤ j ≤ m.
As done in <cit.>, let us consider (m_n)_n a sequence growing slowly enough that, as n tends to infinity,
m = m_n ⟶ +∞ ,
m/√(n)⟶ 0
and
√(m)1/√(p)𝔼(S_p|ℱ_0) ⟶ 0 in 𝕃^1(𝔹)
.
We have in particular, from (<ref>), that mp/n ⟶ 1.
Note that
∑_j=1^m X̃_n,j = ∑_j = 1^m X_n,j - ∑_j = 1^m 𝔼(X_n,j|ℱ_n,j-1)
= 1/√(n) S_n + 1/√(n)(S_pm-S_n) - ∑_j = 1^m 𝔼(X_n,j|ℱ_n,j-1).
On the one hand,
𝔼‖1/√(n)(S_pm-S_n)‖_𝔹≤m/√(n)𝔼‖X_0‖_𝔹,
which converges to 0 as n tends to +∞ from (<ref>).
On another hand, bearing in mind stationarity,
𝔼‖∑_j = 1^m 𝔼(X_n,j|ℱ_n,j-1)‖_𝔹≤√(mp/n)𝔼‖√(m)𝔼(S_p/√(p)|ℱ_0)‖_𝔹,
which converges to 0 as n tends to +∞ combining conditions (<ref>) and (<ref>).
So, overall, Theorem <ref> will be proved if one can show that ∑_j = 1^m X̃_n,j converges in distribution to G. To this end, we shall use Theorem 5 in <cit.>. Hence, we have to show that the following conditions are verified:
* there exists ψ : 𝔹^* →ℝ_+ such that for any x^* ∈𝔹^*, σ_n^2(x^*) ⟶ψ(x^*) as n → +∞, where σ_n^2(x^*) = ∑_j =1^m 𝔼([x^*(X̃_n,j)]^2|ℱ_n,j-1)
* for any x^* ∈𝔹^* and any ε>0, ∑_j = 1^m𝔼([x^*(X̃_n,j)]^2 𝟙_|x^*(X̃_n,j)| > ε|ℱ_n, j-1) ⟶ 0 as n → +∞
* there exists a sequence (F_l)_l of finite dimensional subspaces of 𝔹 such that
lim sup_n → +∞∑_j = 1^m𝔼(q_F_l^2(X̃_n,j)|ℱ_n, j-1) ⟶ 0 as l → +∞.
First, since for any integer k, 𝔼(‖X_k‖_𝔹^2) is finite, it appears immediately that for any x^* ∈𝔹^* and any n,j, 𝔼([x^* (X̃_n,j)]^2) < ∞.
Let us begin with the proof of condition <ref>.
Let x^* be a linear form defined on 𝔹.
Using the same notations as in Rosinski's paper, we write
σ_n^2(x^*) = ∑_j = 1^m 𝔼([x^* (X̃_n,j)]^2|ℱ_n,j-1)
= ∑_j = 1^m {𝔼([x^* (X_n,j)]^2|ℱ_n,j-1) - [𝔼(x^* (X_n,j)|ℱ_n,j-1)]^2}
.
Hence, by stationarity and condition (<ref>), we derive
lim sup_n → +∞𝔼|σ_n^2 (x^*) - σ^2(x^*)|≤lim sup_p → + ∞𝔼|𝔼([x^* (S_p)]^2/p|ℱ_0) - σ^2(x^*)| + lim sup_p → +∞𝔼[(𝔼(x^* (S_p)/√(p)|ℱ_0))^2]
.
Then, combining (<ref>), (<ref>) and (<ref>), we get
σ_n^2(x^*) ⟶σ^2 (x^*) in 𝕃^1 .
This ends the proof of <ref>.
Let us now check condition <ref>.
Let ε be a positive real number. For any n ∈ℕ^*, in accordance with <cit.> and by stationarity,
∑_j = 1^m 𝔼([x^*(X̃_n,j)]^2 𝟙_|x^*(X̃_n,j)|>ε|ℱ_n,j-1)
≤ 12 ∑_j = 1^m 𝔼([x^*( X_n,j )]^2 𝟙_|x^*( X_n,j )| >ε/4|ℱ_n,j-1) ,
and, taking the expectation, the right-hand side has mean
12mp/n𝔼([x^*( S_p )]^2/p𝟙_|x^*( S_p )|/√(p)>ε√(n)/4√(p))
.
Then, from (<ref>) and since mp/n→ 1 as n tends to +∞,
∑_j = 1^m 𝔼([x^*(X̃_n,j)]^2 𝟙_|x^*(X̃_n,j)|>ε|ℱ_n,j-1) ⟶ 0 in 𝕃^1
.
It remains to verify condition <ref>.
It is easy to see that for any x,y ∈𝔹, q_F_l(x+y) ≤ q_F_l(x) + ‖y‖_𝔹. Therefore
q_F_l(X̃_n,j) ≤ q_F_l(X_n,j) + ‖𝔼(X_n,j|ℱ_n,j-1)‖_𝔹
,
implying that
q_F_l^2(X̃_n,j) ≤ 2 q_F_l^2(X_n,j) + 2‖𝔼(X_n,j|ℱ_n,j-1)‖_𝔹^2
.
On the one hand, taking into account stationarity, we derive
∑_j = 1^m 𝔼‖𝔼(X_n,j|ℱ_n,j-1)‖_𝔹^2
= mp/n𝔼‖𝔼(S_p/√(p)|ℱ_0)‖_𝔹^2
,
,
which converges to 0 as n tends to infinity by conditions (<ref>) and (<ref>).
On the other hand, by stationarity,
∑_j = 1^m 𝔼[q_F_l^2(X_n,j)] = m 𝔼[q_F_l^2(X_n,1)] = mp/n1/p𝔼[q_F_l^2(S_p)]
.
Therefore, combining (<ref>) and (<ref>), we derive
lim_l → +∞lim sup_n → +∞∑_j = 1^m 𝔼[q_F_l^2(X̃_n,j)] = 0
.
Hence, (X̃_n,j)_n,j satisfies the conditions of [Theorem 5]Ros1982.
This ends the proof of Theorem <ref>.
§.§ Proof of Theorem <ref>
We shall apply Theorem <ref>.
Let us start by showing that (<ref>) implies (<ref>).
Denote for any nonnegative integer k,
Y_k = x^* (X_k)
and
T_k = ∑_i = 1^k Y_i
.
Condition (<ref>) implies that for any x^* ∈𝔹^*, (x^*(X_0)𝔼(x^*(S_n)|ℱ_0))_n converges in 𝕃^1. Hence, applying [Proposition 1(b)]DR2000, it follows that (T_n^2/n)_n is a uniformly integrable family, which proves (<ref>).
We prove now that under (<ref>), (<ref>) is verified.
Let (e_n)_n ∈ℕ be a Schauder basis of 𝔹. For any l, denote by P_l the projection on the subspace F_l generated by the l first vectors of the basis.
For any integers n and l, since P_l(S_n) ∈ F_l,
1/n q_F_l^2 (S_n) ≤1/n‖(id - P_l)S_n‖_𝔹^2
.
From Theorem 2.1 together with Lemma 1.1 and Remark 2.1 in <cit.> with p = 2, we get that there exists c>0 such that for any positive integer n,
𝔼‖(id - P_l)S_n‖_𝔹^2 ≤ c ∑_i = 1^n 𝔼[max_i ≤ j ≤ n‖(id - P_l) X_i‖_𝔹‖∑_k=i^j 𝔼((id - P_l) X_k |ℱ_i)‖_𝔹]
.
Hence, by using stationarity, we derive
𝔼‖(id - P_l)S_n‖_𝔹^2
≤ c ∑_i = 1^n 𝔼[max_i ≤ j ≤ n‖(id - P_l) X_0‖_𝔹‖∑_k=0^j-i𝔼((id - P_l) X_k|ℱ_0)‖_𝔹]
≤
nc 𝔼[max_1 ≤ j ≤ n‖(id - P_l) X_0‖_𝔹‖∑_k=0^j-1𝔼((id - P_l) X_k|ℱ_0)‖_𝔹]
.
Furthermore, (P_l)_l is uniformly bounded for the operator norm, and then so is (id - P_l)_l. Hence there exists C >0 such that for any l and any n > N,
1/n𝔼‖(id - P_l)S_n‖_𝔹^2
≤
C 𝔼[‖(id - P_l) X_0‖_𝔹∑_k=0^N‖𝔼(X_k|ℱ_0)‖_𝔹]
+ C 𝔼[max_N+1 ≤ j ≤ n‖X_0‖_𝔹‖∑_k=N+1^j-1𝔼(X_k|ℱ_0)‖_𝔹]
.
The second term on the right-hand side converges to zero by condition (<ref>), by letting first n tend to +∞ and then N.
Since 𝔼(‖X_0‖_𝔹^2) < ∞ and ‖(id-P_l)X_0‖_𝔹 converges a.s. to zero as l tends to +∞, the first term on the right-hand side of (<ref>) converges to zero by letting l tend to +∞. So, overall,
lim_l → +∞lim sup_n → +∞1/n𝔼(‖(id-P_l)S_n‖_𝔹^2) = 0,
and (<ref>) follows.
We prove now that (<ref>) implies (<ref>).
For any positive integers n and l, we have
1/n‖𝔼(S_n|ℱ_0)‖_𝔹^2 ≤2/n‖P_l 𝔼(S_n|ℱ_0)‖_𝔹^2 + 2/n‖(id-P_l) 𝔼(S_n|ℱ_0)‖_𝔹^2
.
The second term on the right-hand side goes to zero as n tends to infinity by using (<ref>) and Jensen's inequality.
Furthermore, to prove the convergence of 2/n‖P_l 𝔼(S_n|ℱ_0)‖_𝔹^2 to 0 as n tends to +∞, it is sufficient to prove that for any x^* ∈𝔹^* we have 1/n𝔼[(𝔼(x^*(S_n)|ℱ_0))^2] → 0.
Since ([x^* (S_n)]^2/n)_n is uniformly integrable, it is then enough to prove that
1/√(n)𝔼(x^*(S_n)|ℱ_0) ⟶ 0 in 𝕃^1
.
This holds under the condition that (x^*(X_0) 𝔼(x^*(S_n)|ℱ_0))_n converges in 𝕃^1, by the arguments developed in Step 3 of the proof of Theorem 4.18 in <cit.>.
It remains to check condition (<ref>).
Applying x^*, we place ourselves in the well-known context of real random variables. From condition (<ref>) and using the same notations as in (<ref>), (Y_0𝔼(T_n|ℱ_0))_n converges in 𝕃^1, so the series of covariances associated to (Y_k)_k converges. Let us denote its sum by σ^2(x^*).
From [Proof of Theorem 4.18, Step 4]MPU2019, since the sequence (X_n)_n ∈ℤ is ergodic, we get
1/n𝔼(T_n^2|ℱ_0) ⟶σ^2(x^*) in 𝕃^1
.
That is,
𝔼([x^* (S_n)]^2/n|ℱ_0) ⟶σ^2(x^*) = ∑_k ∈ℤCov(x^*(X_0), x^* (X_k))
.
This ends the proof of Theorem <ref>.
§.§ Proof of Corollary <ref>
Note that when μ is finite, the result follows immediately. Assume from now on that μ is not finite.
Let us start by noting that
‖X_0‖_p, μ≤ Y_p, μ + 𝔼(Y_p,μ), so that
Q_‖X_0‖_p, μ≤ Q_Y_p,μ + 𝔼(Y_p, μ) and
G_‖X_0‖_p, μ(·) ≥ G_Y_p,μ(·/2),
where ‖X_0‖_p, μ = (∫|X_0(t)|^p dμ(t))^1/p.
Indeed, by definition of X_0 and as an application of Minkowski's inequality,
‖X_0‖_p,μ =
(∫_-∞^0 |𝟙_Y_0 ≤ t - F(t)|^p dμ(t) + ∫^+∞_0 |-𝟙_Y_0 > t +1 - F(t)|^p dμ(t))^1/p
≤(∫_-∞^0 𝟙_Y_0 ≤ t dμ(t) + ∫^+∞_0 𝟙_Y_0 > t dμ(t))^1/p + (∫_-∞^0 F(t)^p dμ(t) + ∫^+∞_0 (1 - F(t))^p dμ(t))^1/p
≤
Y_p,μ + 𝔼(Y_p,μ)
.
The inequality for the quantile function follows immediately. Let us prove the last inequality.
For any x ∈ [0,1],
∫_0^x Q_‖X_0‖_p,μ(u) du
≤∫_0^x Q_Y_p,μ(u) du + x 𝔼(Y_p,μ) ≤∫_0^x Q_Y_p,μ(u) du + x ∫_0^1 Q_Y_p,μ(u) du
.
Hence, as Q_Y_p,μ is non-increasing,
∫_0^x Q_‖X_0‖_p,μ(u) du ≤ 2∫_0^x/2 Q_Y_p,μ(u) du. Thus, G_‖X_0‖_p, μ(·) ≥ G_Y_p,μ(·/2) and (<ref>) is established.
From (<ref>) and after a change of variables, for any a>0,
∫_0^𝔼‖𝔼(X_n|ℳ_0)‖_p,μ Q_‖X_0‖_p,μ∘ G_‖X_0‖_p,μ (u) du
≤
2∫_0^𝔼‖𝔼(X_n|ℳ_0)‖_p,μ/2 Q_Y_p,μ∘ G_Y_p,μ (u) du + 𝔼‖𝔼(X_n|ℳ_0)‖_p,μ·𝔼(Y_p,μ)
≤
2a∫_0^G_Y_p,μ(𝔼‖𝔼(X_n|ℳ_0)‖_p,μ/a) Q_Y_p,μ^2 (u) du + 𝔼‖𝔼(X_n|ℳ_0)‖_p,μ·𝔼(Y_p,μ)
.
Let U and V be real r.v.'s such that U is distributed as ‖𝔼(X_n|ℳ_0)‖_p,μ, V is distributed as Y_p,μ, and U and V are independent.
It follows that
𝔼‖𝔼(X_n|ℳ_0)‖_p,μ·𝔼(Y_p,μ) = 𝔼(UV).
Following the proof of Proposition 1, (4.1) in <cit.> and after taking into account (<ref>), we derive
𝔼(UV)
≤∫_0^𝔼‖𝔼(X_n|ℳ_0)‖_p,μ Q_Y_p,μ∘ G_‖X_0‖_p,μ (u) du
≤ 2a∫_0^G_Y_p,μ(𝔼‖𝔼(X_n|ℳ_0)‖_p,μ/a) Q_Y_p,μ^2 (u) du
.
Finally, (<ref>) together with (<ref>) imply that
∫_0^𝔼‖𝔼(X_n|ℳ_0)‖_p,μ Q_‖X_0‖_p,μ∘ G_‖X_0‖_p,μ (u) du
≤ 4a∫_0^G_Y_p,μ(𝔼‖𝔼(X_n|ℳ_0)‖_p,μ/a) Q_Y_p,μ^2 (u) du .
In the following, we will assume without loss of generality that ℙ(F_μ(Y_0)> 0) > 0 and ℙ(F_μ(Y_0)< 0) > 0, as otherwise some terms in the following calculations disappear, making them simpler.
Let x > 0 and y < 0; we can write
‖𝔼(X_n|ℳ_0)‖_p,μ =
(∫_-∞^y |𝔼(𝟙_Y_n ≤ t|ℳ_0)-F(t)|^p dμ(t) + ∫_y^x |𝔼(𝟙_Y_n ≤ t|ℳ_0)-F(t)|^p dμ (t)
+ ∫_x^+∞|𝔼(𝟙_Y_n ≤ t|ℳ_0)-1+1-F(t)|^p dμ (t))^1/p
≤(∫_-∞^y 𝔼(𝟙_Y_n ≤ t|ℳ_0)^p dμ(t))^1/p + (∫_-∞^y (𝔼𝟙_Y_0 ≤ t)^p dμ(t))^1/p
+ F_μ(x)^1/p b_n
+ (-F_μ (y))^1/p b_n
+ (∫^+∞_x 𝔼(𝟙_Y_n > t|ℳ_0)^p dμ(t))^1/p + (∫^+∞_x (𝔼𝟙_Y_0 > t)^p dμ(t))^1/p
,
where b_n is defined in Definition <ref>. Considering, for any f, ‖f‖_p, I,μ = (∫_I |f(t)|^p dμ(t))^1/p and using Jensen's inequality for ‖·‖_p, [x, +∞[,μ, we get
(∫^+∞_x 𝔼(𝟙_Y_n > t|ℳ_0)^p dμ(t))^1/p
= ‖𝔼(𝟙_Y_n > ·|ℳ_0)‖_p, [x, +∞[,μ
≤𝔼(‖𝟙_Y_n > ·‖_p, [x, +∞[,μ|ℳ_0)
= 𝔼((∫^+∞_x 𝟙_Y_n > t dμ(t))^1/p|ℳ_0)
and
(∫^+∞_x (𝔼𝟙_Y_0 > t)^p dμ(t))^1/p
= ‖𝔼(𝟙_Y_0 > ·)‖_p, [x, +∞[,μ
≤𝔼(‖𝟙_Y_0 > ·‖_p, [x, +∞[,μ) = 𝔼((∫^+∞_x 𝟙_Y_0 > t dμ(t))^1/p).
We proceed in a similar way with ‖·‖_p, ]-∞, y],μ, so that, taking the expectation in (<ref>), we finally get
𝔼‖𝔼(X_n|ℳ_0)‖_p,μ
≤ 2𝔼(∫^+∞_x 𝟙_Y_0 > t dμ(t))^1/p
+ β_1,Y(n) [F_μ(x)^1/p + (-F_μ(y))^1/p]
+ 2 𝔼(∫^y_-∞𝟙_Y_0 ≤ t dμ(t))^1/p
.
Note now that
𝔼(∫_x^+∞𝟙_Y_0 > t dμ(t))^1/p
=
𝔼(((F_μ(Y_0))_+ - F_μ(x))_+^1/p)
≤𝔼((F_μ(Y_0))_+^1/p𝟙_(F_μ(Y_0))_+^1/p > F_μ(x)^1/p) ≤∫_0^Q_(F_μ(Y_0))_+^1/p^-1(F_μ (x)^1/p) Q_(F_μ(Y_0))_+^1/p (u) du
,
and similarly,
𝔼(∫^y_-∞𝟙_Y_0 ≤ t dμ(t))^1/p≤∫_0^Q_(-F_μ(Y_0))_+^1/p^-1((-F_μ (y))^1/p) Q_(-F_μ(Y_0))_+^1/p (u) du.
Furthermore, we can select x>0 such that F_μ(x)^1/p = Q_(F_μ(Y_0))_+^1/p(β_1,Y(n)). Indeed, assume that such an x doesn't exist. Then, there exists x_1>0 such that F_μ(x_1^-) ≠ F_μ(x_1) and Q_(F_μ(Y_0))_+^1/p(β_1,Y(n)) belongs to ] F_μ(x_1^-)^1/p , F_μ(x_1)^1/p [. Note that (F_μ(Y_0))_+^1/p doesn't take values in ] F_μ(x_1^-)^1/p , F_μ(x_1)^1/p [. Since (F_μ(Y_0))_+^1/p and Q_(F_μ(Y_0))_+^1/p(U) have the same distribution, it implies that Q_(F_μ(Y_0))_+^1/p doesn't take values in ] F_μ(x_1^-)^1/p , F_μ(x_1)^1/p [. There is a contradiction.
In the same manner, reasoning on (−F_μ(Y_0))_+^{1/p}, we can select y < 0 such that (−F_μ(y))^{1/p} = Q_{(−F_μ(Y_0))_+^{1/p}}(β_{1,Y}(n)).
Hence, with these choices of x > 0 and y < 0, combining (<ref>) and (<ref>) we get
𝔼((∫_x^{+∞} 𝟙_{Y_0 > t} dμ(t))^{1/p}) ≤ ∫_0^{β_{1,Y}(n)} Q_{(F_μ(Y_0))_+^{1/p}}(u) du ≤ ∫_0^{β_{1,Y}(n)} Q_{‖Y‖_{p,μ}}(u) du
and
𝔼((∫_{-∞}^y 𝟙_{Y_0 ≤ t} dμ(t))^{1/p}) ≤ ∫_0^{β_{1,Y}(n)} Q_{(−F_μ(Y_0))_+^{1/p}}(u) du ≤ ∫_0^{β_{1,Y}(n)} Q_{‖Y‖_{p,μ}}(u) du.
On another hand, as Q_{‖Y‖_{p,μ}} is a non-increasing function, we have
β_{1,Y}(n) Q_{‖Y‖_{p,μ}}(β_{1,Y}(n)) ≤ ∫_0^{β_{1,Y}(n)} Q_{‖Y‖_{p,μ}}(u) du.
Starting from (<ref>) and taking into account (<ref>) and (<ref>), we get
𝔼‖X_n^0‖_{p,μ} ≤ 6∫_0^{β_{1,Y}(n)} Q_{‖Y‖_{p,μ}}(u) du.
Using the upper bound of (<ref>) in (<ref>) with a = 6, we derive
∫_0^{𝔼‖X_n^0‖_{p,μ}} Q_{‖X_0‖_{p,μ}} ∘ G_{‖X_0‖_{p,μ}}(u) du ≤ 24∫_0^{β_{1,Y}(n)} Q²_{‖Y‖_{p,μ}}(u) du.
Thus, as soon as (<ref>) is verified, the condition <ref> in Corollary <ref> holds and Theorem <ref> applies.
Acknowledgements. I would like to thank my two advisors J. Dedecker and F. Merlevède for helpful discussions.
|
http://arxiv.org/abs/2307.00890v3
|
20230703093948
|
Exact results of one-dimensional repulsive Hubbard model
|
[
"Jia-Jia Luo",
"Han Pu",
"Xi-Wen Guan"
] |
cond-mat.str-el
|
[
"cond-mat.str-el"
] |
Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China
University of Chinese Academy of Sciences, Beijing 100049, China
Department of Physics and Astronomy, and Rice Center for Quantum Materials, Rice University, Houston, Texas 77251-1892, USA
[e-mail:]xwe105@wipm.ac.cn; xiwen.guan@anu.edu.au
Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430071, China
Hefei National Laboratory, University of Science and Technology of China, Hefei 230088, China
Peng Huanwu Center for Fundamental Theory, Xi'an 710069, China
Department of Fundamental and Theoretical Physics, Research School of Physics, Australian National University, Canberra ACT 0200, Australia
We present analytical results for fundamental properties of the one-dimensional (1D) Hubbard model with a repulsive interaction, ranging from fractional excitations to universal thermodynamics, interaction-driven criticality, correlation functions, Contact susceptibilities and quantum cooling.
Using the exact solutions of the Bethe Ansatz equations of the Hubbard model, we first rigorously calculate the gapless spin and charge excitations, exhibiting exotic features of fractionalized spinons and holons.
We further investigate the gapped excitations in terms of the spin-string and k-Λ-string bound states at arbitrary driving fields, showing subtle differences between spin magnons and charge η-pair excitations.
Based on the analysis of the fractional charge and spin excitations, the spin-incoherent Luttinger liquid with only the charge propagation mode is elucidated through the asymptotics of the two-point correlation functions with the help of conformal field theory.
For a high-density and high-spin-magnetization region, i.e., near the quadruple critical point, we then analytically obtain the thermodynamic properties, dimensionless ratios and scaling functions near quantum phase transitions.
These results of thermodynamics provide rich universal low energy physics and critical scaling laws for the phase transitions near the quadruple point.
In particular, we determine additivity rules of spin and charge susceptibilities, and thermodynamics of spin-incoherent Luttinger liquid.
Finally, in order to gain deeper insight into the Mott insulator and interaction-driven criticality, we further study the double occupancy and its associated Contact and Contact susceptibilities, through which an adiabatic cooling scheme exploiting the quantum criticality is proposed.
Our methods provide rich perspectives on quantum integrability in 1D strongly correlated systems and offer promising guidance to future experiments with interacting electrons and ultracold atoms, with and without a lattice.
Keywords: repulsive Hubbard model, fractional excitations, thermodynamics, spin-incoherent Luttinger liquid, quantum cooling, interaction driven criticality
Exact results of one-dimensional repulsive Hubbard model
August 1, 2023
=========================================================
§ I. INTRODUCTION
Strongly correlated electronic systems provide us with a powerful platform to investigate novel and intriguing many-body phenomena, such as high-T_c superconductivity <cit.>, the Mott metal-insulator transition <cit.>, colossal magnetoresistance <cit.>, antiferromagnetic correlations <cit.>, etc., which cannot be explained merely by compounding single-particle motions.
Among the continuous and lattice models of interacting electrons and ultracold atoms, one of the prototypical integrable models is the one-band Hubbard model <cit.>.
Recently, particular attention has been paid to this model because of its operability and accessibility in trapping ultracold atoms on lattices <cit.>.
There have been numerous publications focusing on the 1D Hubbard model with on-site interaction as well as its variants in one or higher dimensions <cit.>, involving the studies of spectral weights and critical exponents for correlation functions<cit.>, dynamical transport<cit.>, interaction quench<cit.> etc.
In this scenario, the well-celebrated Landau theory of Fermi liquid <cit.> remarkably describes universal low energy physics of interacting fermions and leads to a wide range of applications in higher dimensional quantum many-body systems.
On the other hand, the low energy excitations of 1D many-body systems usually lack individual motions of quasiparticles. Instead, they form collective motions of bosons, i.e., Tomonaga-Luttinger liquid (TLL) <cit.>.
A hallmark of interacting electrons in 1D is the splitting of low-lying spin and charge excitations into two separate collective motions <cit.>, i.e., one solely carrying spin and the other carrying charge.
This is one of the most significant features predicted by both the integrable theory and the TLL theory, namely, the propagators generated by excitations separate into two massless wave packets with distinctive velocities v_c, v_s in charge and spin degrees of freedom, respectively <cit.>.
The spin-charge separation phenomenon was experimentally evidenced in literature <cit.>.
This phenomenon was further evidenced by detecting spin and charge propagating velocities <cit.> from quenching an initial Mott state into few hole doped states in the 1D Hubbard model.
Meanwhile, some high-energy or gapped excitations may contribute to the optical conductivity <cit.>.
The spin-charge separation has been recently verified and confirmed by the virtue of spin and charge dynamical structure factors in 1D Fermi gas of ultracold atoms <cit.>.
In this research, the team used Bragg beams to excite spin and charge density waves separately and measured the corresponding spin and charge Bragg spectra at different interacting strengths.
By taking into account the curvature correction to the linear charge excitation spectrum and the nonlinear effect of the spin backward scattering in the spin sector, theoretical and experimental results are in good agreement.
Nevertheless, when the temperature T exceeds the characteristic spin energy scale but is still less than the typical charge (Fermi) energy, i.e. E_s≪ T≪ E_c, the spin degree of freedom is fully disordered, while the charge degree of freedom still remains in a propagating mode.
Such a state is referred to as spin incoherent Luttinger liquid (SILL)<cit.>.
The correlation function of the SILL shows an exponential decay of distance in the spin sector, while it remains a power-law decay in the charge sector, giving a characteristic of the emergent SILL <cit.>,
in contrast to the case of the usual TLL that correlation functions in both spin and charge sectors exhibit power-law decaying in distance <cit.>.
Evidence for the spin-incoherent Luttinger liquid was recently reported by R. Hulet's group at Rice University <cit.>.
Although there have been extensive studies of this model, a comprehensive study of the spin-coherent and incoherent Luttinger liquids, universal thermodynamics, fractional quasiparticles and the Mott phase in the 1D Hubbard model still remains elusive.
Moreover, comparing with the continuum gas systems, one has a consensus that the Mott insulator phase dominated by antiferromagnetic ordering is of great importance on account of its uniqueness stemming from the umklapp scattering in terms of bosonization <cit.>.
There have long been ample thoughts and debates on the mechanism for the onset of the Mott insulator <cit.>.
But it is still a big theoretical challenge to capture the essential nature of the insulating phase.
This massive charge sector with massless spin degree belongs to the category of Luther-Emery model <cit.> whose dynamical spectral function for Mott phase displays one or two singularities without anomalous dimension <cit.>.
A quantity which is experimentally accessible is the double occupancy, embodying charge fluctuations.
It gives insights into the emergence of the Mott phase <cit.> and
conveys plentiful information beyond the Mott phenomenon, for instance, the Pomeranchuk effect <cit.>, antiferromagnetic correlations <cit.> and adiabatic cooling <cit.>, over wide energy scales and temperature regimes.
One of the key contents in this paper is to define the Contact and Contact susceptibilities through which we further study adiabatic cooling scheme at quantum criticality.
The Bethe Ansatz provides us with a feasible way to solve the 1D Hubbard Hamiltonian <cit.>.
The framework of building up the finite-temperature thermodynamics of 1D integrable systems was attributed to C N Yang and C P Yang's seminal paper <cit.>.
Later M Takahashi made an important step forward by treating the thermodynamics of some more complicated integrable systems <cit.>.
In this scenario, quantum transfer matrix was successfully developed to the study of the thermodynamics of integrable models in terms of path integral <cit.>.
These two approaches endow us with the possibility to acquire the excitation spectra <cit.>, universal free Luttinger liquid information <cit.> and correlation functions <cit.> for the repulsive Hubbard model and the attractive Hubbard model <cit.>, despite big challenges in actual calculations of physical properties due to the complexity of the TBA equations at arbitrary temperature.
For higher dimensionality and various lattice symmetries <cit.>, there do not seem to be completely applicable theoretical schemes, and one commonly exploits numerical simulation methods including quantum Monte Carlo <cit.>, matrix-decomposition algorithms <cit.>, dynamical mean-field theory <cit.>, exact diagonalization <cit.>, perturbation theory <cit.>, density-matrix renormalization group <cit.> and other generalizations <cit.>.
The elementary excitations in spin and charge degree and universal thermodynamics of the 1D repulsive Hubbard model are briefly discussed in <cit.>.
In this paper, we develop analytical methods to obtain fundamental properties of the 1D Hubbard model with repulsive interaction and external fields.
In particular, using the exact solutions of the Bethe Ansatz equations of the Hubbard model, we rigorously calculate fractional excitation spectra, universal thermodynamics, scaling functions, incoherent correlation functions, the Contact, Contact susceptibilities, quantum cooling and interaction-driven quantum phase transitions in a pedagogical way.
Exact results of various excitation spectra reveal exotic features of fractionalized spinons and holons and show subtle differences of spin magnons and charge η-pair excitations.
Universal thermodynamical properties of spin and charge Luttigner liquids, dimensionless ratios and scaling functions near quantum phase transitions in terms of chemical potential, magnetic field and interaction are obtained explicitly near quadruple critical point.
The asymptotic two-point correlation functions for the spin-incoherent Luttinger liquid are derived with the help of the conformal field theory.
Finally, we study the double occupancy, the Contact, Contact susceptibilities, interaction-driven adiabatic cooling scheme and criticality and Mott insulator behaviour of the model as well.
The outline of this paper is the following. In section II, we introduce the basic knowledge of exact Bethe ansatz solution of the repulsive Hubbard model and calculate the excitation spectra of fractional holons and spinons as well as the gapped excitations above the ground state.
In section III, we initiate our study on the thermal and magnetic properties of TLL, quantum criticality, non-Fermi liquid behaviour and the universal laws in the 1D Hubbard model.
In section IV, we study the SILL from the Bethe ansatz perspective. The asymptotic single particle Green's function and pairing correlation function are presented.
A depiction of double occupancy and analytical calculation of Contact and its susceptibilities with respect to the chemical potential, magnetic field and interaction near Mott phase are given in section V.
In this section, we also carry out a brief analysis of quantum cooling and interaction-driven phase transitions. The last section VI is reserved for a summary and outlook.
§ II. FRACTIONAL HOLONS AND SPINONS
Fundamental low energy physics is intimately related to the elementary excitations in 1D many-body systems.
In particular, the low-lying excited states involve many-body correlations and determine the unique dynamics of the 1D Hubbard model, showing novel features of fractional holons and spinons.
In the Mott phase, the excitations show a charge gapped phase with antiferromagnetic correlations in spin sector.
Spin and charge particle-hole excitations reveal the origins of the spin-coherent TLL and spin-incoherent liquid in a crossover temperature regime.
This section aims to calculate such typical excitation spectra of the repulsive Hubbard model, complementing the studies in the literature; see the review <cit.>.
§.§ II.1 Bethe ansatz equations and root patterns
In the presence of external potentials, the 1D Hamiltonian of Hubbard model is given by
H=-∑_j=1^L∑_a=↑↓(c_j, a^+ c_j+1, a+c_j+1, a^+ c_j, a)+u ∑_j=1^L (1-2n_j ↑) (1-2n_j ↓)-μN̂-2BŜ_z,
where c_j,a^† and c_j,a are the creation and annihilation operators of fermions with spin a (a=↑ or a=↓) at site j in a periodic lattice of length L,
n_j σ is particle density operator of spin σ at site j, N̂=∑_j=1^L(n_j ↑+n_j ↓) is the total number particle operator, and Ŝ_z=1/2∑_j=1^L(n_j ↑-n_j ↓) is the magnetization operator.
The hopping amplitude is set to unity, and u represents an on-site interaction between particles.
The chemical potential μ and magnetic field B are rescaled with respect to the hopping term.
Here u>0 corresponds to repulsion and u<0 to attraction; the two cases can be converted into each other by the Shiba transformation.
We choose units such that k_B = ħ = 1 throughout the paper.
In the absence of external potentials (μ=0, B=0) and for an even number of lattice sites, the Hamiltonian <ref> possesses the full SO(4)≅SU(2)× SU(2)/ℤ_2 symmetry composed of spin rotational and η-pairing invariance <cit.>:
[H,S^α]=0,
[H,η^α]=0,
where the spin and η-pair operators are given by
S^α= 1/2∑_j=1^L∑_a,b=↑,↓c_j, a^+(σ^α)_b^ac_j,b,
η^x= -1/2∑_j=1^L(-1)^j(c_j, ↑^+c_j, ↓^++c_j, ↑c_j, ↓),
η^y= i/2∑_j=1^L(-1)^j(c_j, ↑^+c_j, ↓^+-c_j, ↑c_j, ↓),
η^z= 1/2(N-L),
which are useful to analyze elementary excitations at zero magnetic field and half-filled band <cit.>.
In the above equation, σ^α with α=x,y,z denote the Pauli matrices.
In this paper, we will consider the case where the number of particles n_c is less than one with the magnetization less than n_c/2 for repulsive interaction, i.e., it corresponds to a negative chemical potential and positive magnetic field.
The other parameter regimes can be accessed in terms of discrete symmetries <cit.>.
The eigenvalues of {k}, {Λ}, respectively the solutions of charge quasimomentum and spin rapidity of this model, can be obtained by solving the Lieb-Wu equations <cit.>.
Takahashi <cit.> assumed that the quasimomenta {k}, {Λ} arrange themselves into distinct patterns, which determine the full set of states of the model.
The root patterns of repulsive Hubbard model can be categorized into three types of strings: single k, length-n Λ strings composed of n spin-down electrons, length-m k-Λ string containing m spin-down and m spin-up particles.
Let M_e, M_n, M^'_n denote the numbers of single k's, of Λ strings of length n, and of k-Λ strings of length n, respectively.
Therefore, the total particle number N and spin down electron number M read:
M=∑_n=1^∞ n(M_n+M_n^'),
N=M_e+∑_n=1^∞ 2 n M_n^'.
In order to carry out our study conveniently, we present the following Bethe ansatz equations and TBA equations of the 1D Hubbard model with a repulsive interaction. Based on these, we will analytically derive physical properties of the model.
In terms of the string solutions, the real centers of these roots satisfy the so-called Takahashi's forms of Bethe ansatz equations <cit.>:
k_j L=2 π I_j-∑_n=1^∞∑_α=1^M_nθ(sin k_j-Λ_α^n/n u)-∑_n=1^∞∑_α=1^M_n^'θ(sin k_j-Λ_α^' n/n u),
∑_j=1^N-2 M^'θ(Λ_α^n-sin k_j/n u)=2 π J_α^n+∑_m=1^∞∑_β=1^M_mΘ_n m(Λ_α^n-Λ_β^m/u),
2 L Re[arcsin(Λ_α^' n+n i u)]=2 π J_α^' n+∑_j=1^N-2 M^'θ(Λ_α^' n-sin k_j/n u)+∑_m=1^∞∑_β=1^M_m^'Θ_n m(Λ_α^n-Λ_β^' m/u),
where θ(x)=2arctan(x) and Θ_n m is defined as
Θ_nm(x)={[ θ(x/|n-m|)+2 θ(x/|n-m|+2)+⋯+2 θ(x/n+m-2)+θ(x/n+m), if n ≠ m; 2 θ(x/2)+2 θ(x/4)+⋯+2 θ(x/2n-2)+θ(x/2n), if n=m ]..
The counting numbers I_j, J_α^n, J_α^' n are integers or half-odd integers, depending on the parities of the string numbers:
I_j is an integer if ∑_m(M_m+M_m^') is even, and a half-odd integer if ∑_m(M_m+M_m^') is odd;
J_α^n is an integer if N-M_n is odd, and a half-odd integer if N-M_n is even;
J_α^' n is an integer if L-N+M_n^' is odd, and a half-odd integer if L-N+M_n^' is even.
The classification of these quantum numbers will be needed for characterizing the excitations which will be presented later.
Every selected set of quantum numbers I_j,J_α^n,J_α^' n uniquely determine the value of quasimomentum k_j, Λ_α^n, Λ_α^' n, and thus give a specific state, either the ground state or an excited state of various filling of the model.
These numbers are in the ranges of
-L/2<I_j≤L/2,
|J_α^n| ≤1/2(N-2 M^'-∑_m=1^∞ t_n m M_m-1),
|J_α^' n| ≤1/2(L-N+2 M^'-∑_m=1^∞ t_n m M_m^'-1),
where M^'=∑_n=1^∞ n M_n^' is the total number of spin-down particles contained in k-Λ strings, and t_m n=2 min (m, n)-δ_m n.
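As a small illustration, the parity rules above can be encoded directly; the helper below is our own sketch (the function name and the example configuration are hypothetical, not from the paper).

def counting_number_types(L, N, M, Mp):
    # L: lattice length, N: particle number, M[n]: number of length-n Lambda
    # strings, Mp[n]: number of length-n k-Lambda strings (dicts keyed by n).
    total_strings = sum(M.values()) + sum(Mp.values())
    I_type = "integer" if total_strings % 2 == 0 else "half-odd integer"
    J_type = {n: "integer" if (N - M.get(n, 0)) % 2 == 1 else "half-odd integer"
              for n in M}
    Jp_type = {n: "integer" if (L - N + Mp.get(n, 0)) % 2 == 1 else "half-odd integer"
               for n in Mp}
    return I_type, J_type, Jp_type

# Ground-state-like configuration: only length-1 Lambda strings, M_1 odd.
print(counting_number_types(L=32, N=14, M={1: 7}, Mp={}))
# -> ('half-odd integer', {1: 'integer'}, {}), as used for the ground state below.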
Denote ρ^p,σ_n^p,σ_n^' p (ρ^h,σ_n^h,σ_n^' h) as the densities of real quasimomenta k, the real part of the length-n spin strings and the real part of length-n k-Λ strings for particles (holes), the root distributions of these types of root patterns are given by <cit.>
ρ^p(k)+ρ^h(k) = 1/(2π) + cos k ∑_{n=1}^∞ ∫_{-∞}^∞ dΛ a_n(Λ-sin k)(σ_n^{'p}(Λ)+σ_n^p(Λ)),
σ_n^p(Λ)+σ_n^h(Λ) = -∑_{m=1}^∞ A_{nm} * σ_m^p(Λ) + ∫_{-π}^π dk a_n(sin k-Λ) ρ^p(k),
σ_n^{'p}(Λ)+σ_n^{'h}(Λ) = (1/π)Re[1/√(1-(Λ-inu)^2)] - ∑_{m=1}^∞ A_{nm} * σ_m^{'p}(Λ) - ∫_{-π}^π dk a_n(sin k-Λ) ρ^p(k),
respectively. In the above equations a_n(x) = (1/2π) · 2nu/((nu)^2+x^2), and * denotes the convolution
A_{nm}*f(x) = ∫_{-∞}^∞ (dy/2π) (d/dx)Θ_{nm}((x-y)/u) f(y).
The function A_{nm} is the derivative of the function Θ_{nm}, namely
A_{nm}((x-y)/u) = (1/2π)(d/dx)Θ_{nm}((x-y)/u) =
{[ a_|n-m|(x-y)+2 a_|n-m|+2(x-y)+⋯+2 a_n+m-2(x-y)+a_n+m(x-y), if n ≠ m; 2 a_2(x-y)+2 a_4(x-y)+⋯+2 a_2n-2(x-y)+a_2n(x-y), if n=m ]..
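For later numerical use, the kernels a_n and A_nm can be transcribed directly; the following Python sketch is our own illustration of the definitions above.

import numpy as np

def a_n(n, x, u):
    # a_n(x) = (1/2pi) * 2nu / ((nu)^2 + x^2)
    return (n * u / np.pi) / ((n * u) ** 2 + x ** 2)

def A_nm(n, m, x, u):
    # Sum of a_k kernels implementing the derivative of Theta_nm given above.
    if n == m:
        inner = sum(2.0 * a_n(2 * j, x, u) for j in range(1, n))  # 2a_2,...,2a_{2n-2}
        return inner + a_n(2 * n, x, u)
    lo, hi = abs(n - m), n + m
    inner = sum(2.0 * a_n(k, x, u) for k in range(lo + 2, hi, 2))  # weight-2 terms
    return a_n(lo, x, u) + inner + a_n(hi, x, u)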
Following the Yang-Yang method <cit.>, the true physical equilibrium states at finite temperature can be determined by the minimization of the free energy with respect to the densities.
The TBA equations for dressed energies κ(k),ε_n(Λ),ε_n^'(Λ), associated with the product of the logarithm of the ratio of hole density to particle density and temperature κ=Tln(ρ^h/ρ^p),ε_n=Tln(σ_n^h/σ_n^p),ε_n^'=Tln(σ_n^' h/σ_n^' p), are given by the following form <cit.>
κ(k) = -2cos k - μ - 2u - B + ∑_{n=1}^∞ ∫_{-∞}^∞ dΛ a_n(sin k - Λ) T ln(1+e^{-ε_n^'(Λ)/T})
- ∑_{n=1}^∞ ∫_{-∞}^∞ dΛ a_n(sin k - Λ) T ln(1+e^{-ε_n(Λ)/T}),
ε_n(Λ) = 2nB - ∫_{-π}^π dk cos k a_n(sin k - Λ) T ln(1+e^{-κ(k)/T}) + ∑_{m=1}^∞ A_{nm} * T ln(1+e^{-ε_m(Λ)/T}),
ε_n^'(Λ) = 4Re√(1-(Λ-inu)^2) - 2nμ - 4nu - ∫_{-π}^π dk cos k a_n(sin k - Λ) T ln(1+e^{-κ(k)/T})
+ ∑_{m=1}^∞ A_{nm} * T ln(1+e^{-ε_m^'(Λ)/T}).
It is observed that the above dressed energies are even functions with respect to their variables and admit monotonically increasing behavior in the positive argument space. The Gibbs free energy per site is given by
f = -T ∫_{-π}^π (dk/2π) ln(1+e^{-κ(k)/T}) + u
- T ∑_{n=1}^∞ ∫_{-∞}^∞ (dΛ/π) Re[1/√(1-(Λ-inu)^2)] ln(1+e^{-ε_n^'(Λ)/T}).
(<ref>) serves as the equation of state from which one can directly obtain the profiles of ground state and thermodynamic properties of the Hubbard model at zero and finite temperatures by solving (<ref>), (<ref>) and (<ref>).
However, finding the solution to the TBA equations (<ref>), (<ref>) and (<ref>) imposes a big theoretical challenge.
Analytical results for the universal thermodynamics, Luttinger liquid properties, magnetic properties, scaling functions in the vicinities of the critical points, as well as the behaviour of the Mott phase and quantum cooling, still lack a comprehensive study in the literature.
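To indicate how a numerical solution can nonetheless be organized, the sketch below iterates a severely truncated version of the TBA equations, keeping only the real k band and the length-1 Λ and k-Λ strings (with A_11 = a_2). The grids, cutoffs, mixing and truncation level are our own illustrative choices and are far from a production-quality solver.

import numpy as np

u, mu, B, T = 1.0, -0.5, 0.4, 0.2
k = np.linspace(-np.pi, np.pi, 201); dk = k[1] - k[0]
lam = np.linspace(-12.0, 12.0, 201); dl = lam[1] - lam[0]

def a(n, x):
    return (n * u / np.pi) / ((n * u) ** 2 + x ** 2)

K1 = a(1, np.sin(k)[:, None] - lam[None, :])   # a_1(sin k - Lambda), (k, Lambda)
A11 = a(2, lam[:, None] - lam[None, :])        # A_11 = a_2 on the Lambda grid
drive_k = -2 * np.cos(k) - mu - 2 * u - B
drive_1 = 2 * B * np.ones_like(lam)
drive_1p = 4 * np.real(np.sqrt(1 - (lam - 1j * u) ** 2)) - 2 * mu - 4 * u

kap, e1, e1p = drive_k.copy(), drive_1.copy(), drive_1p.copy()
for _ in range(400):                            # damped fixed-point iteration
    Gk = T * np.log1p(np.exp(-kap / T))
    G1 = T * np.log1p(np.exp(-e1 / T))
    G1p = T * np.log1p(np.exp(-e1p / T))
    kap = 0.5 * kap + 0.5 * (drive_k + K1 @ (G1p - G1) * dl)
    e1 = 0.5 * e1 + 0.5 * (drive_1 - (np.cos(k) * Gk) @ K1 * dk + A11 @ G1 * dl)
    e1p = 0.5 * e1p + 0.5 * (drive_1p - (np.cos(k) * Gk) @ K1 * dk + A11 @ G1p * dl)

f = (-T * np.sum(np.log1p(np.exp(-kap / T))) * dk / (2 * np.pi) + u
     - T * np.sum((np.real(1 / np.sqrt(1 - (lam - 1j * u) ** 2)) / np.pi)
                  * np.log1p(np.exp(-e1p / T))) * dl)
print("truncated free energy per site:", f)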
§.§ II.2 Ground state
In the limit of zero temperature, one can see that the dressed energies of the length-n Λ strings with n>1 and of all k-Λ strings are always positive in the parameter space.
Consequently, they do not contribute to zero temperature thermal and magnetic properties.
The real k and length-1 Λ string should be cut off at the points where dressed energies switch sign. Thereby, in the ground state, the coupled equations for dressed energies and densities and hole distribution functions <cit.> are greatly simplified
κ(k) = -2cos k - μ - 2u - B + ∫_{-A}^A dΛ a_1(sin k - Λ) ε_1(Λ),
ε_1(Λ) = 2B + ∫_{-Q}^Q dk cos k a_1(sin k - Λ) κ(k)
- ∫_{-A}^A dΛ' a_2(Λ-Λ') ε_1(Λ'),
ρ^p(k) = θ_H(Q-|k|)[1/(2π) + cos k ∫_{-A}^A dΛ a_1(sin k - Λ) σ_1^p(Λ)],
σ_1^p(Λ) = θ_H(A-|Λ|)[-∫_{-A}^A dΛ' a_2(Λ-Λ') σ_1^p(Λ') + ∫_{-Q}^Q dk a_1(sin k - Λ) ρ^p(k)],
ρ^h(k) = θ_H(|k|-Q)[1/(2π) + cos k ∫_{-A}^A dΛ a_1(sin k - Λ) σ_1^p(Λ)],
σ_1^h(Λ) = θ_H(|Λ|-A)[-∫_{-A}^A dΛ' a_2(Λ-Λ') σ_1^p(Λ') + ∫_{-Q}^Q dk a_1(sin k - Λ) ρ^p(k)],
σ_n^h(Λ) = -A_{n1} * σ_1^p(Λ) + ∫_{-Q}^Q dk a_n(sin k - Λ) ρ^p(k), for n ≥ 2,
σ_n^{'h}(Λ) = (1/π)Re[1/√(1-(Λ-inu)^2)] - ∫_{-Q}^Q dk a_n(sin k - Λ) ρ^p(k),
where Q, A are the Fermi points of charge and spin, respectively, i.e. κ(Q)=0,ε_1(A)=0.
θ_H is the Heaviside step function.
We observe that in the ground state particles exist only within the Fermi points, while holes are located outside.
In the above equations, we also present the hole density distribution functions for Λ strings and k-Λ strings for our later calculation.
Defining the total densities ρ=ρ^p+ρ^h and σ_1=σ_1^p+σ_1^h, the density Bethe ansatz equations can be recast in a more succinct form
ρ(k) = 1/(2π) + cos k ∫_{-A}^A dΛ a_1(Λ - sin k) σ_1(Λ),
σ_1(Λ) = -∫_{-A}^A dΛ' a_2(Λ-Λ') σ_1(Λ') + ∫_{-Q}^Q dk a_1(sin k - Λ) ρ(k).
Thus, in terms of (<ref>) and (<ref>), the particle number and spin-down number can be expressed as
n_c = ∫_{-Q}^Q dk ρ(k),
n_↓ = ∫_{-A}^A dΛ σ_1(Λ).
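For given Fermi points Q and A, these coupled linear (Fredholm) equations can be solved by straightforward discretization. The sketch below, our own illustration with arbitrary parameters, uses a direct linear solve and then evaluates n_c and n_↓.

import numpy as np

u, Q, A = 1.0, 2.0, 0.8
Nk, Nl = 200, 200
k = np.linspace(-Q, Q, Nk); dk = k[1] - k[0]
lam = np.linspace(-A, A, Nl); dl = lam[1] - lam[0]

def a(n, x):
    return (n * u / np.pi) / ((n * u) ** 2 + x ** 2)

# Stack the unknowns x = [rho(k); sigma_1(lam)] and write (I - M) x = b.
M = np.zeros((Nk + Nl, Nk + Nl))
M[:Nk, Nk:] = np.cos(k)[:, None] * a(1, lam[None, :] - np.sin(k)[:, None]) * dl
M[Nk:, :Nk] = a(1, np.sin(k)[None, :] - lam[:, None]) * dk
M[Nk:, Nk:] = -a(2, lam[:, None] - lam[None, :]) * dl
b = np.concatenate([np.full(Nk, 1 / (2 * np.pi)), np.zeros(Nl)])
sol = np.linalg.solve(np.eye(Nk + Nl) - M, b)
rho, sig1 = sol[:Nk], sol[Nk:]

n_c = np.sum(rho) * dk
n_dn = np.sum(sig1) * dl
print(f"n_c = {n_c:.4f}, n_down = {n_dn:.4f}, m = {n_c / 2 - n_dn:.4f}")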
The momenta of the charge and spin quasiparticles for |k|≤ Q, |Λ|≤ A, and holons for |k|> Q, |Λ|> A in terms of the quasi-momentum parameters k_j,Λ_α^n, Λ_α^' n can be calculated via the following expressions
p(k) = 2π I_k/L = 2π ∫_0^k dk' ρ(k'),
p_1(Λ) = 2π J_1/L = 2π ∫_0^Λ dΛ' σ_1(Λ'),
p_n(Λ) = 2π J_n/L = 2π ∫_0^Λ dΛ' σ_n^h(Λ'), n≥2,
p_n^'(Λ) = 2π J_n^'/L = -2π ∫_0^Λ dΛ' σ_n^{'h}(Λ') + π(n+1), n≥1.
Let N_GS denote the total particle number and M_GS the spin-down number in the ground state.
Consider the ground state structure in the presence of magnetic field and 0<n_c<1 as follows (assume lattice length L is even):
M_e=N_GS<L, M_1=M_GS is an odd number less than N_GS/2, M_n=0 for n≥2 and M^'_n=0 for n≥1; that is, both charge and spin sectors are partially filled and all particles form charge and spin Fermi seas, the interiors of which are referred to as antiholon states for the charge and spinon states for the spin degrees of freedom.
At zero temperature, the system lies in its lowest energy state and particles are symmetrically distributed around zero momentum.
Using (<ref>), (<ref>) and (<ref>)-(<ref>), we can deduce that the {I_j} are half-odd integers filling the interval [-(N_GS-1)/2, (N_GS-1)/2], whereas the {J_α^1} are integers within [-(M_GS-1)/2, (M_GS-1)/2].
The momentum distribution p(k), p_1(Λ) respectively covers the intervals of [-π n_c,π n_c] and [-π n_↓,π n_↓].
It is inferred from (<ref>) that
the quantum numbers {J_α^1} satisfy |J_α^1| ≤ (N_GS-M_GS-1)/2. This implies that there are N_GS-M_GS vacancies for J_α^1 and thus N_GS-2M_GS holes are left.
Building on the quantum numbers of the ground state, elementary excitations can be classified into two types: gapless and gapped excitations.
Particle-hole excitations as well as their combinations belong to the gapless ones, which always occur at sufficient low energy sector.
In contrast, the length-n k-Λ strings and the Λ strings with n>1 require an additional energy to be excited. The key points for the calculation of excitation spectra are stated below:
* dressed energy is an exact excitation energy of the particle with the corresponding momentum, and holon energy corresponds to the negative dressed energy.
In other words, energies of particle excitations for different types of quasiparticles are κ(k), ε_n(Λ) and ε^'_n(Λ),
while their corresponding holon excitations are -κ(k), -ε_n(Λ) and -ε^'_n(Λ), respectively.
* by changing the quantum numbers M_e, M_n and M^'_n, one can analyze the excited structure with respect to the ground state, capturing the hole or particle excitations of each type of string.
* if there is no extra hole or particle appearing over the ground state, the particle-hole excitation can exist. This type of excitation does not alter the total particle number or parity.
* individual particle and holon excitations can be transformed into each other, e.g., in figure <ref>, a holon excitation can be created by conducting two particle-hole excitations over a particle excitation.
* when I_j converts from half-odd integer (integer) to integer (half-odd integer), an extra constant momentum ±π n_c needs to be added to the total momentum with a change of parity. (In the later study, we only consider the case of π n_c, which has a direct analogue in continuum models; the case of -π n_c is symmetric with the former.) See, for example, figure <ref>.
* a multi-parametric excitation can be decomposed into few-parametric elementary excitations. Once the few-particle excitations are determined, analytical expressions for multi-particle excitations are built from the few elementary ones. We elucidate this approach later; e.g., M_e=N_GS+2 is composed of two excitations with M_e=N_GS+1.
The techniques mentioned above enable us to classify the processes of complicated excitations, based on which we can perceive the unique collective nature of fractional excitations. In what follows we present various spectral patterns of elementary excitations, which are helpful for the later analysis.
§.§ II.3 Fractional charge and spin excitations
Elementary excitations of the Hubbard model were studied in literature, see review <cit.>. Here we present a comprehensive understanding of fractional excitations and various gapped excitations which provide deep insight into understanding Luttinger liquid, spin incoherent liquid and low energy physics.
§.§.§ Two holon excitations
We first study the elementary excitation obtained by flipping one spin-down particle, for which we have the change M_e=N_GS, M_1=M_GS-1.
Here we take M_GS an odd number.
From (<ref>), (<ref>), we can determine that the quantum number {I_j} are integers and J_α^1 are half-odd integers within ranges -L/2<I_j≤L/2 and |J_α^1| ≤1/2(N_GS-M_GS), respectively.
Therefore the vacancies for spin are N_GS-M_GS+1.
In contrast to the ground state, one more vacancy arises with one fewer particle in the spin sector, and thus two holes Λ_h1, Λ_h2, each carrying spin 1/2, are created in the spinon band for Λ.
For this spin flipping excitation, η pairing and spin magnetization (Δη^z,Δ S^z) are determined through Δη^z=Δ N/2, Δ S^z=Δ N-2Δ M/2 with N, M total particle number and spin-down number in the excited state, i.e., (Δη^z,Δ S^z)=(0,1) in this sample.
On the other hand, the particle-hole excitation can be constructed in the charge sector with a particle taken out from |k|<Q to |k|>Q, leaving a holon k_h inside the antiholon band and a particle k_p in the holon states.
The energies and momenta of the particle-hole excitation in the charge sector and the two-spinon excitation in the spin sector are
E_ph = κ(k_p)-κ(k_h), P_ph=p(k_p)-p(k_h),
E_ss = -ε_1(Λ_h1)-ε_1(Λ_h2), P_ss=-p_1(Λ_h1)-p_1(Λ_h2)+π n_c.
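Given numerical dressed energies and momenta (for instance from the solvers sketched above), the two-spinon continuum is assembled by scanning the two hole rapidities. The toy dispersions below are placeholders used only to make the sketch self-contained; they are not the actual solutions.

import numpy as np

A, n_c = 0.8, 0.7
def eps1(l):                       # placeholder dressed energy, eps1(±A) = 0
    return -0.5 * (1.0 - (l / A) ** 2)
def p1(l):                         # placeholder monotonic spin momentum
    return 0.3 * np.pi * l / A

lh = np.linspace(-A, A, 300)
L1, L2 = np.meshgrid(lh, lh)
E_ss = -eps1(L1) - eps1(L2)                        # two-spinon energy
P_ss = -p1(L1) - p1(L2) + np.pi * n_c              # two-spinon momentum
P_ss = (P_ss + np.pi) % (2 * np.pi) - np.pi        # fold into (-pi, pi]
# Scatter-plotting (P_ss, E_ss) traces out the two-spinon continuum, while a
# particle-hole pair (k_p, k_h) gives E_ph = kappa(k_p) - kappa(k_h) likewise.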
In figure <ref>, we demonstrate the results of the particle-hole and the two-spinon excitations for various parameters.
Obviously, one universal nature of low-energy excitation spectra is the existence of spin-charge separation behavior <cit.>, visualizing the theory of the spin-charge separated TLL.
In the absence of the magnetic field, the system has spin rotation symmetry.
Figure <ref> (a) shows the particle-hole excitation in the charge sector when away from half-filled lattice and the two-spinon excitation in the spin sector which fully accounts for the first Brillouin zone.
In contrast, Figure <ref> (b) shows the particle-hole excitation and the two-spinon excitation for strong coupling limit, reminiscent of the spin-charge separated spectra in the 1D Yang-Gaudin model <cit.>.
The slopes of the dispersion are characterised by velocities v_c, v_s, displaying v_s≪ v_c, which were recently measured experimentally in a quasi-1D trapped repulsive Fermi gas <cit.>.
This regime offers a remarkable possibility to study the spin-incoherent Luttinger liquid (SILL), which occurs when the temperature exceeds the spin characteristic energy, but is still less than the charge energy.
In the SILL, the spin degree of freedom is frozen. However, the charge degree of freedom propagates as a sound mode, behaving like spinless fermions <cit.>.
Figure <ref> (c), (d) presents the particle-hole excitation and the two-spinon excitation for an arbitrary magnetic field, away from half-filling and near half-filling, respectively.
In these two cases, the spin degree is highly suppressed in the presence of the magnetic field.
The spin and charge velocities are significantly different, and the spectra of spin and charge display a large separation.
Especially in figure <ref> (d),
the charge band is very narrow, behaving like a single particle, which was experimentally studied in <cit.>.
The spin velocity exceeds the sound velocity of charge.
Figure <ref> (e), (f) shows that in the charge sector the energy of the hole excitation at small momentum can exceed that of the particle excitation in the Hubbard model, due to the cosine term present in the charge dispersion; this is prohibited in continuum models <cit.>.
Nevertheless, when the system approaches the Mott phase, the charge excitation gradually shrinks to the case of figure <ref> (d), indicating the emergence of a single-particle excitation in the charge sector <cit.>. In general, the spin sector exhibits behaviour similar to that of the Heisenberg chain <cit.>.
When a magnetic field is applied, the two-spinon excitation is no longer the low-energy one, in the sense that it does not fill the entire Brillouin zone.
The role played by the low-energy excitation in spin degree of freedom is the spin particle-hole spectrum as depicted in figure <ref>.
§.§.§ Antiholon- and holon-spinon excitations
Let's further consider adding or removing a particle with spin up or down over the ground state in the Hubbard model.
There are four configurations with the following quantum numbers: (a) M_e=N_GS+1, M_1=M_GS+1; (b) M_e=N_GS-1, M_1=M_GS; (c) M_e=N_GS+1, M_1=M_GS; (d) M_e=N_GS-1, M_1=M_GS-1.
These four excitations can be classified by the charge and spin magnetization values (Δη^z,Δ S^z):
correspondingly,
(a) antiholon-spinon particle (1/2,-1/2), (b) holon-spinon particle (-1/2,-1/2), (c) antiholon-spinon hole (1/2,1/2), and (d) holon-spinon hole (-1/2,1/2).
Consequently, (a) has one antiholon particle in the charge and one spinon particle in the spin sector;
(b) has one holon in the charge and one spinon particle in the spin sector;
(c) has an antiholon particle in the charge and one spinon hole in the spin sector;
(d) has one holon in the charge and one spinon hole in the spin sector.
Analogous to previous discussion, we obtain the following results of excitation spectra
[ (a) {[ E_hs=κ(k_p)+ε_1(Λ_p); P_hs=p(k_p)+p_1(Λ_p)+π n_c ],.; (b) {[ E_hs=-κ(k_h)+ε_1(Λ_p); P_hs=-p(k_h)+p_1(Λ_p) ],.; (c) {[ E_hs=κ(k_p)-ε_1(Λ_h); P_hs=p(k_p)-p_1(Λ_h) ],.; (d) {[ E_hs=-κ(k_h)-ε_1(Λ_h); P_hs=-p(k_h)-p_1(Λ_h)+π n_c ].. ]
In the light of the above discussion, these four excitation patterns can be converted to each other through certain numbers of particle-hole excitations. Their continuum spectra are depicted in figure <ref>.
Comparing upper (low spin-down density) and lower (near half-filling) panels of the figure <ref>, we observe that the cases (a)(c)(d), which are related to the antiholon excitations in charge sector or spinon hole excitations in spin part, are severely affected by the filling and magnetization.
When the spin sector tends to vanish or the charge sector becomes gapped, the excitation profiles resemble single-particle behaviour.
In general, we observe that the gapless excitations occur only at a sufficiently small energy scale.
The charge and spin excitations are well decoupled only in certain configurations demonstrated in figure <ref> and figure <ref>.
This spin-charge separation phenomenon is unique in 1D strongly correlated electron systems and can be described by the spin-coherent and spin-incoherent TLL theory.
We will further discuss the spin-coherent and spin-incoherent TLLs in the 1D Hubbard model below.
§.§ II.4 k-Λ string excitations
In comparison with continuum systems, one peculiar excitation of the 1D Hubbard model is the gapped bound state called the k-Λ string; the length-1 string is formed by two electrons with opposite spins due to the lattice effect.
Such states make important contributions to optical conductivity close to half-filling <cit.>.
The continuum spectra are shaped like a parabola with a convex opening, see the bottom of the excitation spectra in figure <ref>.
The length-1 k-Λ string shows to be the lowest charge gapped excitation among the other k-Λ strings, leading to the appearance of quasiparticle Λ_1 in the distribution of quantum number J_α^'1.
Such a gapped excitation also leads to the change of the parity of I_j.
This length-1 k-Λ excitation has energy and momentum E=ε^'_1(Λ_1), P=p^'_1(Λ_1)+π n_c.
For a more general consideration of k-Λ excitations, one k-Λ string is added to the occupation numbers discussed in the section on antiholon- and holon-spinon excitations, i.e., for the antiholon-spinon case M_e=N_GS+1, M_1=M_GS+1, M^'_1=1, and we decompose the multi-parameter excitation into elementary ones.
Similarly, we obtain the information on such gapped excited states.
The energy and momentum of this compound case read
(a) {[ E_hs=κ(k_p)+ε_1(Λ_p)+ε^'_1(Λ_1); P_hs=p(k_p)+p_1(Λ_p)+p^'_1(Λ_1) ],.
(b) {[ E_hs=-κ(k_h)+ε_1(Λ_p)+ε^'_1(Λ_1); P_hs=-p(k_h)+p_1(Λ_p)+p^'_1(Λ_1)+π n_c ],.
(c) {[ E_hs=κ(k_p)-ε_1(Λ_h)+ε^'_1(Λ_1); P_hs=p(k_p)-p_1(Λ_h)+p^'_1(Λ_1)+π n_c ],.
(d) {[ E_hs=-κ(k_h)-ε_1(Λ_h)+ε^'_1(Λ_1); P_hs=-p(k_h)-p_1(Λ_h)+p^'_1(Λ_1) ]..
The corresponding excitation spectra are plotted in figure <ref>. The length-1 k-Λ string leads to gapped continuum bands over the states (a) antiholon-spinon excitation, (b) holon-spinon excitation, (c) antiholon-spinon excitation and (d) holon-spinon excitation.
§.§ II.5 High spin string excitations
Now we discuss the excitations of high spin string.
Due to the antiferromagnetic ordering, the spin Λ string bound states in the 1D Heisenberg chain
have received a great deal of interest <cit.>.
Like the spin root patterns in 1D repulsive Fermi gas, the Λ strings with length greater than 1 are gapped too.
We consider the case of the quantum numbers M_1=M_GS-1, M_n=1 with n>1, leading to an extra particle Λ_n created in the parameter space of J_α^n.
This configuration corresponds to the process of flipping n-1 spin-up and simultaneously removing a spin-down from the channel of J_α^1 to constitute a length-n Λ string, and thus one has the energy ε_n(Λ_n) and momentum p_n(Λ_n) for the newly generated quasiparticle.
This situation is different from a multi-spinon-magnon excitation, where the quantum number of down-spin M is fixed <cit.>.
Meanwhile we can apply a particle-hole excitation in spin sector together with the length-n Λ string.
In general, its energy and momentum are given by
E = -ε_1(Λ_h)+ε_1(Λ_p)+ε_n(Λ_n),
P = -p_1(Λ_h)+p_1(Λ_p)+p_n(Λ_n).
In figure <ref>, we present the continuum excitation spectra for high spin strings of length-2 and -3.
The magnitudes of the energy gaps depend on the length of the spinon bound states.
§ III. UNIVERSAL THERMODYNAMICS
The 1D Hubbard model lays out profound many-body physics at zero and finite temperatures.
It exhibits rich quantum phases and universal thermodynamics when the temperature is much less than the Fermi energy.
A comprehensive understanding of the universal low temperature behaviour still remains challenging due to complexity of the Bethe ansatz equations and its N!-many terms of the wave function.
Most studies of the thermal and magnetic properties of the 1D Hubbard model have been carried out at zero temperature and half-filling, or in the dilute limit, due to the complexity and intricacy of the spin and charge string patterns in the Bethe ansatz equations and the TBA equations.
Even in the low-temperature regime, there is no well established understanding of universal behaviour, such as TLL and quantum scaling functions etc.
Usually, one resorts to numerical simulations by iterative means <cit.>.
However, analytical results of the TLL and quantum criticality are still elusive.
In this section, we present our analytical and numerical study of the universal thermodynamics and quantum criticality of the 1D Hubbard model by means of the TBA equations near the quadruple critical point, a regime in which the model has rarely been studied.
§.§ III.1 Dimensionless ratios and phase diagram
In this subsection, we study the dimensionless ratios and their application in the 1D Hubbard model.
Before doing so, let us first give a brief view of the ground state phase diagram determined by equations (<ref>) and (<ref>).
In the limit of zero temperature, the dressed energies κ(k) and ε_1(Λ) can be used to analyze the phase transitions and thermodynamic quantities.
In the grand canonical ensemble, the magnetic field and chemical potential determine the integration boundaries Q, A through the conditions κ(Q)=0,ε_1(A)=0.
The particle density n_c=∫_{-Q}^Q dk ρ(k), the number of spin-down electrons per site n_↓=∫_{-A}^A dΛ σ_1(Λ) and the magnetization m=n_c/2-n_↓ can be obtained from the root density equations (<ref>) and (<ref>).
The magnetic field and chemical potential drive the system from one phase to another at zero temperature.
Phase transitions usually occur under the conditions κ(0)=0 or ε_1(0)=0, or, in the gapped case, κ(π)=0.
Using these conditions we may identify five phases <cit.>: (I) the vacuum with Q=0, A=0 or n_c=m=0; (II) a partially filled and spin fully polarized state with 0<Q<π, A=0 or 0<n_c<1, m=n_c/2, whose boundaries are given by Q=0 and Q=π; (III) a half-filled and spin fully polarized state with Q=π, A=0 or n_c=1, m=1/2; (IV) a partially filled and magnetized band with 0<Q<π, 0<A≤∞ or 0<n_c<1, 0≤ m<n_c/2, where the phase boundary between phases (II) and (IV) is given by κ(0)<0<κ(π), ε_1(0)=0; (V) a half-filled, magnetized band, also referred to as the Mott insulator, with Q=π, 0<A≤∞ or n_c=1, 0≤ m<n_c/2, where the phase boundary between (V) and (IV) is given by κ(π)=0, ε_1(0)<0.
Substituting each phase boundary conditions into the dressed energy equations (<ref>) and (<ref>), we can analytically determine all critical fields in the B-μ plane, also see <cit.>.
As a matter of fact, thermal properties feature dramatic quantum fluctuations around a quantum critical point (QCP),
and dimensionless ratios, i.e., ratios between the fluctuations of two different sources at a given temperature, are particularly informative; examples are the susceptibility Wilson ratio R^χ_s_w <cit.> and the compressibility Wilson ratio R^χ_c_w <cit.>
R^χ_s_w=4/3(π k_B/μ_Bg)^2χ_s/C_v/T, R^χ_c_w=π^2k^2_B/3χ_c/C_v/T,
describing the competition between magnetic fluctuation or particle number fluctuation and thermal fluctuations, respectively.
In the above expressions, χ_s (χ_c) is the magnetic susceptibility (compressibility), respectively.
These ratios can be constants independent of temperature in the TLL phase.
Therefore the ground state phase diagram can be characterized by the dimensionless ratios.
Moreover, the Grüneisen ratio <cit.>, introduced by Eduard Grüneisen <cit.> at the beginning of the 20th century in the study of the effect of volume change of a crystal lattice on its vibrational frequencies,
Γ = [V (∂p/∂T)|_{V,N}] / [(∂E/∂T)|_{V,N}] = (1/T) [ (∂²p/∂μ²)(∂p/∂T) − (∂²p/∂μ∂T)(∂p/∂μ) ] / [ (∂²p/∂μ²)(∂²p/∂T²) − (∂²p/∂μ∂T)² ],
has been widely used to study the caloric effect of solids and phase transitions associated with changes of volume, chemical potential, interaction and magnetic field.
Similar to the magnetic Grüneisen ratio, which quantifies the magnetocaloric effect in the refrigeration with the variation of magnetic field, the interaction driven Grüneisen ratio will be studied later, quantifying the caloric effect in the refrigeration driven by the variation of the interaction strength.
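In practice, once an equation of state p(μ, T) is available (e.g., as -f from the TBA free energy), the grand-canonical expression for Γ above can be evaluated by finite differences. The sketch below uses a free-fermion-like toy pressure purely for illustration; p_eos and the parameter values are our own placeholders.

import numpy as np

def p_eos(mu, T):
    # toy equation of state standing in for -f(mu, T) from the TBA equations
    eps = np.linspace(-2.0, 6.0, 4001)
    return T * np.trapz(np.log1p(np.exp(-(eps - mu) / T)), eps) / (2 * np.pi)

def grueneisen(mu, T, h=1e-3):
    p_mu = (p_eos(mu + h, T) - p_eos(mu - h, T)) / (2 * h)
    p_T = (p_eos(mu, T + h) - p_eos(mu, T - h)) / (2 * h)
    p_mumu = (p_eos(mu + h, T) - 2 * p_eos(mu, T) + p_eos(mu - h, T)) / h ** 2
    p_TT = (p_eos(mu, T + h) - 2 * p_eos(mu, T) + p_eos(mu, T - h)) / h ** 2
    p_muT = (p_eos(mu + h, T + h) - p_eos(mu + h, T - h)
             - p_eos(mu - h, T + h) + p_eos(mu - h, T - h)) / (4 * h ** 2)
    num = p_mumu * p_T - p_muT * p_mu
    den = p_mumu * p_TT - p_muT ** 2
    return num / den / T

print("Gamma(mu=0.5, T=0.1) =", grueneisen(0.5, 0.1))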
A significant aspect of these ratios is their characterization of the TLL and their scaling behaviour near critical points <cit.>. In figure <ref> (a), we present the three-dimensional plot of the magnetic Wilson ratio in the B-μ-R^χ_s_w parameter space, while in figure <ref> (b) we show the Wilson ratio as a function of μ for fixed B.
It is remarkable that the magnetic Wilson ratio is suddenly enhanced near the QCP, distinguishing different phases.
Moreover, in terms of bosonization results of the magnetic susceptibility, specific heat, Luttinger parameter and velocity, the Wilson ratios are given by
II: R^χ_s_w ≈ 2,
IV: R^χ_s_w ≈ 4v_cK_s/(v_s+v_c),
V: R^χ_s_w ≈ 8K_s,
I and III: R^χ_s_w = 0,
where K_s is the Luttinger parameter for spin, and v_c,s are sound velocities for charge and spin, respectively. These relations (<ref>) are in excellent agreement with figure <ref>.
The ground state of the 1D repulsive Hubbard model consists of five phases:
the empty lattice I, the partially filled and fully polarized phase II, the fully filled and polarized phase III, the partially filled and polarized phase IV, and the fully filled and partially polarized phase V (Mott insulator) in figure <ref>.
Among these, phase IV shows the most abundant physics.
In this phase, charge and spin degrees of freedom coexist and make up the phase of spin-charge separated TLLs <cit.>, showing non-Landau Fermi liquid behaviour in one dimension.
In the TLL phase, all quasiparticles form collective motions and thus decouple into two propagating modes with different velocities v_c, v_s <cit.>,
which depend on the value of the coupling and can be effectively estimated from the TBA equations. The separation phenomenon can also be observed in the excitation spectra. In APPENDIX A, we show that phase IV can be seen as a two-component free fluid characterised by the additivity of the free energy at low energies, while phases II and V are representatives of a single TLL:
f = f_0-π T^2/6(1/v_c+1/v_s) phase IV,
f = f_0-π T^2/61/v_c phase II,
f = f_0-π T^2/61/v_s phase V,
where v_c and v_s are defined through v_c=κ^'(Q)/(2πρ(Q)), v_s=ε_1^'(A)/(2πσ_1(A)) with cut-off points Q,A respectively.
The relation between free energy and velocity is a universal common nature enjoyed by a large family of systems<cit.>.
Taking the derivative of the free energy with respect to temperature,
the specific heat is directly derived: C_v=(πT/3)(1/v_c+1/v_s), C_v=πT/(3v_c) and C_v=πT/(3v_s) for phases IV, II and V, respectively.
The above results (<ref>)-(<ref>) provide the universal leading order correction of the temperature at low energy of interacting many-body systems, i.e., characteristic of thermodynamics in the TLLs.
§.§ III.2 Contributions from k-Λ string in phase III
In the low-temperature limit μ,B>T, the low energy behaviours of the system are immune to the gapped string excitations
when the states are away from the QCPs in figure <ref>.
By analysing the TBA equations (<ref>), (<ref>) and (<ref>), we observe that it is reasonable to ignore the gapped string states and retain only the gapless string states in both Λ and k-Λ strings at low energy.
The driving terms in these equations significantly determine the contributions to the low energy states.
The length-n Λ strings and k-Λ strings contribute less to the low energy states as n becomes larger.
Here we just calculate the contributions from the length-1 Λ and k-Λ strings in the phase III in which both are gapped.
The TBA equations suggest that the greater the value of the chemical potential μ and magnetic field B relative to the temperature T, the smaller the contributions of the k-Λ and Λ strings.
In the phase III, the absolute value of chemical potential is small, while the magnetic field B is large.
Therefore, without losing generality, we here consider the length-1 k-Λ and Λ strings at low temperature |μ|∼ T and B>T in the phase III.
After tedious iterations on the free energy, shown in APPENDIX B, we obtain a closed form of the free energy
f = -μ - u - B + g_{3/2}T^{3/2} + g_2T^2 + g_{5/2}T^{5/2} + O(T^3, e^{-4B/T}, e^{4μ/T}),
where the coefficients are given explicitly by
g_{3/2} = -(λ_1π^{1/2}/η_1^{1/2}) e^{(-2B+η_0)/T} + (f_{1/2}/(2π^{1/2})) e^{2μ/T} + f_{3/2}/(2π^{1/2}),
g_2 = -(λ_1f_{3/2}/(η_1^{1/2}u) + f_{1/2}/(2η_1^{1/2}πu)) e^{(-2B+η_0)/T} + (f_{1/2}f_{3/2}/(4πu)) e^{2μ/T},
g_{5/2} = -(λ_1f_{3/2}^2/(2η_1^{1/2}π^{1/2}u^2) + λ_2π^{1/2}/(4η_1^{3/2}) + f_{1/2}f_{3/2}/(2η_1^{1/2}π^{3/2}u^2)) e^{(-2B+η_0)/T} + (3f_{1/2}f_{3/2}^2/(32π^{3/2}u^2) + f_{3/2}/(32π^{1/2})) e^{2μ/T} + f_{5/2}/(32π^{1/2}),
where we denote f_n = Li_n(-e^{(2-μ-2u-B)/T}), and the parameters λ_1 = 1/(π√(1+u^2)), λ_2 = (1-2u^2)/(π(1+u^2)^{5/2}), η_0 = 4(u-√(1+u^2)), and η_1 = 2/(1+u^2)^{3/2}.
By analysing (<ref>), we observe that in the functions g_{3/2}, g_2, g_{5/2}, the terms involving the exponents e^{(-2B+η_0)/T} and e^{2μ/T} come from the gapped length-1 Λ and k-Λ strings, respectively, while the terms without exponential factors stem from the contributions of the gapless charge k.
In the above results, we have omitted the contributions from the strings with the lengths more than length-2.
From figure <ref>, we observe that our analytic expression of the free energy is consistent with the numerical result obtained from the TBA equations
(<ref>), (<ref>) and (<ref>).
The contributions from the k-Λ string are greater than those from the Λ strings due to the choice of parameters B>T and |μ|∼T.
Moreover, from the absolute value of free energy, we observe that the order of the k-Λ string contributions is O(10^-4) as temperature tends to T=1.
In general, it can be safely ignored in the low temperature behaviour.
In the phase III, the contributions from the gapped Λ strings are very small and negligible too.
But the length-1 Λ string plays an important role in the phase IV and V.
More detailed calculations of the free energy for the phase III are given in APPENDIX B.
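The closed-form expansion above is straightforward to evaluate numerically. The sketch below sums the polylogarithm series for f_n directly, assuming the polylogarithm form of f_n quoted above; the parameter values are arbitrary phase-III-like illustrations, and only the leading T^{3/2} correction is kept.

import numpy as np

def Li(s, z, kmax=5000):
    # direct series Li_s(z) = sum_{k>=1} z^k / k^s, valid here since |z| < 1
    ks = np.arange(1, kmax + 1)
    return np.sum(z ** ks / ks ** s)

u, mu, B, T = 2.0, -0.1, 1.0, 0.1
lam1 = 1.0 / (np.pi * np.sqrt(1 + u ** 2))
eta0 = 4.0 * (u - np.sqrt(1 + u ** 2))
eta1 = 2.0 / (1 + u ** 2) ** 1.5
z = -np.exp((2 - mu - 2 * u - B) / T)    # argument of f_n = Li_n(z)
f12, f32 = Li(0.5, z), Li(1.5, z)
g32 = (-lam1 * np.sqrt(np.pi / eta1) * np.exp((-2 * B + eta0) / T)
       + f12 / (2 * np.sqrt(np.pi)) * np.exp(2 * mu / T)
       + f32 / (2 * np.sqrt(np.pi)))
f_leading = -mu - u - B + g32 * T ** 1.5  # keeping only the T^{3/2} correction
print("f ~", f_leading)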
§.§ III.3 Magnetic properties at zero temperature near quadruple critical point
The magnetic properties of the 1D Hubbard model at zero temperature provide a benchmark for understanding universal low energy physics.
In order to obtain insightful analytical results for the magnetic properties of the model, here we calculate them analytically in the limits of high density (Fermi point Q tending to π) and high magnetization (A tending to 0), namely, in the region of figure <ref> around the quadruple point.
The physics in this region has not been studied thoroughly in the literature.
The main idea is to expand the TBA equations in terms of small quantities δ=π-Q and A.
To this end, we transform the charge integration interval from [0,Q] to [0,π-Q], so that the second term of (<ref>) can be re-expressed by substituting k by π-k, namely
∫_{-Q}^Q dk cos k a_1(sin k - Λ) κ(k)
= ∫_{-Q}^0 dk cos k (a_1(sin k + Λ) + a_1(sin k - Λ)) κ(k)
= -4Re√(1-(Λ-iu)^2) + 4u + ∫_0^{π-Q} dk cos k (a_1(sin k + Λ) + a_1(sin k - Λ)) κ(π-k).
Using the above expression, the spin dressed energy is rewritten in terms of small integral boundaries
ε_1(Λ) = ε^0(Λ) + ∫_0^{π-Q} dk cos k (a_1(sin k + Λ) + a_1(sin k - Λ)) κ(π-k)
- ∫_{-A}^A dΛ' a_2(Λ-Λ') ε_1(Λ'),
where ε^0(Λ)=2B-4Re√(1-(Λ-iu)^2)+4u is the leading term.
We further write the dressed energies κ(π-k)=∑_i=0^∞κ^i(π-k)δ^i,ε_1(Λ)=∑_i=0^∞ε^i(Λ)A^i, where the coefficients κ^i, ε^i can be determined by iterations of the TBA equations.
Here, we present κ(k) in terms of δ =π-Q
∫_0^{π-Q} dk cos k (a_1(sin k + Λ) + a_1(sin k - Λ)) κ(π-k)
= δ 2a_1(Λ)κ^0(π) + δ^2 (2a_1(Λ)κ^1(π) - a_1(Λ)κ^{0'}(π))
+ δ^3 (2a_1(Λ)κ^2(π) - a_1(Λ)κ^{1'}(π) + (1/3)(a_1''(Λ) - a_1(Λ))κ^0(π) + (1/3)a_1(Λ)κ^{0''}(π)),
where the prime ' (double prime '') denotes the first (second) derivative with respect to k. In order to derive the above expansion, the following differentiation formulas are used:
F(n) = ∫_0^{a(n)} dx f(x,n),
F'(n) = ∫_0^{a(n)} dx ∂f(x,n)/∂n + f(a(n),n) da(n)/dn,
where n stands for a small quantity.
Using the above expressions, we may calculate (<ref>)
ε_1(Λ) = ε^0(Λ) - ∫_{-A}^A dΛ' a_2(Λ-Λ') ε_1(Λ'),
where the leading term ε^0(Λ) incorporating the role of charge is modified as:
ε^0(Λ) = 2B-4Re√(1-(Λ-iu)^2)+4u
+δ(2a_1(Λ)κ^0(π))
+δ^2(2a_1(Λ)κ^1(π)-a_1(Λ)κ^0'(π))
+δ^3(2a_1(Λ)κ^2(π)-a_1(Λ)κ^1'(π)+1/3(a_1^”(Λ)-a_1(Λ))κ^0(π)+1/3a_1(Λ)κ^0”(π)).
Taking an expansion on equation (<ref>) and comparing order by order with respect to the small A on both sides, we obtain the coefficients of the first three orders in ε_1(Λ)
ε_1(Λ) = ε^0(Λ)+A(-2a_2(Λ)ε^0(0))
+A^2(-2a_2(Λ)ε^1(0)-a_2(Λ)ε^0'(0))
+A^3(-a_2(Λ)ε^1'(0)-2a_2(Λ)ε^2(0)-1/3(a_2^”(Λ)ε^0(0)+a_2(Λ)ε^0”(0))),
with
ε^0(0) = 2B+4(u-√(1+u^2))+2/π uκ^0(π)δ+1/π u(2κ^1(π)-κ^0'(π))δ^2,
ε^0'(0) = 0, ε^0”(0)=4/(1+u^2)^3/2.
The quantities associated with the charge degrees of freedom can be calculated in the same manner.
Consequently, expressions of charge and spin equations can be written in terms of the orders of δ and A
κ(k) = δ̅M_cA̅^T,
ε_1(Λ) = δ̅M_sA̅^T,
where δ̅=(δ^0,δ^1,δ^2,δ^3),A̅=(A^0,A^1,A^2,A^3), T denotes transpose and M_c,M_s are two matrices which take the following forms
M_c = (c_{ij}) with nonzero entries
c_{11} = -2cos k - μ - 2u - B,
c_{21} = 2a_1(sin k)β_2,
c_{22} = (4/(πu))a_1(sin k)β_1,
c_{31} = -(2/(πu))a_1(sin k)β_2,
c_{32} = -(4/(π^2u^2))a_1(sin k)β_1 + (8/(π^2u^2))a_1(sin k)β_2,
c_{41} = (2/(π^2u^2))a_1(sin k)β_2 + (1/3)a_1''(sin k)β_2 + (4/(3(1+u^2)^{3/2}))a_1(sin k),
and all other entries zero;
M_s = (m_{ij}) with nonzero entries
m_{11} = 2B + 4u - 4Re√(1-(Λ-iu)^2),
m_{12} = 2a_1(Λ)β_1,
m_{14} = (1/3)(a_1''(Λ) - a_1(Λ))β_1 - (2/3)a_1(Λ),
m_{21} = -2a_2(Λ)β_2,
m_{22} = (4/(πu))(a_1(Λ)β_2 - a_2(Λ)β_1),
m_{23} = (8/(π^2u^2))a_1(Λ)β_1,
m_{31} = (2/(πu))a_2(Λ)β_2,
m_{32} = -(4/(π^2u^2))a_1(Λ)β_2 - (8/(π^2u^2))a_2(Λ)β_2 + (4/(π^2u^2))a_2(Λ)β_1,
m_{41} = -(2/(π^2u^2))a_2(Λ)β_2 - (1/3)a_2''(Λ)β_2 - (4/(3(1+u^2)^{3/2}))a_2(Λ),
and all other entries zero.
To simplify our notations in the following analysis, we define
β_1=2-μ-2u-B, β_2=2B+4(u-√(1+u^2)),
which characterize the leading contributions in the charge and spin dressed energies, respectively.
By analysing the construction of the matrices M_c,M_s, it can be deduced that β_1 and β_2 are at least the second orders of A or δ, namely we have
κ(k) = -2cos k - μ - 2u - B + 2a_1(sin k)β_2A + (4/(3(1+u^2)^{3/2}))a_1(sin k)A^3,
ε_1(Λ) = 2B + 4(u - Re√(1-(Λ-iu)^2))
+ 2a_1(Λ)β_1δ - (2/3)a_1(Λ)δ^3 - 2a_2(Λ)β_2A - (4/(3(1+u^2)^{3/2}))a_2(Λ)A^3.
In the above results, we only retain terms up to the third order.
By applying the Fermi point conditions κ(Q)=0 and ε_1(A)=0, we obtain β_1 and β_2 as
β_1 = δ^2 + (4η_1/(3πu))A^3,
β_2 = -(4/(3πu))δ^3 - η_1(A^2 + (2/(3πu))A^3)
with η_1=2/(1+u^2)^3/2.
Utilizing iteration repeatedly, the Fermi point conditions give rise to (up to the order O(β^2_1,β^2_2))
δ^2 = β_1 - (4/(3πu))η_1^{1/2}(-β_2)^{3/2}{1 - (1/(πu))[(-β_2/η_1)^{1/2} + 2β_1^{3/2}/(-β_2)]},
η_1A^2 = -β_2 - (2/(3πu))[(-β_2)^{3/2}/η_1^{1/2} + 2β_1^{3/2}] + (2/(3π^2u^2))(-β_2)^2/η_1
+ (8/(3π^2u^2))η_1^{1/2}[(1/2)(-β_2)^{1/2}β_1^{3/2} + (-β_2)^{3/2}β_1^{1/2}].
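These relations are easy to use in practice: given β_1 and β_2, one can recover δ and A by a simple fixed-point iteration of the defining relations above, starting from the leading-order values. The sketch below is our own illustration with arbitrary parameter values.

import numpy as np

u = 1.5
eta1 = 2.0 / (1 + u ** 2) ** 1.5
delta0, A0 = 0.15, 0.10                      # "true" values used to build the betas
beta1 = delta0 ** 2 + 4 * eta1 / (3 * np.pi * u) * A0 ** 3
beta2 = (-4 / (3 * np.pi * u) * delta0 ** 3
         - eta1 * (A0 ** 2 + 2 / (3 * np.pi * u) * A0 ** 3))

delta, A = np.sqrt(beta1), np.sqrt(-beta2 / eta1)   # leading-order seed
for _ in range(50):
    delta = np.sqrt(beta1 - 4 * eta1 / (3 * np.pi * u) * A ** 3)
    A = np.sqrt((-beta2 - 4 / (3 * np.pi * u) * delta ** 3
                 - 2 * eta1 / (3 * np.pi * u) * A ** 3) / eta1)
print(delta0, delta, A0, A)                  # recovered (delta, A) match closely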
Similarly, we apply the above technique to deal with the density Bethe ansatz equations (<ref>) and (<ref>) and find
the matrix form:
ρ(k) = δ̅M_dcA̅^T,
σ_1(Λ) = δ̅M_dsA̅^T,
where the matrices M_dc,M_ds are given by
M_dc = (d_{ij}) with nonzero entries
d_{11} = 1/(2π),
d_{21} = 2cos k a_1(sin k)λ_1,
d_{22} = -(2/(π^2u))cos k a_1(sin k),
d_{31} = -(2/(πu))cos k a_1(sin k)λ_1,
d_{32} = (4/(πu))cos k a_1(sin k)((2/(πu))λ_1 + 1/(2π^2u)),
d_{41} = (2/(π^2u^2))cos k a_1(sin k)λ_1 + (1/3)a_1''(sin k)cos k λ_1 + (1/3)cos k a_1(sin k)λ_2,
and all other entries zero, and
M_ds = (s_{ij}) with nonzero entries
s_{11} = (1/π)Re[1/√(1-(Λ+iu)^2)],
s_{12} = -(1/π)a_1(Λ),
s_{14} = -(1/(6π))a_1''(Λ),
s_{21} = -2a_2(Λ)λ_1,
s_{22} = (4/(πu))(a_1(Λ)λ_1 + (1/(2π))a_2(Λ)),
s_{23} = -(4/(π^3u^2))a_1(Λ),
s_{31} = (2/(πu))a_2(Λ)λ_1,
s_{32} = -(4/(π^2u^2))a_1(Λ)λ_1 - (8/(π^2u^2))a_2(Λ)λ_1 - (2/(π^3u^2))a_2(Λ),
s_{41} = -(2/(π^2u^2))a_2(Λ)λ_1 - (1/3)a_2''(Λ)λ_1 - (1/3)a_2(Λ)λ_2,
and all other entries zero,
where λ_1 = 1/(π√(1+u^2)) and λ_2 = (1-2u^2)/(π(1+u^2)^{5/2}) are constants set by the interaction strength.
By integrating the densities within the Fermi points (<ref>), (<ref>), the particle number n_c and down-spin number n_↓ are given by
n_c = 1 - 2∫_0^{π-Q} dk ρ(π-k) = 1 - 2δ(1/(2π) - (2λ_1/(πu))A + (2λ_1/(π^2u^2))A^2) - (4/(π^3u^2))δ^2A,
n_↓ = 2[A(λ_1 - (2/(πu))δ(1/(2π) - (2λ_1/(πu))A)) - A^2(1/(πu))(λ_1 - (1/(π^2u))δ) + A^3(λ_1/(π^2u^2) + λ_2/6)].
It is more intuitive to express the Fermi points in terms of particle numbers.
Based on the above relations, we have the small parameters near the Fermi points δ=π-Q and A in terms of doping parameter n̂_c=1-n_c and n_↓,
δ = πn̂_c(1+2/un_↓+4/u^2n_↓^2),
A = π u/2ζ_1 n_↓[1+ζ_1(n̂_c+n_↓/2)+ζ^2_1(n̂_c+n_↓/2)^2-ζ_2n_↓^2]
with ζ_1=1/π uλ_1, and ζ_2=λ_2/24λ^3_1.
Using the free energy (<ref>) with T=0 and the above expression (<ref>) for κ(k), we may obtain
f = -μ - u - B - (8λ_1/(3(1+u^2)^{3/2}))A^3 - (2/(3π))δ^3.
On the other hand, in terms of the discrete symmetries of the repulsive Hubbard model <cit.>, the free energy of the repulsive system can be transformed to that of the attractive case via the relation f_r(μ,B,T,u)=f_a(-B,-μ,T,-u)-μ-B, where subscript a(r) means attractive (repulsive) interaction.
It turns out that the results obtained not only cover those of the attractive model <cit.> in the low density and strong coupling limits, but are also valid for arbitrary interaction strength away from the strong coupling limit.
In general, it is straightforward to show that there is a one-to-one correspondence from the high density area for repulsive case to low density regime for attractive case <cit.>.
Taking derivatives of the free energy, one can obtain the relationship between their thermodynamic quantities:
∂ f_r/∂μ = -∂ f_a/∂ B-1 ⟶ n_r,μ=1-2m_a,B,
∂ f_r/∂ B = -∂ f_a/∂μ-1 ⟶ 2m_r,B=1-n_a,μ.
Using these results, and by the definitions of spin and charge characteristic velocities v_c=κ^'(k)/2πρ(k)|_k=Q, v_s=ε_1^'(Λ)/2πσ(Λ)|_Λ=A, we obtain the following quantities
κ^'(Q) = 2δ-1/3δ^3,
ρ(Q) = 1/2π-2λ_1/π uA+2λ_1/π^2 u^2A^2-2λ_1/π^3 u^3A^3+2λ_1/π u^3δ^2 A+2λ_1/3π u^3A^3
-λ_2/3π uA^3+λ_1/π uδ^2A+2/π^3 u^2δ A-2/π^4 u^3(1+4πλ_1)δ A^2,
ε_1^'(A) = 2η_1A+η_2/6A^3,
σ_1(A) = λ_1+λ_2/2A^2-1/π^2 uδ+4λ_1/π^2 u^2δ A-8λ_1/π^3 u^3δ A^2
+1/π^2 u^3δ A^2-4/π^4 u^3δ^2 A+1/3π^2 u^3δ^3-λ_1/π u A+1/π^3 u^2δ A
+λ_1/3π u^3A^3+λ_1/π^2 u^2A^2-1/π^4 u^3δ A^2-λ_1/π^3 u^3A^3-λ_2/6π uA^3
with η_2=121-4u^2/π(1+u^2)^7/2.
The analytic results and numerical simulation of sound velocities and thermodynamic properties by the solution of TBA equations are showed in figure <ref>.
From the figure <ref> (a), we observe that v_c approaches zero at the critical point for the phase transition from magnetized phase IV to the Mott phase V.
On the other hand, v_s going to zero signifies an approach to the spin fully polarized phase II.
Thus the velocities can serve as a signature of quantum phase transitions.
In figure <ref> (b), we show the specific heat in terms of velocities at various temperatures, comparing the analytical results with numerical calculations.
The analytical results of the specific heat are given by
C_v = π T/3(1/v_c+1/v_s) phase IV,
C_v = π T/31/v_c phase II,
C_v = π T/31/v_s phase V.
The quantity C_v/T becomes temperature independent for the TLL states.
§.§ III.4 Additivity rules of charge and spin susceptibilities
Now we calculate the charge and spin susceptibilities χ_c=∂ n_c/∂μ|_B(m) and χ_s=∂ m/∂ B|_μ(n_c) with m=n_c/2-n_↓.
In the grand canonical ensemble driven by external potentials B and μ, we have the following expressions of spin and charge susceptibilities
χ_c = ∂ n/∂μ|_B=(-1/π+4 λ_1/π uA-4 λ_1/π^2 u^2A^2-8/π^3 u^2δ A )∂δ/∂μ
+(4 λ_1/π uδ-8λ_1/π^2 u^2δ A-4/π^3 u^2δ^2)∂ A/∂μ=χ_c^(1)+χ_c^(2),
χ_s = ∂ m/∂ B|_μ=[-1/2π-4/π^3 u^2δ A+2/π^2 u(1+πλ_1)A-2/π^3 u^2(1+5πλ_1)A^2 ]∂δ/∂ B
+[-2λ_1(1-2/π uA+3/π^2 u^2A^2)-2/π^3 u^2δ^2.
+.2/π^2 u(1+πλ_1)δ-4/π^3 u^2(1+5πλ_1)δ A-λ_2 A^2]∂ A/∂ B
= χ_s^(1)+χ_s^(2).
We observe that the charge susceptibilities χ_c^(1), χ_c^(2) (the spin susceptibilities χ_s^(1), χ_s^(2)) denote the contributions from the two different sources δ and A, respectively, see (<ref>) and (<ref>).
The susceptibilities can be split into two parts which are reminiscent of the additivity rules found in the FFLO state of attractive situation <cit.>.
It should be noted that the decomposition terms stem from two sources, i.e., the changes of charge and spin with respect to the chemical potential and magnetic field.
In (<ref>) and (<ref>), the derivatives ∂δ/∂μ, ∂ A/∂μ, ∂δ/∂ B and ∂ A/∂ B can be determined by taking the derivative of both sides of (<ref>) and (<ref>) (note that β_1, β_2 depend on μ and B):
∂ A/∂μ = 1/η_1(A+A^2/π u)π u/δ-4η_1/π uA^2,
∂δ/∂μ = -(A+A^2/π u)/2δ(A+A^2/π u)-8/π^2 u^2δ^2 A^2,
∂ A/∂ B = -1-1/π uδ/η_1(A+A^2/π u)-4η_1/π^2 u^2δ A^2,
∂δ/∂ B = -1-3A/π u/2δ(1+A/π u)-8/π^2 u^2δ^2 A.
We plot the magnetic susceptibility in figure (<ref>) (c) and observe that the susceptibility does not depend on temperature in the TLL regime, but becomes divergent in the vicinities of the critical points.
In the canonical ensemble with fixed density and magnetization, we take the total derivatives of (<ref>) and (<ref>) with respect to n_c and m under the condition dn_↓=dn_c/2-dm,
dδ = [π(1+2/un_↓+4/u^2n_↓^2)-πn̂_c(1/u+4/u^2n_↓)]dn̂_c-πn̂_c(2/u+8/u^2n_↓)dm,
dA = dn̂_c{-π uζ_1/4[1+ζ_1(n̂_c+n_↓/2)+ζ^2_1(n̂_c+n_↓/2)^2-ζ_2n_↓^2]
+π uζ_1/2n_↓[3ζ_1/4+3ζ^2_1/2(n̂_c+n_↓/2)+ζ_2n_↓]}
+dm{-π uζ_1/2[1+ζ_1(n̂_c+n_↓/2)+ζ^2_1(n̂_c+n_↓/2)^2-ζ_2n_↓^2]
+π uζ_1/2n_↓[-ζ_1/2-ζ^2_1(n̂_c+n_↓/2)+2ζ_2n_↓]}.
Here ∂μ/∂ n_c|_m,∂ B/∂ m|_n_c can be similarly extracted from (<ref>) (<ref>).
Similarly to (<ref>)-(<ref>), with the difference being the quantity that is differentiated, we have
-∂μ/∂n̂_c(m)|_m(n̂_c)-∂ B/∂n̂_c(m)|_m(n̂_c)=2δ∂δ/∂n̂_c(m)|_m(n̂_c)+4η_1/π uA^2∂ A/∂n̂_c(m)|_m(n̂_c),
∂ B/∂n̂_c(m)|_m(n̂_c)=-2/π uδ^2∂δ/∂n̂_c(m)|_m(n̂_c)-η_1(A+A^2/π u)∂ A/∂n̂_c(m)|_m(n̂_c).
Therefore, by solving this set of equations together with total differential expressions (<ref>) and (<ref>), we obtain the charge and spin susceptibilities in terms of n_c, m in the canonical ensemble
1/χ_c = ∂μ/∂ n_c|_m=2δ(1-δ/π u)∂δ/∂n̂_c|_m-η_1A(1-3A/π u)∂ A/∂n̂_c|_m
= 1/χ_c^(1)+1/χ_c^(2),
1/χ_s = ∂ B/∂ m|_n̂_c=-2/π uδ^2∂δ/∂ m|_n̂_c-η_1A(1+A/π u)∂ A/∂ m|_n̂_c
= 1/χ_s^(1)+1/χ_s^(2).
In contrast to the case of fixed external potentials, these formulas expose a reciprocal additivity relation at fixed magnetization and density, see (<ref>) and (<ref>).
The two ensembles are related to each other by the Jacobian determinant evaluated by total differential of μ, B with respect to n, m.
Apart from the numerical arithmetic and approximation methods employed above, we would like to discuss the dressed charge matrix Z <cit.>
Z=(ξ_cc(Q) ξ_cs(A)
ξ_sc(Q) ξ_ss(A)),
whose elements are determined by
ξ_a b(x_b)=δ_a b+∑_d∫_-X_d^X_d dx_d ξ_a d(x_d) K_d b(x_d, x_b).
Here the kernels are given by
K_c c(x_c, y_c) = 0,
K_s c(x_c, x_s) = a_1(sin(x_c)-x_s),
K_c s(x_c, x_s) = cos(x_c) a_1(sin(x_c)-x_s),
K_s s(x_s, y_s) = -a_2(x_s-y_s).
These dressed charges can be used to calculate rigorous solutions for susceptibilities and conformal dimensions in asymptotics of correlation functions
<cit.>.
We will return to this study later.
One advantage of this method is that the iteration process is greatly simplified both analytically and numerically, and the final result is clear and unambiguous.
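To make this concrete, the following is a minimal numerical sketch of such a fixed-point iteration (the trapezoidal grids, the grid size N, and treating the Fermi points Q and A as given inputs are our own assumptions, not prescriptions from the text):

```python
import numpy as np

def dressed_charge_matrix(Q, A, u, N=400, tol=1e-10, max_iter=500):
    """Iterate xi_ab(x_b) = delta_ab + sum_d int_{-X_d}^{X_d} dx_d
    xi_ad(x_d) K_db(x_d, x_b) on trapezoidal grids and return the
    2x2 dressed charge matrix Z evaluated at the Fermi points (Q, A)."""
    def a(n, x):  # a_n(x) = (1/2pi) * 2nu / ((nu)^2 + x^2)
        return (1.0 / (2.0 * np.pi)) * 2.0 * n * u / ((n * u) ** 2 + x ** 2)

    k = np.linspace(-Q, Q, N)       # charge quasimomenta
    lam = np.linspace(-A, A, N)     # spin rapidities (length-1 strings)
    wk = np.full(N, k[1] - k[0]); wk[[0, -1]] *= 0.5
    wl = np.full(N, lam[1] - lam[0]); wl[[0, -1]] *= 0.5

    # kernels K_sc, K_cs, K_ss (K_cc = 0), cf. the equations above
    K_sc = a(1, np.sin(k)[None, :] - lam[:, None])                   # (lam, k)
    K_cs = np.cos(k)[:, None] * a(1, np.sin(k)[:, None] - lam[None, :])
    K_ss = -a(2, lam[:, None] - lam[None, :])

    xi_cc, xi_cs = np.ones(N), np.zeros(N)
    xi_sc, xi_ss = np.zeros(N), np.ones(N)
    for _ in range(max_iter):
        cc = 1.0 + (wl * xi_cs) @ K_sc
        cs = (wk * xi_cc) @ K_cs + (wl * xi_cs) @ K_ss
        sc = (wl * xi_ss) @ K_sc
        ss = 1.0 + (wk * xi_sc) @ K_cs + (wl * xi_ss) @ K_ss
        err = max(np.abs(cc - xi_cc).max(), np.abs(ss - xi_ss).max())
        xi_cc, xi_cs, xi_sc, xi_ss = cc, cs, sc, ss
        if err < tol:
            break
    return np.array([[xi_cc[-1], xi_cs[-1]], [xi_sc[-1], xi_ss[-1]]])
```

The resulting Z can then be inserted directly into the susceptibility and conformal-dimension formulas quoted below.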
In the grand canonical ensemble, some exact relations are relevant to the dressed charge matrix (<ref>), for example <cit.>,
χ_c|_B = ∂ n_c/∂μ|_B=Z^2_cc/π v_c+Z^2_cs/π v_s=χ_c^(1)+χ_c^(2),
χ_s|_μ = ∂ m/∂ B|_μ =(Z_cc-2Z_sc)^2/2π v_c+(Z_cs-2Z_ss)^2/2π v_s=χ_s^(1)+χ_s^(2).
Regarding the circumstance of variable changes, we can work out the Jacobian determinant from the equations (<ref>) and (<ref>), namely, J=2(detZ)^2/(π^2v_cv_s).
As a result, we obtain [The second equality in the equation (6.79) of the book <cit.> misses a factor of 2.] for canonical ensemble
1/χ_c|_m = ∂μ/∂ n_c|_m=π/4v_c(Z_cs-2Z_ss)^2+v_s(Z_cc-2Z_sc)^2/(detZ)^2=1/χ_c^(1)+1/χ_c^(2),
1/χ_s|_n_c = ∂ B/∂ m|_n_c =π/2v_cZ_cs^2+v_sZ_cc^2/(detZ)^2=1/χ_s^(1)+1/χ_s^(2).
Comparing our analytic results (<ref>) and (<ref>) (or (<ref>), (<ref>)) with the dressed charge formula (<ref>) and (<ref>) (or (<ref>), (<ref>)), we observe that the derivative term with respect to δ in the (<ref>) is equivalent to the first term associated with the charge velocity v_c in (<ref>), whereas the derivative term with respect to A in (<ref>) is equivalent to the second term associated with the spin velocity v_s in (<ref>), respectively.
Similar correspondences can be found from (<ref>) and (<ref>) in the grand canonical ensemble, as well as in these equations (<ref>), (<ref>) and (<ref>), (<ref>), respectively.
We remark that the results we obtained can also be applied to the phase II and the Mott phase V, i.e., omitting the terms related to A in II or δ in phase V, respectively.
Explicitly, the corresponding thermodynamic quantities and velocities are given by
phase II:
n_c=1-δ/π,
n_↓=0,
χ_c=1/2πδ,
χ_s=1/4πδ,
v_c=2δ -1/3δ^3,
phase V:
n_c = 1,
n_↓ = ∫_-A^A dΛ σ_1(Λ)=2[λ_1A-λ_1/π uA^2+A^3(λ_1/π^2 u^2+λ_2/6)],
χ_c = 0,
χ_s = [2λ_1(1-2/π uA+3/π^2 u^2A^2)+λ_2 A^2]1/η_1A(1+A/π u),
v_s = η_1A+η_2/12A^3/π(λ_1+λ_2/2A^2-λ_1/π u A+λ_1/3π u^3A^3+λ_1/π^2 u^2A^2-λ_1/π^3 u^3A^3-λ_2/6π uA^3).
Finally, we make direct comparison between our analytical results and numerical results from the TBA equations.
In figure <ref>, the first row shows the density (a) and the compressibility in the grand canonical ensemble (b) and in the canonical ensemble (c), while the second row shows the magnetization (d) and the susceptibility in the grand canonical ensemble (e) and in the canonical ensemble (f).
The density and magnetization in figure <ref> (a), (d) cross the quantum phases II, IV and V as the chemical potential varies.
In figure <ref> (b), (c), (e), (f), we observe additivity rules in both the grand canonical and the canonical ensemble.
All analytic results agree well with the corresponding numerical results in this figure.
§.§ III.5 Quantum criticality near quadruple critical point
The results obtained in the previous subsections present an elementary understanding of the ground state properties and the behaviour of the TLLs.
Although the criticality induced by the variation of some external potentials such as magnetic field and chemical potential has been well studied in literature, this is not the case for the interaction-driven quantum critical behaviour, even though interaction plays a central role in many-body systems.
In this subsection, we study quantum phase transitions and the universal scaling functions of the properties of the 1D Hubbard model in terms of external fields; the interaction-driven quantum transitions are deferred to the last section.
From the phase diagram in figure <ref>, it can be observed that a phase transition occurs in the repulsive Hubbard model when a degree of freedom appears, disappears or becomes gapped.
Although the phase transitions occur at zero temperature, thermal and quantum fluctuations reach comparable energy scales in the V-shaped critical regions at finite temperatures.
A natural question is whether quantum critical region has its own set of universal laws similar to the TLL.
The obvious answer, of course, is that the quantum criticality for the same universality class of models is insensitive to the microscopic details of the systems and shares general and universal critical phenomena characterised by the critical exponents <cit.>.
For example, the universal scaling laws of magnetization and susceptibility satisfy <cit.>
m = m_0+T^d/z+1-1/(vz)O_1[μ-μ_c/T^1/vz,B-B_c/T^1/vz,u-u_c/T^1/vz],
χ_s = χ_s0+T^d/z+1-2/(vz)O_2[μ-μ_c/T^1/vz,B-B_c/T^1/vz,u-u_c/T^1/vz],
respectively.
Here m and χ_s represent first- and second-order thermodynamic quantities, respectively, μ_c, B_c, u_c denote the critical fields at the critical point, and z, v stand for the dynamical and the correlation critical exponents, respectively.
The scaling functions (<ref>) and (<ref>) consist of two parts: the first term denotes the background contributions stemming from the unchanged degrees of freedom, and the second term accounts for the singular part stemming from the sudden change of the density of state of one degree of freedom.
Another salient feature of the critical region is that the characteristic length scale diverges, providing us with a feasible opportunity to capture the system information by fractional exclusive statistics (FES) <cit.>.
In what follows, we embark on the analytical derivation of the singular behaviour of the thermodynamic properties, involving solely the charge quasimomenta k and the length-1 Λ strings.
Without losing generality, we first consider the dressed energy equations for the magnetized phase IV
κ(k) = -2 cos k-μ-2 u-B-T ∫_-∞^∞ dΛ a_1(sin k-Λ) ln(1+e^-ε_1(Λ)/T),
ε_1(Λ) = 2 B -T∫_-π^π dk cos k a_1(sin k-Λ) ln(1+e^-κ(k)/T)
+T ∫_-∞^∞ dΛ^' a_2(Λ-Λ^') ln(1+e^-ε_1(Λ^')/T),
where a_n(x)=1/2 π2 n u/(n u)^2 + x^2.
There are five phase transitions in total, I-II, II-III, III-V, II-IV and IV-V, of which the latter four can be treated uniformly around the quadruple point.
I-II phase transition: We first study the phase transition from the empty lattice phase I to the partially filled phase II in the absence of down-spins.
For this transition, the phase boundary is simply expressed as 2+2u+B_c+μ_c=0, where B_c and μ_c denote the critical fields.
Thus the charge dressed energy is given by κ(k)=-2cos k-μ-2u-B, showing free fermions on a 1D lattice.
This gives the free energy f=u+T^3/2/2π^1/2Li_3/2(-e^(Δ B+Δμ)/T).
Using fundamental thermodynamic relations, we directly obtain the first and second order thermodynamic quantities
m = n/2=-T^1/2/4π^1/2f_1/2,
χ_s = χ_c/2=-T^-1/2/4π^1/2f_-1/2,
c_v/T = -3T^-1/2/8π^1/2f_3/2+T^-3/2(Δ B+Δμ)/2π^1/2f_1/2-T^-5/2(Δ B+Δμ)^2/2π^1/2f_-1/2
with f_n=Li_n(-e^(Δ B+Δμ)/T).
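These scaling forms are straightforward to evaluate numerically; below is a minimal sketch using mpmath's polylog (the chosen values of ΔB, Δμ and T are illustrative assumptions):

```python
from mpmath import polylog, exp, sqrt, pi, mp

mp.dps = 20  # working precision

def f_n(n, dB, dmu, T):
    """f_n = Li_n(-e^{(dB+dmu)/T}) entering the I-II scaling forms."""
    return polylog(n, -exp((dB + dmu) / T))

def magnetization(dB, dmu, T):
    # m = n/2 = -(T^{1/2}/(4 pi^{1/2})) f_{1/2}
    return -sqrt(T) / (4 * sqrt(pi)) * f_n(0.5, dB, dmu, T)

def spin_susceptibility(dB, dmu, T):
    # chi_s = chi_c/2 = -(T^{-1/2}/(4 pi^{1/2})) f_{-1/2}
    return -f_n(-0.5, dB, dmu, T) / (4 * sqrt(pi) * sqrt(T))

# illustrative point slightly inside phase II at low temperature
print(magnetization(0.02, 0.0, 0.005), spin_susceptibility(0.02, 0.0, 0.005))
```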
For the remaining four phase transitions, what we need to prepare before obtaining the scaling forms is to express the dressed energy equations and the free energy in terms of polylog functions.
Under the assumption n→1, n_↓→0, we expand the kernel functions around k=π and Λ=0, and the coupled equations become
κ(k) = -2 cosk+2(1/π u^3I_1-6/π u^5I_2 )sin^2k+( -μ-2u-B-2/π uI_1+2/π u^3I_2)
= -2cosk+2C_1sin^2k+C_2,
ε_1(Λ) = Λ^2(η_1+2J_1/π u^3-12J_2/π u^5-I_1/4π u^3+3I_2/8π u^5)
+(2B+4(u-√(1+u^2))-2J_1/π u+2J_2/π u^3+I_1/π u-I_2/4π u^3)
= D_1Λ^2+D_2 ,
where the functions C_1, C_2, D_1, D_2 denote the corresponding coefficients
C_1 = 1/π u^3I_1-6/π u^5I_2,
C_2 = -μ-2u-B-2/π uI_1+2/π u^3I_2,
D_1 = η_1+2J_1/π u^3-12J_2/π u^5-I_1/4π u^3+3I_2/8π u^5,
D_2 = 2B+4(u-√(1+u^2))-2J_1/π u+2J_2/π u^3+I_1/π u-I_2/4π u^3.
while the integrals (I_1,I_2) and (J_1,J_2) are related to the spin and charge degrees of freedom, respectively, and receive contributions only near the expansion points.
Integrating by parts, the formal solutions in terms of polylog functions read
I_1 = ∫_0^∞ dΛ T ln(1+e^-ε_1(Λ)/T)=-T^3/2π^1/2/2D^1/2_1Li_3/2(-e^-D_2/T),
I_2 = ∫_0^∞ dΛ Λ^2 T ln(1+e^-ε_1(Λ)/T)=-T^5/2π^1/2/4D^3/2_1Li_5/2(-e^-D_2/T),
J_1 = T∫_0^π dk cos k ln(1+e^κ(k)/T)
= T^3/2/√(1-2C_1)Γ(3/2)Li_3/2(-e^(2+C_2)/T)-T^5/2/8(1-2C_1)^5/2Γ(5/2)Li_5/2(-e^(2+C_2)/T),
J_2 = T∫_0^π dk cos k sin^2 k ln(1+e^κ(k)/T)=T^5/2/3(1-2C_1)^3/2Γ(5/2)Li_5/2(-e^(2+C_2)/T).
With the help of the presentations of these four integrals, the Gibbs free energy is given by
f=-μ-u-B-2λ_1I_1-λ_2I_2+1/πJ_1+1/2πJ_2.
Although we have greatly simplified the dressed equations, the four integrals (I_1,I_2) and (J_1,J_2) are still coupled to each other.
These four quantities are intertwined, and one needs to carefully distinguish the primary and secondary contributions according to the critical field conditions.
Based on the above equations (<ref>)-(<ref>), we proceed to evaluate critical behaviours from three aspects: critical fields, polylog functions and free energy.
The polylog functions contain the sources of criticality, from which we can derive the scaling function of the free energy for quantum critical region.
For the part that causes criticality, we take expansions under the limit of T≫Δ B or T≫Δμ.
II-III phase transition: For the phase transition II-III, the critical field is determined by 2-2u-B_c-μ_c=0 and ε_1(Λ) is gapped.
The free energy is given by
f=-μ-u-B+T^3/2/2π^1/2Li_3/2(-e^(-Δ B-Δμ)/T),
leading to the scaling forms of the thermodynamics
m = n/2=1/2+T^1/2/4π^1/2f_1/2,
χ_s = χ_c/2=-T^-1/2/4π^1/2f_-1/2,
C_v/T = -3T^-1/2/8π^1/2f_3/2-T^-3/2(Δ B+Δμ)/2π^1/2f_1/2-T^-5/2(Δ B+Δμ)^2/2π^1/2f_-1/2
with f_n=Li_n(-e^(-Δ B-Δμ)/T).
These present universal scaling behaviour for the charge-gapped phase transition on a lattice.
The situation is very similar to that of the I-II transition, where only the charge sector exists.
III-V phase transition: For the phase transition III-V, the emergence of spin degree of freedom on a half-filled lattice generates the criticality with respect to the critical field B_c=2√(1+u^2)-2u, independent of chemical potential.
This gives the integral I_1≈-T^3/2π^1/2/2η^1/2_1Li_3/2(-e^-2Δ B/T).
In this case κ(k) is always less than zero in charge momentum space.
For the charge degree of freedom, |2+C_2|≫ T; thus the magnitude of J_1 with respect to temperature can be evaluated as
J_1=1/2π^1/2T^3/2Li_3/2(-e^(2+C_2)/T)≈1/2π^1/2T^3/2[-e^(2+C_2)/T+1/2^3/2e^2(2+C_2)/T⋯]≪1/2π^1/2T^3/2≈ I_1.
For this reason, J_1,J_2 can be safely neglected, equivalent to the case of XXX spin chain.
Whereas for the spin degrees of freedom, the term I_1 is relevant to the low-temperature criticality; in contrast, the integral I_2 carries a higher power of the temperature than I_1.
Under this circumstance, we determine the free energy near the phase transition III-V as
f=-μ-u-B+T^3/2π^1/2λ_1/η^1/2_1Li_3/2(-e^-2Δ B/T).
Using the standard thermodynamic relation,
we give the following forms of scaling functions for density, magnetization, compressibility, susceptibility and specific heat
n = 1, m=1/2+T^1/2π^1/2λ_1/η^1/2_1f_1/2,
χ_c = 0, χ_s=-2T^-1/2π^1/2λ_1/η^1/2_1f_-1/2,
C_v/T = -3T^-1/2π^1/2λ_1/4η^1/2_1f_3/2-2T^-3/2π^1/2λ_1Δ B/η^1/2_1f_1/2-4T^-5/2π^1/2λ_1Δ B^2/η^1/2_1f_-1/2
with f_n=Li_n(-e^-2Δ B/T).
II-IV phase transition: The quantum phase transition II-IV occurs at ε_1(0)=0 with charge dispersion κ(k)=-2cos k-μ-2u-B.
Let Q denote the Fermi point of κ(k), which satisfies 2cos Q=-μ-2u-B.
Via (<ref>), corresponding to κ(Q)=0, the critical fields are obtained as
B_c = 2√(1+u^2)-2u-2/3π u(π-Q)^3,
μ_c = -2cos Q-2u-B_c=2-2√(1+u^2)-(π-Q)^2+2/3π u(π-Q)^3.
From (<ref>) and (<ref>) at criticality, we observe that the term C_1 is a small quantity, while (2+C_2)/T=κ(π)/T in the polylog functions in J_1, J_2 is a large quantity.
Using the property of the polylog function
Li_p(-e^w) ⟶ -2∑_k=0^∞η(2k)w^p-2k/Γ(p-2k+1)+O(e^-w) for w≫1,
we have
J_1 ≈ 1/2(1+C_1)[-4/3(κ(π))^3/2-π^2T^2/6(1/κ(π))^1/2]
+3/32(1+5C_1)[8/15(κ(π))^5/2+π^2T^2/3(κ(π))^1/2],
J_2 ≈ -1/4(1+3C_1+15/2C^2_1)[8/15(κ(π))^5/2+π^2T^2/3(κ(π))^1/2].
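The asymptotic property (<ref>) is easy to verify numerically; a small sketch with mpmath, where altzeta is the Dirichlet eta function η(s) and the truncation order kmax is our own choice:

```python
from mpmath import polylog, exp, altzeta, gamma, mp

mp.dps = 25

def li_asymptotic(p, w, kmax=3):
    """Truncated expansion -2 sum_k eta(2k) w^{p-2k} / Gamma(p-2k+1)
    of Li_p(-e^w), valid for w >> 1 up to O(e^{-w}) corrections."""
    return -2 * sum(altzeta(2 * k) * w ** (p - 2 * k) / gamma(p - 2 * k + 1)
                    for k in range(kmax + 1))

p, w = 1.5, 12.0
print(polylog(p, -exp(w)))   # direct evaluation
print(li_asymptotic(p, w))   # agreement improves rapidly as w grows
```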
Meanwhile, I_1 and I_2 are of orders O(T^3/2) and O(T^5/2), respectively.
Therefore we may reasonably ignore the effect of I_2.
The leading term I_1 is given by
I_1≈-T^3/2π^1/2/2D^1/2_1Li_3/2(-e^-D_2/T).
In the above equations (<ref>) and (<ref>), we have an approximation C_1≈1/π u^3I_1,C_2≈-μ-2u-B-2/π uI_1.
It follows that
κ(π) = 2+C_2=2-μ-2u-B-2/π uI_1=2+2cos Q-2/π uI_1=β_1-2/π uI_1,
where β_1=2-μ-2u-B, as defined previously.
Following these approximations, we rewrite the functions J_1 and J_2 as
J_1 ≈ -2/3β_1^3/2+2β_1^1/2/π uI_1-π^2T^2/12β_1^1/2-2β_1^3/2/3π u^3I_1+1/20β_1^5/2-β_1^3/2/4π uI_1+π^2T^2β_1^1/2/32,
J_2 ≈ -2/15β_1^5/2+2β_1^3/2/3π uI_1-π^2T^2β_1^1/2/12.
Substituting (<ref>) into the definition of D_2 in (<ref>), which reflects the distance away from the QCP,
D_2 can then be represented in terms of Δ B, Δμ as
Δ t≡:D_2 ≈ 2B+4(u-√(1+u^2))-2J_1/π u≈2B+4(u-√(1+u^2))+4/3π uβ_1^3/2
= 2B+4(u-√(1+u^2))+4/3π u(β_1c^3/2-3β_1c^1/2/2(Δ B+Δμ))-2B_c+2B_c
= 2(1-β_1c^1/2/π u)Δ B-2β_1c^1/2/π uΔμ,
where β_1c=2-μ_c-2u-B_c.
In the above equation, we only inserted the leading term -2/3β_1^3/2 of J_1, which is independent of T.
Meanwhile J_2 has been neglected and the identity 2-β_1-2u-B-μ=0 has been used.
Substituting the results of J_1, J_2, I_1 into the free energy (<ref>), we have
f≈-μ-u-B-2/3πβ_1^3/2-1/60πβ_1^5/2-π T^2/12β_1^1/2-π T^2β_1^1/2/96+T^3/2π^1/2a_0/η^1/2_1Li_3/2(-e^-Δ t/T),
here a_0=λ_1-β_1^1/2/π^2 u+β_1^3/2(2λ_1/3π u^3η_1-1/24π^2 u+1/3π^2 u^3) and Δ t=2(1-β_1c^1/2/π u)Δ B-2β_1c^1/2/π uΔμ.
Taking derivatives of free energy with respect to B, μ and T, we obtain the following scaling forms of the thermal and magnetic properties
n = 1-β_1^1/2/π-β_1^3/2/24π-2 T^1/2β_1c^1/2 a_0/π^1/2uη^1/2_1f_1/2,
m = 1/2-β_1^1/2/2π-β_1^3/2/48π+ T^1/2(1-β_1c^1/2/π u)π^1/2a_0/η^1/2_1f_1/2,
χ_c = 1/2πβ_1^1/2+β_1^1/2/16π-4 T^-1/2β_1ca_0/π^3/2u^2η^1/2_1f_-1/2,
χ_s = 1/4πβ_1^1/2+β_1^1/2/32π-2 T^-1/2(1-β_1c^1/2/π u)^2π^1/2a_0/η^1/2_1f_-1/2,
C_v/T = π/6β_1^1/2+πβ_1^1/2/48-3 T^-1/2π^1/2a_0/4η^1/2_1f_3/2-2 T^-3/2π^1/2a_0/η^1/2_1[(1-β_1c^1/2/π u)Δ B-β_1c^1/2/π uΔμ]f_1/2
- 4T^-5/2π^1/2a_0/η^1/2_1[(1-β_1c^1/2/π u)Δ B-β_1c^1/2/π uΔμ]^2f_-1/2
with f_n=Li_n(-e^-Δ t/T).
It is further noticed from (<ref>) that the background of the free energy involves two types of terms: the temperature-dependent terms -π T^2/12β_1^1/2-π T^2β_1^1/2/96 come from the contribution of the TLL, whereas the remaining terms -μ-u-B-2/3πβ_1^3/2-1/60πβ_1^5/2 are equivalent to the energy of non-interacting electrons on a lattice.
For simplicity and without loss of generality, we only keep the order β_1^3/2 in the free energy.
For the 1D repulsive Hubbard model, zero-temperature background is given by
f_H≈-μ-u-B-2/3πβ_1^3/2≈β_1-2-2/3πβ_1^3/2.
While the ground state free energy of a non-interacting lattice system is given by
f_NI=-1/π[∑_↑,↓2sin(k_F,σ)+μ_σsin(k_F,σ)],
in which k_F,σ=arccos(-μ_σ/2)=π n_σ, μ_↑=μ+B, μ_↓=μ-B.
In the high-density limit of the spin-polarized phase, these two free energies (<ref>) and (<ref>) are equivalent:
f_NI≈-2/π(β_1^1/2-1/6β_1^3/2)-(μ+B)n ≈-2/π(β_1^1/2-1/6β_1^3/2)-(2-β_1)(1-β_1^1/2/π)=f_H.
In light of quantum criticality, we refer to the phase II as the background.
Thus the terms that do not contribute to the singular part in thermodynamic quantities can be readily recognised, namely,
χ^II_c = 1/2πβ_1^1/2+β_1^1/2/16π,
χ^II_s = 1/4πβ_1^1/2+β_1^1/2/32π,
C^II_v/T = π/6β_1^1/2+πβ_1^1/2/48.
These background results are in agreement with (<ref>) with δ=β_1^1/2(1+1/24β_1), where β_1=2-μ-2u-B ≈(π-Q)^2-1/12(π-Q)^4=δ^2-1/12δ^4.
Furthermore the Wilson ratios for the phase II are given by
R^χ_s_w=4π^2/3χ_s/C_v/T≈2, R^χ_c_w=π^2/3χ_c/C_v/T≈ 1.
IV-V phase transition: The phase transition from IV to V displays a novel subtlety of quantum criticality in the charge sector.
At this phase transition, the charge degree of freedom becomes gapped and the dressed energy κ(π) approaches zero.
We can directly use the dressed energy equations with the help of (<ref>) and κ(π)=0 under the condition δ=0, giving the critical fields
μ_c = 2-2√(1+u^2)+1/(1+u^2)^3/2A^2_c(1-2/π uA_c),
B_c = -2u+2√(1+u^2)-1/(1+u^2)^3/2A^2_c(1+2/3π uA_c).
From (<ref>) and (<ref>), it is convenient to represent A_c in terms of B_c
A_c^2=-β_2/η_1-2/3π u(-β_2)^3/2/η^3/2_1+2/3π^2 u^2(-β_2)^2/η^2_1,
with η_1=2/(1+u^2)^3/2. Since the criticality is induced by the charge degree of freedom, precisely opposite to the transition from phase II to IV, we rewrite (<ref>) as
2B+4(u-√(1+u^2)) +2η_1 /3π uA^3 +η_1 A^2=0.
This equation represents the boundary condition. Using the fact that -D_2 at zero temperature is approximated by η_1A^2, we get the main contribution I_1=2η_1A^3/3 of the first iteration. Comparing this equation with the boundary condition and (<ref>), (<ref>), up to the leading contribution,
we have -D_2 ≈η_1A^2+2J_1/(π u), which is the leading order in the arguments of the polylog functions in I_1 and I_2.
The integrals I_1 and I_2 serve as the background contributions resulting from the spin degrees of freedom of the TLL.
Up to the leading order of the temperature contributions and using the approximation (<ref>), we obtain the function for the second iteration
I_1 ≈ 2/3η_1A^3+π^2T^2/12η_1A+2A/π uJ_1-2A^3/3π u^3J_1+π T^2/144u^3η_1A^2,
I_2 ≈ 2/15η_1A^5+π^2T^2/12η_1A+2A^3/3π uJ_1.
The term 2+C_2 in the charge dressed energy determines the criticality in the charge degrees of freedom.
Substituting the above expressions of I_1 and I_2 into the C_2 in (<ref>) and using the critical fields (<ref>) and (<ref>), we give
Δ t≡: 2+C_2=-Δμ-(1-4A_c/π u+A_c)Δ B.
Subsequently, the scaling function J_1 is given by
J_1=1/2T^3/2π^1/2(1+2η_1/3π u^3A^3)Li_3/2(-e^Δ t/T).
Using the leading order contribution in J_1 and I_1, I_2, we finally obtain the following scaling form of the Gibbs free energy
f≈-μ-u-B-4/3λ_1η_1A^3-π^2 T^2λ_1/6η_1A-2/15λ_2η_1A^5-π^2 T^2λ_2/12η_1A+T^3/2b_0/2π^1/2Li_3/2(-e^Δ t/T)
with b_0=1-4λ_1/uA+(η_1/π+2λ_1-u^2λ_2)2A^3/3u^3. Using the thermodynamic relations, the scaling forms of thermal and magnetic properties are given explicitly by
n = 1+ T^1/2b_0/2π^1/2f_1/2,
m = 1/2-2λ_1 A/1+A/π u-1/3λ_2A^3+ T^1/2(1-4A_c/π u+A_c)b_0/4π^1/2f_1/2,
χ_c = - T^-1/2b_0/2π^1/2f_-1/2,
χ_s = 2λ_1 /η_1A(1+A/π u)^3+λ_2/η_1A/1+A/π u- T^-1/2(1-4A_c/π u+A_c)^2b_0/4π^1/2f_-1/2,
C_v/T = π^2 λ_1/3η_1A+π^2 λ_2A/6η_1-3 T^-1/2b_0/8π^1/2f_3/2- T^-3/2b_0/2π^1/2[Δμ+(1-4A_c/π u+A_c)Δ B]f_1/2
- T^-5/2b_0/2π^1/2[Δμ+(1-4A_c/π u+A_c)Δ B]^2f_-1/2,
where we denoted
f_n=Li_n(-e^Δ t/T).
Moreover, we observe that the phase V represents the source of the background for the spin degrees of freedom.
Thus we may easily identify the contributions from Mott phase V in the above scaling functions, namely
χ^V_c = 0,
χ^V_s = 2λ_1 /η_1A(1+A/π u)^3+λ_2/η_1A/1+A/π u,
C^V_v/T = π^2 λ_1/3η_1A+π^2 λ_2A/6η_1.
Comparing above results with equations (<ref>) to (<ref>), we find these results consistent with each other.
Consequently, the Wilson ratios are given by
R^χ_s_w=4π^2/3χ_s/C_v/T≈8/(1+A/π u)^3+8λ_2A^3/π uλ_1, R^χ_c_w=π^2/3χ_c/C_v/T=0,
showing the nature of the TLL at low temperature.
In figure <ref> (a) (b) and figure <ref> (c) (d), we plot universal scaling behaviour of electron density and magnetization, spin and charge susceptibilities as well as specific heat for phase transitions II-IV and IV-V, respectively.
They show that analytic expressions of their scaling functions (<ref>)-(<ref>) and (<ref>)-(<ref>) are in good agreement with numerical results obtained from the TBA equations.
We note that in figure <ref> (a), (c) all lines at different temperatures intersect at the QCP, while figure <ref> (b), (d) show that the scaling functions become invariant when plotted in terms of Δ B/T.
It is essential to note that all these scaling functions read off the dynamical exponent z=2 and correlation critical exponent ν=1/2, for example, the susceptibility (<ref>), also see <cit.>.
In the critical region, the polylog function represents the free-fermion type of generating function associated with the dynamical critical exponent z=2 and the correlation critical exponent ν=1/2 <cit.>.
In the zero temperature limit, the singular part of the susceptibility can be expressed as χ^s_s∝ T^-1/2(Δ B/T)^-1/2=(Δ B)^-1/2.
Regarding the definition of the critical exponent γ through the general form χ∝ (g-g_c)^-γ, where g is the driving parameter, we find γ=1/2 for the second-order derivatives of the free energy.
On the other hand, the correlation length can be expressed as ξ∝ T^-1/z.
In summary, at the critical point Δ B=0, we also find that thermodynamical properties C_v/T, χ_s, ξ satisfy the following scaling laws
C_v/T∝ T^d-z/z, χ_s∝ T^-γ/vz, ξ∝ T^-1/z,
which signify the non-Fermi liquid behaviour at QCP<cit.>.
§.§ III.6 Universal scaling functions at quantum criticality
In the previous subsection, we presented some analytic results for each phase transition.
We observe that the coefficients a_0 in (<ref>) and b_0 in (<ref>) of free energies solely rely on the root densities (<ref>) and (<ref>) with a_0≈σ(0),b_0≈2πρ(π).
Significantly, it is found that the free energies are related to densities and dressed energies in compact forms for different phase transitions
I-II: f = u+T^3/2π^1/2ρ(0)(κ^”(0)/2)^-1/2Li_3/2(-e^-κ(0)/T),
II-III: f = f_0+T^3/2π^1/2ρ(π)(-κ^”(π)/2)^-1/2Li_3/2(-e^κ(π)/T),
V-III: f = f_0+T^3/2π^1/2σ_1(0)(ε^”_1(0)/2)^-1/2Li_3/2(-e^-ε_1(0)/T),
II-IV: f = f_0-π T^2/6v_c+T^3/2π^1/2σ_1(0)(ε^”_1(0)/2)^-1/2Li_3/2(-e^-ε_1(0)/T),
V-IV: f = f_0-π T^2/6v_s+T^3/2π^1/2ρ(π)(-κ^”(π)/2)^-1/2Li_3/2(-e^κ(π)/T),
where f_0 comes from the ground state, while the terms with T^2 reflect the contributions from the background parts.
σ_1(0) denotes the density of length-1 spin strings at Λ=0, the second derivative ε''_1(0) ≡ d^2ε_1/dΛ^2|_Λ=0, and ρ(0), ρ(π) denote the charge density at k=0, π, respectively.
Similarly, for the charge dressed energy, κ''(0) ≡ d^2κ/dk^2|_k=0 and κ''(π) ≡ d^2κ/dk^2|_k=π.
The polylog function Li_3/2 represents the generating function of free fermion criticality.
The above scaling functions of the free energy for different quantum phase transitions are valid for arbitrary interaction strengths and fillings, revealing a microscopic origin of the quantum phase transitions associated with the dressed energies.
The functions -ε_1(0), -κ(0), κ(π) serve as the sources of criticality and
depend on the energy gaps away from the QCPs, i.e.
-ε_1(0), -κ(0), κ(π) ≈ α_BΔ B+α_μΔμ+α_uΔ u.
The factors α_(B,μ,u) represent the different transition paths in the vicinities of QCPs driven by external fields.
These expressions (<ref>)-(<ref>) display concise and elegant forms independent of specific details for arbitrary filling and interaction strength, and can apply to other models with second-order phase transitions associated with the dynamical critical exponent z=2 and correlation length exponent ν=1/2.
The derivations for these formulas are given in <cit.>.
§ IV. SPIN INCOHERENT LIQUID
Although the spin incoherent Luttinger liquid has been studied in the literature <cit.>, almost all of those works are based on the framework of bosonization.
A study of this novel phenomenon from the Bethe ansatz perspective is still lacking.
In Section II, we used the variations of η-pair and spin magnetizations (Δη ^z, Δ S^z) to characterize the fractional charge and spinon excitations. Such fractionalized quasi-particles reveal fermionic nature of quasiparticles, forming the Luttinger liquid.
Meanwhile we identified the only possible fractional spin excitations which can lead to the spin incoherent liquid at low temperatures.
Besides the fractional excitations, here we present rigorous results of the SILL in terms of specific heat, criticality and correlation function.
§.§ IV.1 Thermodynamics in SILL
In the previous analysis given in section <ref>, we observe that a crossover region fanning out from the critical point does show the existence of the SILL above the phase boundary of the TLL and up to a critical temperature.
This phenomenon can be revealed through thermodynamic quantities of the model, such as specific heat.
We now distinguish the energy scales of the SILL from those of the TLL and quantum criticality.
We first analyze the variable ε_1(0) that emerges in the polylog functions of the II-IV transition in (<ref>),
ε_1(0)=2B-∫_-π^πḳcos ka_1(sin k-Λ)Tln(1+e^-κ(k)/T)|_Λ=0,
where ε_1(0)=0, ε_1(0)<0 and ε_1(0)>0 at the exact critical point, in phase IV and in phase II, respectively, representing the distance away from the phase boundary II-IV. In this case the Fermi point of the charge sector, κ(Q)=0, gives 2cos Q=-μ-2u-B, leading to κ(k)=-2cos k+2cos Q. Thus ε_1(0) at zero temperature can be simplified as
ε_1(0)=2B-4u/π∫_0^Qdkcos k/u^2+sin^2k(cos k-cos Q)+O(T^2)
with the temperature term omitted. Further performing a Taylor expansion around the critical point B_c for fixed μ and u, we have
Q≈ Q_c+1/2sin Q_cΔ B
with Q_c=arccos(-1/2(μ_c+2u_c+B_c)).
After some algebra, the quantity ε_1(0) in the vicinity of the critical point of phase transition II-IV can be found to be given by
ε_1(0) ≈ 2B-4u/π∫_0^Q_cdkcos k/u^2+sin^2k(cos k-cos Q_c+sin Q_c(Q-Q_c))
≈ 2Δ B-2u/π∫_0^Q_cdkcos k/u^2+sin^2kΔ B
≈ 2Δ B[1-1/πarctan(sin Q_c/u)],
where the second line uses the condition ε_1(0)|_B,Q_c=2Δ B together with the limit ε_1(0)|_B_c,Q_c=0. Therefore α_B=-2[1-arctan(sin Q_c/u)/π] is obtained from the definition -ε_1(0)≡:α_B Δ B.
On the other hand, in the quantum critical region, heat capacity satisfies the universal scaling form
C_v/T = c_0+c_1T^-1/2[3/4Li_3/2(-e^x)- xLi_1/2(-e^x) + x^2Li_-1/2(-e^x)]+O((Δ B/T)^5/2),
where c_0 is the zero temperature background, c_1 a coefficient depending on the transition point, explicitly see (<ref>), and x=-ε_1(0)/T=α_B Δ B/T.
A brief discussion about interaction-driven case is given in <cit.>.
One characteristic of heat capacity is that it displays bimodal structures around QCP. Thus it is efficient to mark the QC boundaries in terms of the maxima points given by heat capacity.
The local maxima can be determined by ∂ C_v/∂ B=0, i.e.
1/4Li_1/2(-e^x)- xLi_-1/2(-e^x) - x^2Li_-3/2(-e^x)=0,
which gives two solutions x_1=-1.5629, x_2=3.6205.
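For reference, a minimal numerical sketch that reproduces these two roots with mpmath (the starting points of the root search are our own choices):

```python
from mpmath import polylog, exp, findroot, mp

mp.dps = 20

def g(x):
    """Condition dC_v/dB = 0 written in terms of x = -eps_1(0)/T."""
    z = -exp(x)
    return (polylog(0.5, z) / 4 - x * polylog(-0.5, z)
            - x ** 2 * polylog(-1.5, z))

print(findroot(g, -1.5))   # x_1 ~ -1.5629
print(findroot(g, 3.5))    # x_2 ~  3.6205
```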
Figure <ref> (a) shows an overall behavior of quantum criticality in the vicinity of the IV-II phase transition in the T-B-C_v coordinate.
The blue circled symbols denote the maxima of the specific heat from the analytical result (<ref>), showing good agreement with the numerical TBA equations.
In this critical region, T≫Δ B= B-B_c, other thermodynamic properties also show universal scaling behaviour given by equations (<ref>)-(<ref>).
As temperature decreases gradually from the QC part to a certain extent, TLL regions appear, see the areas below the red lines in figure <ref> (a).
Around the critical point B_c=0.55, there emerge two TLLs. In the region with B<B_c, denoted as TLL_SC, the system lies in phase IV with spin and charge degrees of freedom coexisting. For B>B_c, the system lies in phase II with only the charge degree of freedom, denoted as TLL_C.
In the TLL region, the specific heat C_v is linearly dependent on T, also see discussion in Section III.
In the crossover region between QC and the TLL_C phase, the spin sector is gapped.
By utilizing the asymptotic behaviour of polylog function <cit.> and expanding (<ref>), the specific heat is given by
C_v≈π T/3v_c+3π^1/2a_0/4(η_1)^-1/2T^1/2e^α_BΔ B/T+O(Δ Be^α_BΔ B/T).
By comparison, in the crossover region between QC and the TLL_SC phase, the SILL lies in the temperature range E_s∼ k_Fv_s≪ k_BT≪ E_c∼ k_Fv_c.
This area corresponds to SILL (spin-incoherent Luttinger liquid) <cit.> with specific heat
C_v≈π T/3v_c+π^2a_0(η_1)^-1/2(-ε_1(0))^-1/2T/3[1+21π^2/40(-ε_1(0))^-2T^2]+O(T^4),
which manifests a gas-liquid co-existence.
We note that the coefficient before the square bracket should be π T/(3v_s); the proof is given in the following.
The spin dressed energy around QCP is written as ε_1(Λ)=D_1Λ^2+D_2. Thus for the SILL region, the spin Fermi point A is related with ε_1(A)=D_1A^2+D_2=0, resulting in A=(-D_2/D_1)^1/2. On the other hand, by the definition of D_1,2 given in (<ref>) and (<ref>), we have D_1=ε^”_1(0)/2=η_1,D_2=ε_1(0).
Therefore the Fermi point A is expressed as A=(-ε_1(0)/η_1)^1/2. Using the definition of a_0, see (<ref>),
we have a_0=σ_1(0)≈σ_1(A) and ε_1^'(A)=2η_1A, which renders
π^2a_0(η_1)^-1/2(-ε_1(0))^-1/2T/3
= π^2σ_1(A)T(η_1)^1/2/31/η_1(-ε_1(0))^1/2
= π^2σ_1(A)T/3η_1A=π·2πσ_1(A)T/3ε_1^'(A)
= π T/3v_s.
This immediately gives a universal thermodynamic relation of the SILL
C_v≈π T/3( 1/v_c + 1/v_s) +7π^3T^3/40 v_s(-ε_1(0))^2 +O(T^4).
Furthermore, in figure <ref> (b) and (c), we plot the specific heat below and above the QCP for different values of magnetic fields.
One clearly sees the region of linear-temperature-dependent specific heat, and a crossover region of the SILL with both linear- and cubic-in-temperature contributions to the specific heat.
The latter marks the crossover region E_s∼ k_Fv_s≪ k_BT≪ E_c∼ k_Fv_c, showing the thermodynamic behaviour of the SILL.
§.§ IV.2 Correlation functions in SILL
The SILL regime with k_Fv_s<k_BT≪ k_Fv_c lies between the boundaries of TLL and QC.
In this region, the spin degree of freedom behaves like hot spins, whereas the charge still behaves as a collective motion of bosons.
As a result, the SILL largely behaves like a spin and charge decohered liquid, i.e., possessing solely a propagating charge mode but not a spin mode, see <cit.>.
In the spin sector, the magnetic exchange energy is lower than the Fermi energy, resulting in spin thermally excited with equal probability <cit.>.
In the strong coupling regime, the spin degrees of freedom show the spin dynamics of a Heisenberg chain with a nondispersive spinon band, due to the small effective exchange coupling J=4t^2/U.
Meanwhile the charge acts as noninteracting fermions with the dispersive spectrum κ(k)=-2tcos(k).
This SILL theory can also be captured in the excitation spectrum in the low density regime in figure <ref> (b).
The concept of SILL is helpful to explain the appearance of conductance plateau in a quantum wire<cit.>.
When interaction increases or in the vicinity of QCP, the spin velocity progressively dwindles to zero. This means that the spin sector loses dynamics,
indicating the emergence of the SILL.
In this regime E_s≪ T≪ E_c, the spin sector is fully thermally averaged and equally excited, whereas the charge sector remains in its low-energy regime, rendering the correlation functions independent of temperature <cit.>.
We note that the TLL theory has its own applicable condition T≪ E_c, E_s.
Whereas for the region E_s≪ T≪ E_c, the energy scales of charge and spin degrees of freedom can be dealt with separately <cit.>.
Under such a circumstance, the finite-temperature correlation functions in terms of conformal field theory <cit.> still remain valid for the charge and spin degrees of freedom operating with different limits, i.e.
|x±iv_ct|≪ v_c/T, |x±iv_st|≫ v_s/T.
This is essential to capture the asymptotic behaviour of the SILL.
Here we would like to mention that the remaining temperature term in the spin sector can be replaced by the typical energy scale of spin T∼ E_s∼ J∼(k_F↑+k_F↓)/2· v_s≡ k_Fv_s.
With the help of the finite temperature asymptotics of the correlation functions under the condition (<ref>), the two-point correlation functions of primary fields can be obtained as
⟨ϕ(x,t)ϕ(0,0)⟩=∑ A(D_c,D_s,N^±_c,N^±_s)exp(-2iD_ck_F,↑x)exp(-2i(D_c+D_s)k_F,↓x)
×1/(x-iv_ct)^2Δ^+_c(x+iv_ct)^2Δ^-_c(2πα k_F)^2Δ^+_s+2Δ^-_se^-πα(2Δ^+_s+2Δ^-_s) k_Fx,
where the conformal dimensions in gapless phases read
2Δ^±_c(ΔN,D) = (Z_ccD_c+Z_scD_s±Z_ssΔ N_c-Z_csΔ N_s/2detZ)^2+2N^±_c,
2Δ^±_s(ΔN,D) = (Z_csD_c+Z_ssD_s±Z_ccΔ N_s-Z_scΔ N_c/2detZ)^2+2N^±_s.
Here Z is the dressed charge matrix given in (<ref>), while N_α^±, ΔN⃗, D⃗ are related to the three types of excitations: adding particles at the left and right Fermi points, changing the total particle number, and displacing the particle center, see also <cit.>.
From equation (<ref>), the spin-spin correlation in the spin sector displays exponential decay as a function of distance, while the correlation in the charge sector shows a power-law decay.
Explicitly, based on the correlation functions obtained from the finite temperature CFT, we may calculate various two-point correlations of field operators close to critical field B_c in SILL regime, including the single particle Green's function
* G^↑
G^↑_B→ B_c∼exp(-ik_F,↑x)/(x-iv_ct)^1-2/π√(1-B/B_c)(2πα k_F)^1/2-1/π√(1-B/B_c)e^-πα(1/2-1/π√(1-B/B_c)) k_Fx+h.c..
This result was derived for the region close to the critical field B_c, corresponding to the phase transition II-IV. Similarly, for other correlations
* G^↓
G^↓_B→ B_c ∼ exp(-ik_F,↓x)/(x-iv_ct)^1/4+1/π√(1-B/B_c)(x+iv_ct)^1/4-1/π√(1-B/B_c)
×(2πα k_F)^1-2/π√(1-B/B_c)e^-πα(1-2/π√(1-B/B_c)) k_Fx+h.c..
* G^n
G^n_B→ B_c ∼ n^2 +(exp(2ik_F,↑x)+exp(-2ik_F,↑x))1/|x-iv_ct|^2-8/π√(1-B/B_c)
×(2πα k_F)^2-4/π√(1-B/B_c)e^-πα(2-4/π√(1-B/B_c)) k_Fx
+(exp(2ik_F,↓x)+exp(-2ik_F,↓x))(2πα k_F)^2-4/π√(1-B/B_c)e^-πα(2-4/π√(1-B/B_c)) k_Fx
+(exp(2i(k_F,↑+k_F,↓)x)+exp(-2i(k_F,↑+k_F,↓)x))1/|x-iv_ct|^2.
* G^⊥
G^⊥_B→ B_c ∼ (exp(i(k_F,↑+k_F,↓)x)+exp(-i(k_F,↑+k_F,↓)x))1/|x-iv_ct|^1/2
×(2πα k_F)^1/2+1/π√(1-B/B_c)e^-πα(1/2+1/π√(1-B/B_c)) k_Fx.
* G^p
G^p _B→ B_c ∼ exp(-i(k_F,↑+k_F,↓)x)1/(x-iv_ct)^9/4(x+iv_ct)^1/4
×(2πα k_F)^1/2-3/π√(1-B/B_c)e^-πα(1/2-3/π√(1-B/B_c)) k_Fx+h.c..
It should be noticed that although these results are obtained in a very rough approximation, they capture the essential features of the SILL <cit.>, i.e.,
spin-spin correlation decays exponentially while the correlation in the charge degree of freedom behaves like spinless noninteracting fermions.
The field theory approach was given in <cit.>.
The reason why the crossover in the vicinity of the phase transition between IV and V cannot develop an analogous concept of a charge incoherent Luttinger liquid is that, near a half-filled lattice, the charge velocity tends to zero monotonically and exponentially <cit.>.
This signals a subtlety near the Mott phase transition due to the rapidly vanishing charge collective mode.
This limitation is called holon confinement.
In the next section, we will develop a new concept, Contact susceptibility, to study the Mott phase transition.
§ V. CONTACT AND CONTACT SUSCEPTIBILITY
In this section, we focus on an experimentally measurable quantity double occupancy <cit.>, as well as the associated Contact, revealing the competition between thermal fluctuations and quantum fluctuations from different sources, i.e., external fields and interaction.
The double occupancy serves as an efficient instrument to demarcate phase transition, especially the Mott phase transition in the extended Hubbard model with long range interaction <cit.>.
On the other hand, the partial wave Contact, which was first proposed in ultracold Fermi gases <cit.>, has become
an important theme in the study of ultracold atoms <cit.>.
Here we will introduce the concept of Contact susceptibilities with respect to the temperature, magnetic field and chemical potential, and investigate applications of these Contact susceptibilities.
We will show that the susceptibilities build up a general connection between interaction-driven quantum criticality and the phase transitions induced by external fields.
Using these relations, we will also obtain caloric effect in interaction-driven quantum refrigeration and scaling laws at quantum criticality.
§.§ V.1 Double occupancy and Mott phase
In the Hubbard model, the lattice filling parameter, interaction strength and external fields can drive different phase transitions <cit.>.
In contrast to the dilute limit case, the Mott insulator phase induced by interaction is much less understood.
When the interaction strength increases up to a critical value u_c<cit.> in the canonical ensemble, the system reaches a Mott insulator state in the extended Hubbard model.
The half-filled phase can be delineated by the Luther-Emery liquid with one gapless and one gapped sector <cit.>, in which the excitations comply with bosonic and fermionic statistics, respectively.
In order to explore the magnetic order and detect the Mott phase transition, one can introduce the double occupancy d=1/N∑_i⟨ n_i,↑n_i,↓⟩, which depicts the probability of two electrons with opposite spin occupying a single lattice site with potential energy E_pot=u d.
The double occupancy has been investigated extensively in theoretical and experimental research on the Hubbard model at half filling.
It appears to show a discontinuity as the interaction approaches the critical coupling in the interaction-induced Mott transition (note that u_c=0 for this model in the canonical ensemble, in which case this phenomenon is inconspicuous) and exhibits nonmonotonic behavior in the half-filled state as temperature varies, similar to the Pomeranchuk effect (the melting pressure of liquid Helium-3 first decreases and then increases with temperature) <cit.>.
It also reveals the competing physics from the charge and spin fluctuations.
For doping-induced Mott transition, d is significantly suppressed in the area of μ<u and generates a pronounced signal when density exceeds unity <cit.>.
It is useful to locate the Mott transition point through the detection of double occupancy.
In the canonical ensemble, the double occupancy can be obtained from the free energy f by
d = 1/4∂ f/∂ u-1/4+n_c/2,
in which the last two terms -1/4+n_c/2 stem from the extra terms -2uN+uL in the Hamiltonian.
We can define C=∂ f/∂ u to be the lattice version of Contact C,
C=∂ f/ ∂ u=4 d -2n_c+1,
which is analogous to the Tan's Contact <cit.> in the continuous systems of ultracold atoms.
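As an illustration of the relation d=C/4-1/4+n_c/2, the Contact can be estimated from any numerical free energy by a finite difference in u; a minimal sketch (the callable free_energy, e.g. a numerical TBA solver at fixed μ, B, T, is an assumed input):

```python
def double_occupancy(free_energy, u, n_c, du=1e-4):
    """Double occupancy d = C/4 - 1/4 + n_c/2 with the lattice Contact
    C = df/du estimated by a central finite difference."""
    contact = (free_energy(u + du) - free_energy(u - du)) / (2.0 * du)
    return 0.25 * contact - 0.25 + 0.5 * n_c
```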
Given that the double occupancy reflects the phase information, we expect that d distinguishes the different phases with and without internal degrees of freedom, see figure <ref> (a).
d always vanishes for phases I, II, and III and has no demarcation lines between these phases.
The boundaries of II and IV or III and V are obvious, while for IV and V the inflection point of the contour line marks the phase transition.
Since the double occupancy essentially reflects charge fluctuations in the Mott insulator, accompanied by the existence of antiferromagnetic order, we demonstrate the roles of the magnetic field B and interaction strength u on d in figure <ref> (b).
It is clear to observe that repulsive interaction always lowers the possibility of double occupancy which is quite intuitive.
When the interaction increases, electrons with different spins repel each other more strongly and the system becomes more incompressible.
In the infinite-coupling limit u→∞ the particles become fully localized.
The decrease caused by magnetization originates from the tendency that the magnetic field tends to align the spins of electrons in the field direction.
Through the derivative of the background term in (<ref>) with respect to u, the exact analytic expression of the ground-state Contact C_0 in the Mott phase V can be obtained as
C_0=-1-4/3∂/∂ u(λ_1η_1)A^3-4λ_1η_1A^2∂ A/∂ u-2/15∂/∂ u(λ_2η_1)A^5-2/3λ_2η_1A^4∂ A/∂ u,
where ∂ A/∂ u is given in (<ref>) below.
We plotted the double occupancy through (<ref>) (solid lines) in figure <ref> (b), showing a good agreement between this analytical result and the numerical calculation from the TBA equations.
Based on the relation from (<ref>), it is more essential to investigate the variations of the Contact with respect to the changes of the external fields and temperature.
§.§ V.2 Contact susceptibilities and Mott phase transition
In comparison to the double occupancy d, the Contact C is more essential to capture many-body effects induced by interaction strength.
Using the thermal potential f=e-μ n_c-2Bm-Ts-uC and the Maxwell relations in its derivatives with respect to the temperature, chemical potential and magnetic fields, we may build up general relations between Contact susceptibilities and interaction-driven variations of density, magnetization and entropy:
∂ n_c/∂ u = -∂ C/∂μ,
∂ m/∂ u = -∂ C/∂ (2B),
∂ s/∂ u = -∂ C/∂ T.
We will show that these Contact susceptibilities provide striking features of the interaction effects in the thermodynamics of the model.
Building on them, we further show that the Contact susceptibilities exhibit universal scaling behaviour in the quantum critical region <cit.>.
The Mott insulator phase with average one electron occupying one site is of particular interest in many-body physics.
Here using the free energy (<ref>) for the transition from IV to V, we obtain the Contact susceptibilities
C = -1-4/3f_11A^3-4λ_1η_1A^2∂ A/∂ u-2/15f_21A^5-2/3λ_2η_1A^4∂ A/∂ u+T^1/2b_0b_1/2π^1/2f_1/2,
∂ C/∂μ = -b_0b_1/2π^1/2 T^1/2f_-1/2,
∂ C/∂ (2B) = ∂ C_0/∂ 2B -b_0b_1/4π^1/2T^1/2(1-4A_c/π u+A_c)f_-1/2,
∂ C/∂ T = b_0b_1/4π^1/2T^1/2f_1/2+b_0b_1/2π^1/2T^3/2[Δμ+(1-4A_c/π u+A_c)Δ B]f_-1/2,
where C_0 denotes the ground-state background term, that is, equation (<ref>). We have denoted the background of the magnetic susceptibility and other functions as
∂ C_0/∂ 2B = -4A^2(f_11+1/6f_21A^2)∂ A/∂( 2B)-8η_1A(λ_1+1/3λ_2 A^2)∂ A/∂ (2B)∂ A/∂ u,
-4η_1A^2(λ_1+1/6λ_2A^2)∂^2 A/∂ (2B) ∂ u,
f_11 = ∂/∂ u(λ_1η_1),f_21=∂/∂ u(λ_2η_1), ∂ A/∂ B=-1/η_1 A(1+A/π u),
b_1 = ∂ (2+C_2)/∂ u=-2-8/π u(1+u^2)^3/2A^2∂ A/∂ u.
Solving (<ref>), we have
∂ A/∂ u = (1+u^2)u-(1+u^2)^1/2/A(1+A/π u)+3u/2(1+u^2)A[1+2/9π u(4+1/u^2)A]/1+A/π u,
∂^2 A/∂ B ∂ u = 1+2A/π u/η_1 A^2 (1+A/π u)^2∂ A/∂ u
-1/η_1π u^2(1+A/π u)^2.
In figure <ref>, we demonstrate scaling behaviour of Contact susceptibilities, confirming the analytic expressions (<ref>)-(<ref>) (solid lines) with the numerical data from TBA (symbols).
§.§ V.3 Interaction-driven quantum cooling
It is remarkable to observe that equation (<ref>) essentially relates entropy, temperature and interaction strength.
This relation is of great importance for interaction driven quantum cooling since the interaction can be tuned in cold atom experiments via Feshbach resonance, and temperature plays a vital role in controlling the entropy.
Therefore an adiabatic cycle can be used to realize quantum refrigeration in real physical systems.
Inspired by this, here we consider an isentropic process by ramping up or down the interaction strength.
Focusing on the (T,u) coordinates for a fixed magnetic field, we perform total derivatives on entropy s, i.e.,
ds=∂ s/∂ u du+∂ s/∂ T dT=0.
Using equation (<ref>) and the relation ∂ s/∂ T=C_v/T, the points on the isentropic line in the (u,T) coordinates satisfy the relation:
C_v/T∂ T/∂ u=∂ C/∂ T.
Thus the interaction driven Grüneisen ratio Γ_int is related to ∂ C/∂ T via the relation Γ_int=∂ C/∂ T· u/C_v.
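Relation (<ref>) also suggests a simple way to trace an isentrope numerically by integrating dT/du=(T/C_v) ∂C/∂T; a minimal Euler-step sketch (the callables dCdT and Cv, e.g. from a numerical TBA solver, and the step size are assumed inputs):

```python
import numpy as np

def isentrope(u0, T0, dCdT, Cv, du=1e-3, steps=2000):
    """Trace an isentropic line T(u) from (u0, T0) by integrating
    dT/du = (T / C_v) * dC/dT, cf. the relation above."""
    u, T = u0, T0
    path = [(u, T)]
    for _ in range(steps):
        T += du * T * dCdT(u, T) / Cv(u, T)
        u += du
        path.append((u, T))
    return np.array(path)
```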
Note that ∂ C/∂ T dramatically changes near quantum phase transition (see equation (<ref>) and figure <ref>), where C_v has a minimum (see figure <ref> ) and the entropy has a maximum (see figure <ref>).
A good cooling effect is observed when the interaction drives the system approaching a critical point.
The lowest temperature point can be estimated by ∂ C/∂ T=0.
To characterise the refrigeration efficiency, we give a brief description of the lowest reachable temperature during an adiabatic cooling cycle.
From the results (<ref>) of ∂ C/∂ T, the condition for determining the lowest temperature reads
1/2Li_1/2(-e^x)-xLi_-1/2(-e^x)=0,
where x=α_uΔ u/T.
It is found that, to a good approximation, x≈1.3117.
Thus the extreme point can be determined.
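This root can be checked with the same mpmath-based root search used above:

```python
from mpmath import polylog, exp, findroot

cond = lambda x: polylog(0.5, -exp(x)) / 2 - x * polylog(-0.5, -exp(x))
print(findroot(cond, 1.3))   # ~ 1.3117
```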
In figure <ref>, we plot the isentropic lines near the phase boundaries of II-IV (a) and IV-V (b), respectively.
It is obvious that each isentropic line shows a temperature minimum in the quantum critical regions. Away from the QCP, at temperatures T≪ |u-u_c|, the entropy depends linearly on the temperature, see the left part of figure <ref> (a) and the right part of figure <ref> (b).
In figure <ref> (b) we draw an Otto cycle for the interaction-driven refrigeration process.
The stages A, D lie around QCP, whereas the stages B,C are located in the TLL area.
The cycle contains four steps to cool the target material. In the step A → B, the working substance is adiabatically ramped up from the target temperature T_target to the nonthermal higher-temperature stage B.
Then, through the hot isochore process B → C, the working substance comes into contact with the ambient environment, transferring heat to the high-temperature source, while its temperature reduces to that of the thermal state C.
Next, in the isentropic process C → D, the working substance is adiabatically ramped down to the low-temperature stage D.
This is the reverse of the process A → B.
Finally, in the isochore process D → A, the working substance contacts the target object, absorbing heat from the target material and reaching the thermal state A.
Consequently, the target object is cooled down by this cycle.
Now let us determine the lowest temperature which can be reached through an isentropic process indicated in the figure <ref>.
From equations (<ref>)-(<ref>), the phase II (V) contains one charge (spin) degree of freedom.
Consequently, their entropy s_L1 and s_L2 are given by (<ref>) and (<ref>), respectively, namely,
s_L1 ≈ π T_L1/3v_c,
s_L2 ≈ π T_L2/3v_s.
Comparing isentropic lines with the same entropy for phase II, IV and V, for example s=0.00008 in figure <ref>, the temperature of TLL_C (phase II) is higher than that of TLL_S (phase V) since charge velocity v_c changes faster than spin velocity v_s when the interaction is changed around the critical point, i.e.,
T_L1>T_L2.
On the other hand, when approaching the QCP, i.e., at the extreme low-temperature point of each isentropic line, the entropy has explicit expressions for the transitions II-IV and IV-V,
s_s1 ≈ λ_3 π^1/2σ_1(0)(ε^”_1(0)/2)^-1/2T_c1^1/2,
s_s2 ≈ λ_3 π^1/2ρ(π)(-κ^”(π)/2)^-1/2T_c2^1/2,
respectively, where λ_3=xLi_1/2(-e^x)-3/2Li_3/2(-e^x)≈ 1.3467.
From the above analysis, we observe that the entropy shows a square-root dependence on the temperature at the extreme point, whereas it is proportional to the temperature in the Luttinger liquid.
Therefore, considering an isentropic cooling process realized by ramping the interaction up or down in the T-u plane around the critical phase transitions from II to IV or from V to IV, see figure <ref>,
the minimum reachable temperatures satisfy
II-IV: T_c1^1/2/ T_L1 = π^1/2(ε^”_1(0)/2)^1/2/3λ_3v_cσ_1(0),
V-IV: T_c2^1/2/ T_L2 = π^1/2(-κ^”(π)/2)^1/2/3λ_3v_sρ(π),
respectively. A brief discussion of interaction-driven quantum cooling is given in <cit.>. Based on the previous results (<ref>)-(<ref>), the leading contributions to the minimum temperature near the quadruple critical point are
II-IV: T_c1^1/2/ T_L1 ≈ π^1/2η_1^1/2/6λ_1λ_3δ,
V-IV: T_c2^1/2/ T_L2 ≈ 2λ_1π^5/2/3λ_3η_1 A.
These suggest that the lowest temperature can be reached around the QCPs, which is also attainable through adiabatic demagnetization cooling, see the study of the Grüneisen parameters in <cit.>.
Moreover, we further note that the location of the minimum temperature for each contour entropy line in the T-u plane is governed by the relation α_u Δ u/T≈1.3117 in (<ref>), where α_u needs to be determined.
§.§ V.4 Calculation of α_u for phase transition IV-V
One remaining problem from the discussion in the previous subsection is that the underlying coefficient α_u is not given.
It not only determines the scaling factor of the critical temperature at quantum criticality driven by the interaction, but also determines the extreme point associated with the lowest temperature in an isentropic process in the refrigeration cycle.
We know that α_u, which emerges in the polylog functions, is related to the phase transition line via the formula κ(π)=α_BΔ B+α_μΔμ+α_uΔ u.
For transition IV-V, the phase boundary between phase IV and V is marked by κ(π)=0, where κ(π) is expressed via ε_1(Λ) by
κ(π) = 2-μ-2 u-B+∫_-A^A dΛ a_1(Λ) ε_1(Λ),
ε_1(Λ) = 2 B-4 √(1-(Λ-i u)^2)+4u
-∫_-A^A dΛ^' a_2(Λ-Λ^')ε_1(Λ^').
However, for the transition IV-V, the corresponding equations (<ref>) and (<ref>) are not of polynomial form in the interaction u.
In this part, we present a numerical scheme based on equation (<ref>) and on the fact that the phase transition from IV to V occurs through the emergence of a Mott insulator with constant density.
In addition to the magnetic field B and chemical potential μ that can drive the phase transition, the interaction also drives the system from one phase to others.
In figure <ref> (a), we plot the phase diagram in the (u-μ-n_c) coordinate at fixed magnetic field B=0.82714.
The vacuum phase I has n_c=0.
The density satisfies 0<n_c<1 in phases II and IV, and n_c=1 in phases III and V.
The boundary line for IV and V (or II and III) is marked with a constant density n_c=1. On this transition line, we perform total derivatives of n_c, yielding
dn_c=∂ n_c/∂ u du+∂ n_c/∂μ dμ=0.
Substituting (<ref>) for ∂ n_c/∂ u=-∂ C/∂μ with (<ref>) and ∂ n_c/∂μ=χ_c with (<ref>) into (<ref>), near the boundary line n_c=1 in the u-μ plane, we have
α_u=-α_μ∂μ/∂ u.
where we have used b_1=α_u and -1=α_μ. Near the phase transition IV to V, we see that α_μ=-1, since the chemical potential μ only appears in the charge leading term and makes no contribution to the driving terms in the spin string, see (<ref>) and (<ref>).
At this point, we relate the value of α_u to the slope at QCP in the u_c-μ_c plane.
Therefore equation (<ref>) provides us with a way to obtain the unknown coefficient α_u.
Moreover, ∂μ/∂ u can also be obtained directly from (<ref>) and (<ref>)
2∂μ/∂ u = -2+∫_-A^A dΛ[∂ a_1(Λ)/∂ uε_1(Λ)+a_1(Λ)∂ε_1(Λ)/∂ u],
∂ε_1(Λ)/∂ u = -4∂/∂ u[√(1-(Λ-i u)^2)-u]
-∫_-A^A dΛ^'∂/∂ u[a_2(Λ-Λ^')ε_1(Λ^')],
where the factor of 2 on the left-hand side of (<ref>) arises from the derivatives of κ(π) and μ. Then ∂μ/∂ u can be obtained by iteratively solving the above two equations.
Here, we instead use a color map to obtain α_u, which can be applied to arbitrary systems without knowing explicit analytic formulas. In figure <ref> (a), we plot the density phase diagram containing the QCP (u_c, μ_c) of interest.
In order to evaluate α_u(u_c,μ_c) numerically for the interaction-driven phase transitions, in contrast to figure <ref> (c), (d) driven by external fields, we choose two adjacent points (u_1,μ_1), (u_2,μ_2) near the QCP u_c=1, see the green areas in figure <ref> (a).
From (<ref>) we evaluate α_u(u_c=1)≈(μ_1-μ_2)/(u_1-u_2)≈-1.9627.
Using this value as the argument factor of the scaling function for the phase transition IV-V with the other part already known in equations (<ref>) and (<ref>), we plot the density scaling law in figure <ref> (b) in terms of the variation of interaction strength.
One can see that the obtained α_u captures well the thermodynamic scaling law.
§.§ V.5 Calculation of α_u for phase transition II-IV
Similar to the analysis of the parameter α_u for the transition of IV to V, the relevant coefficients α_B,α_μ,α_u for transition II to IV are related to each other.
Contrast to the Mott-insulator transition, this transition arises from the introduction of spin-down electrons, i.e., the phase transition points are determined by zero spin-down particle density n_↓=0.
Considering a phase transition driven by the magnetic field B and the coupling strength u, we take the total derivative of n_↓, i.e.,
dn_↓=∂ n_↓/∂ u du+∂ n_↓/∂ (2B) d(2B)=0.
Substituting ∂ m/∂ B=χ_s with (<ref>) and (<ref>) into (<ref>), with the help of n_↓=n_c/2-m, we obtain
α_u=-α_B∂ B/∂ u.
Recall that (<ref>) gives us the value of α_B=-2[1-arctan(sin Q_c/u)/π].
We also note that the quantity ∂ B/∂ u can be obtained from (<ref>), namely, near the critical line, we have the following relation
B=2u/π∫_0^Qdkcos k/u^2+sin^2k(cos k-cos Q).
Therefore, ∂ B/∂ u is easily obtained through the derivative on both sides of this equation with respect to u.
From equation (<ref>), we finally get the following analytic expression
α_u = 4/π[∫_0^Q_cdkcos^2 k/u_c^2+sin^2 k-2u_c^2∫_0^Q_cdkcos^2 k/(u_c^2+sin^2 k)^2.
. +arctan(sin Q_c/u_c)+sin(2Q_c)/2(u_c^2+sin Q_c^2)].
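The closed-form expression above is straightforward to evaluate; a minimal quadrature sketch with mpmath (the critical parameters u_c and Q_c are assumed known, e.g. from the phase boundary conditions):

```python
from mpmath import quad, cos, sin, atan, pi

def alpha_u_II_IV(u_c, Q_c):
    """Numerical evaluation of alpha_u for the II-IV transition."""
    i1 = quad(lambda k: cos(k) ** 2 / (u_c ** 2 + sin(k) ** 2), [0, Q_c])
    i2 = quad(lambda k: cos(k) ** 2 / (u_c ** 2 + sin(k) ** 2) ** 2, [0, Q_c])
    return (4 / pi) * (i1 - 2 * u_c ** 2 * i2
                       + atan(sin(Q_c) / u_c)
                       + sin(2 * Q_c) / (2 * (u_c ** 2 + sin(Q_c) ** 2)))
```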
In summary, there exist simple relations between any pair of these scaling factors α_B, α_μ and α_u.
In the previous two subsections, via Contact susceptibilities expressions (<ref>) and (<ref>), we build up relationships between α_u and α_μ as well as between α_u and α_B, see (<ref>) and (<ref>). In fact, in terms of thermodynamic potential, Maxwell relations can also build a connection between the charge susceptibility ∂ n_c/∂ (2B) and magnetization susceptibility ∂ m/∂μ. Moreover, (<ref>) and (<ref>) remain valid for other transitions, like I-II, II-III and III-V. Based on this feature, we can conclude that the following relations
α_u/α_μ = -∂μ/∂ u,
α_u/α_B = -∂ B/∂ u,
α_B/α_μ = -∂μ/∂ B
hold true in general, i.e., for arbitrary interaction strength and density.
In the above, we have discussed the exact values of α_B for the phase transition II-IV and α_μ for the phase transition IV-V. The remaining coefficients can be derived through equations (<ref>), (<ref>), (<ref>) in a straightforward way.
Here, we give the values of α_μ for II-IV and α_B for IV-V
II-IV: α_μ = 2/πarctan(sin Q_c/u),
IV-V: α_B = -1/2+∫_0^AΛ̣a_1(Λ)∂ε_1(Λ)/∂ B.
Together with equations (<ref>)-(<ref>), the results obtained here offer a more general description of quantum criticality in terms of the full internal and external potentials of the 1D Hubbard model for arbitrary experimentally controllable parameters.
These hold true for the second order phase transition in higher dimensional quantum systems too.
§ VI. CONCLUSION AND REMARKS
We have presented analytical results for the thermal and magnetic properties of the 1D Hubbard model, ranging from elementary spin and charge excitations, to the spin incoherent Luttinger liquid, universal thermodynamics, quantum criticality and interaction-driven refrigeration.
A summary of our new results is the following:
1) We have studied elementary excitations involving fractional spin and charge excitations, gapped charge k-Λ strings and spinon bound state excitations, as well as some combinations of them, in section II. These are complementary to those studied in <cit.>.
Based on the study of such excitations, together with the dimensionless Wilson ratio and the universal scalings of the thermodynamics of the model, we have rigorously investigated the spin incoherent Luttinger liquid, which was previously studied only in the framework of effective theory via bosonization.
From the conformal field theory point of view, we have given rigorously the characteristics of various correlation functions near the phase transition from phase II to phase IV, showing an existence of collective mode in charge degrees of freedom rather than the spin degrees of freedom.
2) We have presented general analytical results of thermodynamics, independent of microscopic details of the model.
Explicitly, we have determined the additivity rules of the charge and spin susceptibilities in the grand canonical ensemble, equations (<ref>) and (<ref>), and in the canonical ensemble, equations (<ref>) and (<ref>).
Away from the critical point and in the low-energy regime, coherent spin and charge degrees of freedom in general give rise to the well-known phenomenon of spin-charge separation, indicating the nature of the TLL.
The crossover regime between the TLL and the quantum critical region belongs to the SILL, indicating a coexistence of liquid and gas, see equation (<ref>).
Besides the thermodynamics, the quantum criticality and universal scaling laws induced by the variation of magnetic field and chemical potential have been obtained analytically and confirmed numerically, see general results of criticality equations (<ref>)-(<ref>).
In addition, the interaction-driven quantum critical behaviour has been studied too.
3) We have introduced the lattice versions of the Contact and of the Contact susceptibilities with respect to the external fields and the temperature.
In particular, we have investigated applications of the Contact susceptibilities, which build up a general connection between interaction-driven quantum criticality and the phase transitions induced by external fields.
By virtue of the Contact susceptibilities, we have discussed the Mott transition, quantum refrigeration and interaction-induced quantum phase transitions, see section V.
In view of the rapid advances in trapping and controlling ultracold atoms in experiments, the results obtained here will provide direct guidance for exploring various many-body phenomena in the 1D Hubbard model, such as quantum criticality, spin-coherent and spin-incoherent Luttinger liquids, generalized hydrodynamics and transport properties.
These relations also reveal deep insights into the quantum criticality of the Hubbard model in higher dimensions.
Furthermore, applications of our method to quantum metrology and other quantum technologies are also highly desirable.
§ ACKNOWLEDGEMENTS
We thank the Integrable Theory Group at APM for their helpful discussions. X.W.G. acknowledges the kind hospitality of Rice University and the Institute for Advanced Study, Tsinghua University, during his visits in 2018 and 2021, respectively.
J.J.L. performed the analytical and numerical study of the model. X.W.G. supervised J.J.L. in obtaining the results reported in this paper. Both X.W.G. and H.P. initiated this study.
J.J.L. and X.W.G. are supported by the NSFC key grant No. 12134015, the NSFC grant No. 11874393 and No. 12121004.
X.W.G. is also partially supported by the Innovation Program for Quantum Science and Technology 2021ZD0302000.
H.P. acknowledges support from the US NSF (PHY-2207283) and the Welch Foundation (Grant No. C-1669).
§ APPENDIX A: UNIVERSAL THERMODYNAMICS
Universal low-energy physics can be derived from the TBA equations under the conditions B/T≫1 and |μ|/T≫1.
In these low-energy limits, the TBA equations (<ref>) and (<ref>) reduce to two coupled equations in terms of the charge quasimomentum k and the length-1 spin Λ string, see (<ref>) and (<ref>).
Analysing the structure of the simplified TBA equations (<ref>) and (<ref>), we find that it is efficient to perform a Sommerfeld expansion of the kernel functions around the Fermi points.
Let us denote by Q and A the Fermi points for the charge and spin degrees of freedom, satisfying κ(Q)=0, ε_1(A)=0.
For our convenience, we denote ∂a̅_1(sin k,Λ)/∂Λ=a_1(sin k-Λ), ∂a̅(sin k,Λ)/∂Λ=a_1(sin k-Λ)+a_1(sin k+Λ).
Now we focus on the second term in the (<ref>)
-∫_-∞^∞dΛ a_1(sin k-Λ) T ln(1+e^-ε_1(Λ)/T)
= -a̅_1(sin k,Λ) T ln(1+e^-ε_1(Λ)/T)|_-∞^∞-∫_0^∞dΛ a̅(sin k,Λ)1/(1+e^ε_1(Λ)/T)∂ε_1(Λ)/∂Λ
= -∫_0^∞dΛ a̅(sin k,Λ)1/(1+e^ε_1(Λ)/T)∂ε_1(Λ)/∂Λ
= -∫_ε_1(0)^ε_1(∞)dε_1 a̅(sin k,Λ(ε_1))1/(1+e^ε_1/T)
= -∫_ε_1(0)/T^ε_1(A)/Tdx T a̅(sin k,Λ(Tx))1/(1+e^x)-∫_ε_1(A)/T^ε_1(∞)/Tdx T a̅(sin k,Λ(Tx))1/(1+e^x)
= -∫_ε_1(0)/T^ε_1(A)/Tdx T a̅(sin k,Λ(Tx))(1-1/(1+e^-x))-∫_ε_1(A)/T^ε_1(∞)/Tdx T a̅(sin k,Λ(Tx))1/(1+e^x)
= -∫_ε_1(0)/T^0dx T a̅(sin k,Λ(Tx))+∫_0^-ε_1(0)/Tdx T a̅(sin k,Λ(-Tx))1/(1+e^x)
-∫_0^ε_1(∞)/Tdx T a̅(sin k,Λ(Tx))1/(1+e^x)
= -∫_ε_1(0)^0dε_1 a̅(sin k,Λ(ε_1))+∫_0^∞dx T [a̅(sin k,Λ(-Tx))-a̅(sin k,Λ(Tx))]/(1+e^x)
= ∫_ε_1(0)^0dε_1 (∂Λ/∂ε_1) ε_1 (a_1(sin k-Λ)+a_1(sin k+Λ))
-∫_0^∞dx T (a_1(sin k-Λ(0))+a_1(sin k+Λ(0)))Λ^'(0) 2Tx/(1+e^x)
= ∫_0^AdΛ ε_1(Λ)(a_1(sin k-Λ)+a_1(sin k+Λ))-2T^2(a_1(sin k-A)+a_1(sin k+A))Λ^'(0)π^2/12
= ∫_0^AdΛ ε_1(Λ)(a_1(sin k-Λ)+a_1(sin k+Λ))
-π^2T^2/(6ε_1^'(A))(a_1(sin k-A)+a_1(sin k+A)).
In the above calculations, the following equations are used:
[a̅(sin k,Λ(Tx))-a̅(sin k,Λ(-Tx))]/(2Tx)=∂a̅/∂Λ|_Λ=Λ(0) Λ^'(0),
Λ^'(0)=1/(dε_1/dΛ)|_ε_1=0=1/ε^'_1(A), ∫_0^∞dx x/(1+e^x)=π^2/12.
Conducting a similar calculation for the integrals involving ε_1(Λ), the dressed energies can be recast as
κ(k) = -2 cos k-μ-2u-B+∫_-A^AdΛ ε_1(Λ) a_1(sin k-Λ)
-π^2T^2/(6ε_1^'(A))(a_1(sin k-A)+a_1(sin k+A)),
ε_1(Λ) = 2B-π^2T^2/(6κ^'(Q)) cos Q (a_1(sin Q-Λ)+a_1(sin Q+Λ))
+π^2T^2/(6ε_1^'(A))(a_2(Λ-A)+a_2(Λ+A))
+∫_-Q^Qdk κ(k) cos k a_1(sin k-Λ)
-∫_0^AdΛ^' ε_1(Λ^')(a_2(Λ-Λ^')+a_2(Λ+Λ^')).
Comparing the above equations with the root densities (<ref>) and (<ref>), we see that there is an implicit connection between the two sets of equations, which are highly symmetric. In order to obtain a compact, closed form of the free energy, combining equations (<ref>) and (<ref>) with (<ref>) and (<ref>) and using the Sommerfeld expansion, we obtain the free energy
f = -∫_-π^πdk/2π T ln(1+e^-κ(k)/T)+u
= -π^2T^2/(6κ^'(Q))+∫_0^Qdk/π κ(k)+u
= -π^2T^2/(6κ^'(Q))+u
+∫_-Q^Qdk[-2 cos k-μ-2u-B-π^2T^2/(6ε_1^'(A))(a_1(sin k-A)+a_1(sin k+A))]ρ(k)
+∫_-A^AdΛ[2B-π^2T^2/(6κ^'(Q)) cos Q (a_1(sin Q-Λ)+a_1(sin Q+Λ))
+π^2T^2/(6ε_1^'(A))(a_2(Λ-A)+a_2(Λ+A))]σ_1(Λ)
= -π^2T^2/(6κ^'(Q))+u+[∫_-Q^Qdk(-2 cos k-μ-2u-B)ρ(k)+2B∫_-A^AdΛ σ_1(Λ)]
-π^2T^2/(6ε_1^'(A))[∫_-Q^Qdk(a_1(sin k-A)+a_1(sin k+A))ρ(k)-(a_2(Λ-A)+a_2(Λ+A))σ_1(Λ)]
-π^2T^2/(6κ^'(Q))[∫_-A^AdΛ cos Q (a_1(sin Q-Λ)+a_1(sin Q+Λ))]
= -π^2T^2/(6κ^'(Q))+f_0-π^2T^2/(6ε_1^'(A))[σ_1(A)+σ_1(-A)+∫_-A^AdΛ(a_2(Λ-A)+a_2(Λ+A))σ_1(Λ)]
-π^2T^2/(6κ^'(Q))[ρ(Q)+ρ(-Q)-1/π]+π^2T^2/(6ε_1^'(A))∫_-A^AdΛ(a_2(Λ-A)+a_2(Λ+A))σ_1(Λ)
= f_0-π^2T^2ρ(Q)/(3κ^'(Q))-π^2T^2σ_1(A)/(3ε_1^'(A))
= f_0-π T^2/6(1/v_c+1/v_s),
where f_0=∫_-Q^Qdk(-2 cos k-μ-2u-B)ρ(k)+2B∫_-A^AdΛ σ_1(Λ) is the background contribution at zero temperature. This form of the free energy gives the universal thermodynamics of the TLL in terms of the spin and charge degrees of freedom.
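From this expression the low-temperature entropy and specific heat follow directly, to leading order in T (treating f_0 and the sound velocities v_c, v_s as temperature independent):
s=-∂ f/∂ T=π T/3(1/v_c+1/v_s), C_v=T∂ s/∂ T=π T/3(1/v_c+1/v_s),
i.e. C_v/T=π/3(1/v_c+1/v_s), making the additivity of the two linearly dispersing modes explicit.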
Notice that in the above calculation the following integral identity was used:
f_1=f^(0)_1+K_12*f_2, f_2=f^(0)_2+K_21*f_1+K_22*f_2,
g_1=g^(0)_1+K_21*g_2, g_2=g^(0)_2+K_12*g_1+K_22*g_2
⟹ ∫ f_1g^(0)_1+∫ f_2g^(0)_2=∫ g_1f^(0)_1+∫ g_2f^(0)_2.
In the spin-polarized phase II the spin degree of freedom vanishes, and the low-temperature free energy simplifies to f=f_0-π T^2/(6v_c), with specific heat C_v/T=π/(3v_c) accordingly. In phase V, with the charge sector half filled, the free energy is f=f_0-π T^2/(6v_s) and the specific heat is C_v/T=π/(3v_s).
§ APPENDIX B: FREE ENERGY OF PHASE III
In the previous considerations, the effects of the k-Λ strings were ignored in the low-temperature limit T≪ B, μ.
Meanwhile, one can infer from equations (<ref>) and (<ref>) that the leading orders at which ε_n^' and the length-n spin Λ strings contribute to the free energy are e^2nμ/T and e^-2nB/T, respectively, so that these gapped terms can reasonably be omitted.
In order to estimate those energy scales, in this appendix, we rigorously calculate the contributions to the thermodynamics from the lowest gapped k-Λ strings ε_1^' and from the real spin rapidity ε_1.
To simplify the calculation, we choose the region where ε_1 is strictly positive and where the two driving terms e^2μ/T and e^-2B/T are commensurate.
Here we treat the problem in the regime of weak chemical potential and high particle density, i.e. close to the quadruple critical point.
Consequently, we discuss the case of phase III.
On the basis of the above restrictions, for phase III the TBA equations in terms of the length-1 dressed energies ε_1 and ε_1^' can be rewritten as
ε_1(Λ) = 2B-4Re√(1-(Λ-iu)^2)+4u+∑_m=1^∞ A_1m * T ln(1+e^-ε_m/T)
-2J_1a_1(Λ)+2J_2[π/u a^2_1(Λ)-4π^2/u^2 Λ^2a^3_1(Λ)],
ε^'_1(Λ) = -2μ-2J_1a_1(Λ)+2J_2[π/u a^2_1(Λ)-4π^2/u^2 Λ^2a^3_1(Λ)],
κ(k) = -2 cos k-2C_1 sin^2 k+C_2,
in which C_1,C_2 are the integral functions related to the gapped dressed energies ε_1 and ε_1^'
C_1 = ∫_0^∞dΛ[π/u a^2_1(Λ)-4π^2/u^2 Λ^2a^3_1(Λ)][T ln(1+e^-ε^'_1(Λ)/T)-T ln(1+e^-ε_1(Λ)/T)],
C_2 = -μ-2u-B+∫_0^∞dΛ 2a_1(Λ)[T ln(1+e^-ε^'_1(Λ)/T)-T ln(1+e^-ε_1(Λ)/T)],
and J_1, J_2 come from the charge degrees of freedom
J_1 = T^3/2/√(1+2C_1) Γ(3/2) Li_3/2(-e^(2+C_2)/T)-T^5/2/(8(1+2C_1)^5/2) Γ(5/2) Li_5/2(-e^(2+C_2)/T),
J_2 = T^5/2/(3(1+2C_1)^3/2) Γ(5/2) Li_5/2(-e^(2+C_2)/T).
The above expressions are similar to the previous equations (<ref>)-(<ref>), except that the contribution from the k-Λ strings is now taken into account.
In terms of the above form of the TBA equations in the high-density region, the Gibbs free energy per site is given by
f = -μ-u-B-T∫_-∞^∞dΛ/π Re 1/√(1-(Λ-iu)^2) ln(1+e^-ε_1(Λ)/T)
-T ln(1+e^κ(0)/T)+1/π T^3/2/√(1+2C_1) Γ(3/2) Li_3/2(-e^(2+C_2)/T)
+1/(24π) T^5/2(1+8C_1)/(1+2C_1)^5/2 Γ(5/2) Li_5/2(-e^(2+C_2)/T).
In order to obtain C_1, C_2 (or J_1, J_2), we should solve the coupled equations (<ref>)-(<ref>).
We may ignore the convolution terms, whose leading order is e^-2B/T.
After carefully analysing the terms related to the parameter Λ, the integral term ln(1+e^-ε_1(Λ)/T) is approximately equal to e^-ε_1(Λ)/T, and expanding around Λ=0 we arrive at
e^-ε_1(Λ)/T ≈ e^(-2B+η_0)/T e^-η_1Λ^2/T e^g_Λ/T,
where we denote g_Λ =2J_1a_1(Λ)-2π/uJ_2a^2_1(Λ)+8π^2/u^2J_2Λ^2a^3_1(Λ).
Conducting a similar manipulation for the charge bound states and using (<ref>) and (<ref>), which involve the integrals of ε_1 and ε_1^', we obtain the coefficients C_1, C_2
C_1 = T e^2μ/T[π/u t_2-4π^2/u^2 t_3]-T e^(-2B+η_0)/T[π/u s_2-4π^2/u^2 s_3],
C_2 = -μ-2u-B+2T t_1 e^2μ/T-2T s_1 e^(-2B+η_0)/T,
in which
s_1(s_2,s_3) = ∫_0^∞dΛ a_1(Λ)(a^2_1(Λ),Λ^2 a^3_1(Λ)) e^-η_1Λ^2/T e^g_Λ/T,
t_1(t_2,t_3) = ∫_0^∞dΛ a_1(Λ)(a^2_1(Λ),Λ^2 a^3_1(Λ)) e^g_Λ/T,
resulting from the subleading terms of ε_1, ε_1^'.
At this point, we note that (<ref>) and (<ref>) depend on the values of J_1 and J_2, while (<ref>) relies on C_1 and C_2.
In other words, C_1 and C_2 are given in terms of J_1 and J_2, which in turn contain C_1 and C_2. Therefore, the remaining problem is to obtain the expressions for C_1 and C_2, after which the whole problem is solved.
Through a lengthy iterative procedure, J_1 and J_2 can be obtained explicitly,
J_1 = 1/2 π^1/2T^3/2τ_1+1/2 π^1/2T^5/2 f_3/2 e^(-2B+η_0)/T(π/u s^(0)_2-4π^2/u^2 s^(0)_3)-3/32 π^1/2T^5/2τ_2,
J_2 = π^1/2T^5/2/4 τ_2,
where f_n=Li_n(-e^(2-μ-2u-B)/T), s^(0)_2=(a^-1/2-a^-3/2)/(2π^3/2u), s^(0)_3=π^-5/2a^-3/2/4, a=η_1u^2/T, erfc is the complementary error function, and τ_1, τ_2 are defined by
τ_1 = f_3/2+(1+T^1/2f_3/2/(2π^1/2u)+3Tf^2_3/2/(16π u^2)) e^2μ/T f_1/2-[e^a erfc(√(a))
+(T^1/2f_3/2/(π^3/2u)+Tf^2_3/2/(2π^2u^2)) π^1/2/a^1/2-(2T^1/2f_3/2/(π^3/2u)+3Tf^2_3/2/(2π^2u^2)) π^1/2/(2a^3/2)] e^(-2B+η_0)/T f_1/2,
τ_2 = f_5/2+e^2μ/T f_3/2-e^a erfc(√(a)) e^(-2B+η_0)/T f_3/2.
Substituting these results into the expression for the free energy (<ref>), the final result (<ref>) is obtained.
pruschke1995anomalous Pruschke T, Jarrell M and Freericks J K 1995 Adv. Phys. 44 187–210
dagotto2005complexity Dagotto E 2005 Science 309 257–62
yanase2003theory Yanase Y, Jujo T, Nomura T, Ikeda H, Hotta T and Yamada K 2003 Phys. Rep. 387 1–149
miyake2007new Miyake K 2007 J. Phys.: Condens. Matter 19 125201
mott1968metal Mott N F 1968 Rev. Mod. Phys. 40 677
schulz1990correlation Schulz H J 1990 Phys. Rev. Lett. 64 2831
stafford1993scaling Stafford C A and Millis A J 1993 Phys. Rev. B 48 1409
jordens2008mott Jördens R, Strohmaier N, Günter K, Moritz H and Esslinger T 2008 Nature 455 204–7
kokalj2013thermodynamics Kokalj J and McKenzie R H 2013 Phys. Rev. Lett. 110 206402
ramirez1997colossal Ramirez A P 1997 J. Phys.: Condens. Matter 9 8171
hart2015observation Hart R A, Duarte P M, Yang T-L, Liu X, Paiva T, Khatami E, Scalettar R T, Trivedi N, Huse D A and Hulet R G 2015 Nature 519 211–4
chiu2018quantum Chiu C S, Ji G, Mazurenko A, Greif D and Greiner M 2018 Phys. Rev. Lett. 120 243201
giamarchi2017firmer Giamarchi T 2017 Nature 545 414–5
boll2016spin Boll M, Hilker T A, Salomon G, Omran A, Nespolo J, Pollet L, Bloch I and Gross C 2016 Science 353 1257–60
parsons2016site Parsons M F, Mazurenko A, Chiu C S, Ji G, Greif D and Greiner M 2016 Science 353 1253–6
cheuk2016observation Cheuk L W, Nichols M A, Lawrence K R, Okan M, Zhang H, Khatami E, Trivedi N, Paiva T, Rigol M and Zwierlein M W 2016 Science 353 1260–4
schneider2012fermionic Schneider U et al 2012 Nat. Phys. 8 213–18
salomon2019direct Salomon G, Koepsell J, Vijayan J, Hilker T A, Nespolo J, Pollet L, Bloch I and Gross C 2019 Nature 565 56–60
mitra2018quantum Mitra D, Brown P T, Guardado-Sanchez E, Kondov S S, Devakul T, Huse D A, Schauß P and Bakr W S 2018 Nat. Phys. 14 173–7
Lieb:1968 Lieb E H and Wu F Y 1968 Phys. Rev. Lett. 20 1445
takahashi1972one Takahashi M 1972 Prog. Theor. Phys. 47 69–82
essler2005one Essler F H L, Frahm H, Göhmann F, Klümper A and Korepin V E 2005 The One-Dimensional Hubbard Model (Cambridge: Cambridge University Press)
koepsell2019imaging Koepsell J, Vijayan J, Sompet P, Grusdt F, Hilker T A, Demler E, Salomon G, Bloch I and Gross C 2019 Nature 572 358–62
scherg2021observing Scherg S, Kohlert T, Sala P, Pollmann F, Madhusudhana B H, Bloch I and Aidelsburger M 2021 Nat. Commun. 12 4490
gall2021competing Gall M, Wurz N, Samland J, Chan C F and Köhl M 2021 Nature 589 40–3
vijayan2020time Vijayan J, Sompet P, Salomon G, Koepsell J, Hirthe S, Bohrdt A, Grusdt F, Bloch I and Gross C 2020 Science 367 186–9
hirsch1985two Hirsch J E 1985 Phys. Rev. B 31 4403
jarrell1992hubbard Jarrell M 1992 Phys. Rev. Lett. 69 168
georges1992hubbard Georges A and Kotliar G 1992 Phys. Rev. B 45 6479
glocke2007half Glocke S, Klümper A and Sirker J 2007 Phys. Rev. B 76 155121
de2011thermodynamics De Leo L, Bernier J-S, Kollath C, Georges A and Scarola V W 2011 Phys. Rev. A 83 023606
ibarra2020thermodynamics Ibarra-García-Padilla E, Mukherjee R, Hulet R G, Hazzard K R A, Paiva T and Scalettar R T 2020 Phys. Rev. A 102 033340
wietek2021mott Wietek A, Rossi R, Šimkovic IV F, Klett M, Hansmann P, Ferrero M, Stoudenmire E M, Schäfer T and Georges A 2021 Phys. Rev. X 11 041013
frahm1990critical Frahm H and Korepin V E 1990 Phys. Rev. B 42 10553
frahm1991correlation Frahm H and Korepin V E 1991 Phys. Rev. B 43 5653
qin2022hubbard Qin M, Schäfer T, Andergassen S, Corboz P and Gull E 2022 Annu. Rev. Condens. Matter Phys. 13 275–302
kim2020spin Kim A J, Šimkovic IV F and Kozik E 2020 Phys. Rev. Lett. 124 117602
ilievski2017ballistic Ilievski E and De Nardis J 2017 Phys. Rev. B 96 081118(R)
ilievski2018superdiffusion Ilievski E, De Nardis J, Medenjak M and Prosen T 2018 Phys. Rev. Lett. 121 230602
guardado2020subdiffusion Guardado-Sanchez E, Morningstar A, Spar B M, Brown P T, Huse D A, and Bakr W S 2020 Phys. Rev. X 10 011042
bertini2021finite Bertini B, Heidrich-Meisner F, Karrasch C, Prosen T, Steinigeweg R and Žnidarič M 2021 Rev. Mod. Phys. 93 025003
moeckel2008interaction Moeckel M and Kehrein S 2008 Phys. Rev. Lett. 100 175702
eckstein2009thermalization Eckstein M, Kollar M and Werner P 2009 Phys. Rev. Lett. 103 056403
schlunzen2017nonequilibrium Schlünzen N, Joost J-P, Heidrich-Meisner F and Bonitz M 2017 Phys. Rev. B 95 165139
landau1959theory Landau L 1959 Sov. Phys. JETP 8 70
baym2008landau Baym G and Pethick C 2008 Landau Fermi-Liquid Theory: Concepts and Applications (New York: Wiley)
tomonaga1950remarks Tomonaga S-i 1950 Prog. Theor. Phys. 5 544–69
luttinger1963exactly Luttinger J M 1963 J. Math. Phys. 4 1154–62
haldane1981luttinger Haldane F D M 1981 J. Phys. C: Solid State Phys. 14 2585
Scopa2022 Scopa S, Calabrese P and Piroli L 2022 Phys. Rev. B 106 134314
lorenz2002evidence Lorenz T, Hofmann M, Grüninger M, Freimuth A, Uhrig G S, Dumm M and Dressel M 2002 Nature 418 614–7
demler2002fractionalization Demler E, Nayak C, Kee H-Y, Kim Y B and Senthil T 2002 Phys. Rev. B 65 155103
kollath2005spin Kollath C, Schollwöck U and Zwerger W 2005 Phys. Rev. Lett. 95 176401
jompol2009probing Jompol Y, Ford C J B, Griffiths J P, Farrer I, Jones G A C, Anderson D, Ritchie D A, Silk T W and Schofield A J 2009 Science 325 597–601
schmidt2010spin Schmidt T L, Imambekov A and Glazman L I 2010 Phys. Rev. B 82 245104
ma2017angle Ma Y et al 2017 Nat. Commun. 8 14231
he2020emergence He F, Jiang Y-Z, Lin H-Q, Hulet R G, Pu H and Guan X-W 2020 Phys. Rev. Lett. 125 190401
senaratne2022spin Senaratne R, Cavazos-Cavazos D, Wang S, He F, Chang Y-T, Kafle A, Pu H, Guan X-W and Hulet R G 2022 Science 376 1305–8
Recati:PhysRevLett.90.020401 Recati A, Fedichev P O, Zwerger W and Zoller P 2003 Phys. Rev. Lett. 90 020401
voit1993charge Voit J 1993 J. Phys.: Condens. Matter 5 8305
Kim:1996 Kim C, Matsuura A Y, Shen Z-X, Motoyama N, Eisaki H, Uchida S, Tohyama T and Maekawa S 1996 Phys. Rev. Lett. 77 4054
auslaender2005spin Auslaender O M, Steinberg H, Yacoby A, Tserkovnyak Y, Halperin B I, Baldwin K W, Pfeiffer L N and West K W 2005 Science 308 88–92
Kim:2006 Kim B J et al 2006 Nat. Phys. 2 397–401
veness2016mobile Veness T and Essler F H L 2016 Phys. Rev. B 93 205101
cheianov2004nonunitary Cheianov V V and Zvonarev M B 2004 Phys. Rev. Lett. 92 176401
fiete2004green Fiete G A and Balents L 2004 Phys. Rev. Lett. 93 226401
fiete2007colloquium Fiete G A 2007 Rev. Mod. Phys. 79 801
fiete2005theory Fiete G A, Qian J, Tserkovnyak Y and Halperin B I 2005 Phys. Rev. B 72 045315
fiete2005transport Fiete G A, Le Hur K and Balents L 2005 Phys. Rev. B 72 125416
hew2008spin Hew W K, Thomas K J, Pepper M, Farrer I, Anderson D, Jones G A C and Ritchie D A 2008 Phys. Rev. Lett. 101 036801
meden1999nonuniversality Meden V 1999 Phys. Rev. B 60 4571
giamarchi2003quantum Giamarchi T 2003 Quantum Physics in One Dimension (Oxford: Oxford University Press)
miranda2003introduction Miranda E 2003 Braz. J. Phys. 33 3–35
Cavazos-Cavazos:2022 Cavazos-Cavazos D, Senaratne R, Kafle A and Hulet R G 2022 arXiv:2210.06306
giamarchi1991umklapp Giamarchi T 1991 Phys. Rev. B 44 2905
hubbard1963electron Hubbard J 1963 Proc. R. Soc. Lond. A 276 238–57
brinkman1970application Brinkman W F and Rice T M 1970 Phys. Rev. B 2 4302
han2016charge Han X J, Liu Y, Liu Z Y, Li X, Chen J, Liao H J, Xie Z Y, Normand B and Xiang T 2016 New J. Phys. 18 103004
luther1974backward Luther A and Emery V J 1974 Phys. Rev. Lett. 33 589
voit1998dynamical Voit J 1998 Eur. Phys. J. B 5 505–19
orgad2001spectral Orgad D 2001 Philos. Mag. B 81, 377–98
sciolla2013competition Sciolla B, Tokuno A, Uchino S, Barmettler P, Giamarchi T and Kollath C 2013 Phys. Rev. A 88 063629
paiva2010fermions Paiva T, Scalettar R, Randeria M and Trivedi N 2010 Phys. Rev. Lett. 104 066406
gorelik2012universal Gorelik E V, Rost D, Paiva T, Scalettar R, Klümper A and Blümer N 2012 Phys. Rev. A 85 061602(R)
werner2005interaction Werner F, Parcollet O, Georges A and Hassan S R 2005 Phys. Rev. Lett. 95 056401
dare2007interaction Daré A-M, Raymond L, Albinet G and Tremblay A-M S 2007 Phys. Rev. B 76 064402
Degu2000 Deguchi T, Essler F H L, Göhmann F, Klümper A, Korepin V E and Kusakabe K 2000 Phys. Rep. 331 197–281
YangCN:1969 Yang C N and Yang C P 1969 J. Math. Phys. 10 1115
klumper1996exact Klümper A and Bariev R Z 1996 Nucl. Phys. B 458 623–39
Batchelor:2007 Batchelor M T, Guan X-W, Oelkers N and Tsuboi Z 2007 Adv. Phys. 56 465–543
carmelo1991renormalized Carmelo J, Horsch P, Bares P A and Ovchinnikov A A 1991 Phys. Rev. B 44 9967
Ess1994a Essler F H L and Korepin V E 1994 Phys. Rev. Lett. 72 908
DMRG Feiguin A E and Heidrich-Meisner F 2007 Phys. Rev. B 76 220508(R); Lüscher A, Noack R M and Läuchli A M 2008 Phys. Rev. A 78 013637; Rizzi M, Polini M, Cazalilla M A, Bakhtiari M R, Tosi M P and Fazio R 2008 Phys. Rev. B 77 245105; Tezuka M and Ueda M 2008 Phys. Rev. Lett. 100 110403; Tezuka M and Ueda M 2010 New J. Phys. 12 055029
Penc1995 Penc K, Mila F and Shiba H 1995 Phys. Rev. Lett. 75 894
Krivnov1975 Krivnov V Ya and Ovchinnikov A A 1975 Zh. Eksp. Teor. Fiz. 67 1568–81
Woyna1983 Woynarovich F 1983 J. Phys. C: Solid State Phys. 16 6593
Bogoliubov1988 Bogoliubov N M and Korepin V E 1988 Mod. Phys. Lett. B 1 349; Bogoliubov N M and Korepin V E 1990 Theor. Math. Phys. 82 231–43
Woyna1991 Woynarovich F and Penc K 1991 Z. Physik B - Condensed Matter 85 269-80
Ess1994b Essler F H L and Korepin V E 1994 Nucl. Phys. B 426 505–33
Sacra1995 Sacramento P D 1995 J. Phys.: Condens. Matter 7 143
Cheng:2018A Cheng S, Yu Y-C, Batchelor M T and Guan X-W 2018 Phys. Rev. B 97 121111(R)
Cheng:2018B Cheng S, Jiang Y-Z, Yu Y-C, Batchelor M T and Guan X-W 2018 Nucl. Phys. B 929 353–76
Cheng:2018C Cheng S, Yu Y-C, Batchelor M T and Guan X-W 2018 Phys. Rev. B 97 125145
white1989numerical White S R, Scalapino D J, Sugar R L, Loh E Y, Gubernatis J E and Scalettar R T 1989 Phys. Rev. B 40 506
caffarel1994exact Caffarel M and Krauth W 1994 Phys. Rev. Lett. 72 1545
wang2020zero Wang Y, Esterlis I, Shi T, Cirac J I and Demler E 2020 Phys. Rev. Research 2 043258
timrov2021self Timrov I, Marzari N and Cococcioni M 2021 Phys. Rev. B 103 045141
senechal2000spectral Sénéchal D, Perez D and Pioro-Ladrière M 2000 Phys. Rev. Lett. 84 522
ovchinnikov1969excitation Ovchinnikov A A 1969 Sov. Phys. JETP 29
nocera2016magnetic Nocera A, Patel N D, Fernandez-Baca J, Dagotto E and Alvarez G 2016 Phys. Rev. B 94 205145
molter2014bound Mölter J, Barthel T, Schollwöck U and Alba V 2014 J. Stat. Mech. 2014 P10029
WangZ:2018 Wang Z et al 2018 Nature 554 219–23
kawakami1989thermodynamic Kawakami N, Usuki T and Okiji A 1989 Phys. Lett. A 137 287–90
usuki1990thermodynamic Usuki T, Kawakami N and Okiji A 1990 J. Phys. Soc. Jpn. 59 1357–65
takahashi1974low Takahashi M 1974 Prog. Theor. Phys. 52 103–14
Wilson:1975 Wilson K G 1975 Rev. Mod. Phys. 47 773
guan2013wilson Guan X-W, Yin X-G, Foerster A, Batchelor M T, Lee C-H and Lin H-Q 2013 Phys. Rev. Lett. 111 130401
yu2016dimensionless Yu Y-C, Chen Y-Y, Lin H-Q, Römer R A and Guan X-W 2016 Phys. Rev. B 94 195129
boehler1980experimental Boehler R and Ramakrishnan J 1980 J. Geophys. Res.: Solid Earth 85 6996–7002
kuchler2003divergence Küchler R et al 2003 Phys. Rev. Lett. 91 066405
Gruneisen-AdP-1908 Grüneisen E 1908 Ann. Phys. 331 211–216
yu2020gruneisen Yu Y-C, Zhang S and Guan X-W 2020 Phys. Rev. Research 2 043066
guan2013fermi Guan X-W, Batchelor M T and Lee C 2013 Rev. Mod. Phys. 85 1633
sachdev2011quantum Sachdev S 2011 Quantum Phase Transitions (Cambridge: Cambridge University Press)
hazzard2011techniques Hazzard K R A and Mueller E J 2011 Phys. Rev. A 84 013604
fukui1995haldane Fukui T and Kawakami N 1995 Phys. Rev. B 51 5239
vitoriano2009fractional Vitoriano C and Coutinho-Filho M D 2009 Phys. Rev. Lett. 102 146404
zhang2022interaction Zhang X B, Chen Y Y, Liu L X, Deng Y J and Guan X-W 2022 Natl. Sci. Rev. 9
continentino2017quantum Continentino M A 1994 Phys. Rep. 239 179–213
continentino1989critical Continentino M A, Japiassu G M and Troper A 1989 Phys. Rev. B 39 9734
continentino2005quantum Continentino M A 2005 Braz. J. Phys. 35 197
LPG:2022 Luo J-J, Pu H and Guan X-W 2023 Phys. Rev. B 107 L201103
mohankumar2007two Mohankumar N 2007 Comput. Phys. Commun. 176 665–9
campo2012thermal Campo Jr V L, Capelle K, Hooley C, Quintanilla J and Scarola V W 2012 Phys. Rev. A 85 033644
van2018competing van Loon E G C P, Rösner M, Schönhoff G, Katsnelson M I and Wehling T O 2018 npj Quant. Mater. 3 32
tan2008energetics Tan S 2008 Ann. Phys. 323 2952–70; Tan S 2008 Ann. Phys. 323 2987–90; Tan S 2008 Ann. Phys. 323 2971–86
ZhangSZ:2009 Zhang S and Leggett A J 2009 Phys. Rev. A 79 023601
Braaten:2009 Braaten E and Platter L 2009 Laser Phys. 19 550–3
Stewart:2010 Stewart J T, Gaebler J P, Drake T E and Jin D S 2010 Phys. Rev. Lett. 104 235301
Yu:2015 Yu Z, Thywissen J H and Zhang S 2015 Phys. Rev. Lett. 115 135304
Yoshida:2015 Yoshida S M and Ueda M 2015 Phys. Rev. Lett. 115 135303
Cui:2016 Cui X 2016 Phys. Rev. A 94 043636
chen2014critical Chen Y-Y, Jiang Y-Z, Guan X-W and Zhou Q 2014 Nat. Commun. 5 5140
giamarchi1997mott Giamarchi T 1997 Physica B: Condens. Matter 230 975–80
|
http://arxiv.org/abs/2307.03319v1
|
20230706222142
|
Covering Uncommon Ground: Gap-Focused Question Generation for Answer Assessment
|
[
"Roni Rabin",
"Alexandre Djerbetian",
"Roee Engelberg",
"Lidan Hackmon",
"Gal Elidan",
"Reut Tsarfaty",
"Amir Globerson"
] |
cs.CL
|
[
"cs.CL"
] |
Covering Uncommon Ground: Gap-Focused Question Generation for Answer Assessment
================================================================================
Human communication often involves information gaps between the interlocutors. For example, in an educational dialogue, a student often provides an answer that is incomplete, and there is a gap between this answer and the perfect one expected by the teacher. Successful dialogue then hinges on the teacher asking about this gap in an effective manner, thus creating a rich and interactive educational experience. We focus on the problem of generating such gap-focused questions (GFQs) automatically. We define the task, highlight key desired aspects of a good GFQ, and propose a model that satisfies these. Finally, we provide an evaluation by human annotators of our generated questions compared against human generated ones, demonstrating competitive performance.
§ INTRODUCTION
Natural language dialogues are often driven by information gaps. Formally, these are gaps between the epistemic states of the interlocutors. Namely, one knows something that the other does not, and the conversation revolves around reducing this gap. An important example is the education setting where teachers ask students questions, and receive answers that may be incomplete. With the expectation of what a complete answer should contain, the teacher then engages in a gap-focused dialogue to help the student to arrive at a complete answer. There are multiple other application settings of information gaps, including support-line bots, long-form Q&A, and automated fact checking.
The core challenge in this setting is how to
generate effective questions about the information gap.
In terms of formal semantics and pragmatics, this gap can be viewed as the complement of the common ground <cit.> held by the interlocutors.
Somewhat surprisingly, despite much work on dialogue learning <cit.> and question generation <cit.>,
little attention has been given to generating questions that focus on such information gaps.
The formal traditional approach to representing the dialogic information gap is via the set of propositions that are known to one side but not the other <cit.>. However, this set can be quite large, and it is also unclear how to turn these propositions into dialogue utterances. We propose an arguably more natural representation: a generated set of natural language questions whose answers represent the information that the teacher needs to ask about to reduce the gap. We call these gap-focused questions (GFQs). A key advantage of this representation is that the generated questions can be used directly in the teacher-student dialogue.
Given a complete teacher answer and a partial student answer, there are many questions that could be asked, but some are more natural than others. For example, consider the complete answer “A man is wearing a blue hat and a red shirt and is playing a guitar”, and a student response “There is a man playing the guitar”.
Two candidate questions could be “What color hat is the man wearing?” and “What is the man wearing?”. The second question is arguably more natural as it does not reveal information that is not in the teacher-student common ground, namely that a hat is being worn.
The above demonstrates some of the complexity of generating effective GFQs, and the need to rely on certain discourse desiderata.
In this work we define the GFQ challenge, a novel question generation task, and we detail the desired properties of the generated questions. Subsequently, we provide a model for GFQ generation that aims to satisfy these desiderata, and demonstrate its competitiveness via a task of generating questions to fill the gap between premises and hypotheses in a standard natural language inference (NLI) setup.
In designing desired properties for s, we take inspiration from theories of collaborative communication, and in particular Grice's maxims <cit.>. For example, the maxim of quantity states that speakers are economic and do not communicate what is already known. Thus, the teacher should not ask about what is already in the common ground with the student.
In the above example, this means not asking “What is the man playing?”.
We describe additional desiderata in <ref>.
Furthermore, according to Grice's maxim of quality, one tries to be truthful, and does not give information for which one lacks adequate evidence. In our setup this requires the teacher to ask only about facts that are known to them, e.g., do not ask “which song is the man singing?”.
To tackle the GFQ challenge, we show how general-purpose NLP models (question generation, question answering, and constituency parsing) can be used to generate GFQs that satisfy the discourse desiderata. See Figure <ref> for an outline of the process.
To assess our model, we consider pairs of texts that contain information gaps, and evaluate our ability to capture these gaps using GFQs. Such texts are readily available in NLI datasets, which contain pairs of a premise and an entailed hypothesis with less information. We consider the SNLI dataset <cit.>, and use human annotators to evaluate the merit of our approach relative to GFQs generated by humans.
Our contribution is three-fold. First, we propose the novel setup of gap-focused questions, a key element of a student-teacher discourse as well as other settings such as automated fact checking. Second, we identify desiderata inspired by conversational maxims, and provide a model for generating questions that satisfy them. Third, we demonstrate the merit of our model on an NLI dataset.
§ RELATED WORK
Natural dialogue is a key goal of modern NLP and, despite substantial progress, there is still a considerable difference between humans and models. In this work we focus on dialogues where the bot (teacher) knows more than the user (student), and the goal is to gradually decrease this knowledge gap via gap-focused follow-up questions.
Several works have focused on the problem of follow-up question generation in dialogues. However, to the best of our knowledge, none of these focus on information gaps as we do.
<cit.> introduce the problem of inquisitive question generation, where the goal is to generate questions about facts that are not in the text. This is not done in reference to a complete text, and is thus principally different from our goal. In fact, in our setting, an inquisitive question would typically be a bad GFQ, since it refers to information that is outside the knowledge of both teacher and student. Prior works considered a related task referred to as answer-agnostic question generation <cit.>, but with a focus on factual questions, whereas the inquisitive setting is broader.
Another class of follow-up questions are clarification ones <cit.>, which can also be viewed as a special case of inquisitive questions. Again, there is no reference to a complete text that defines the information gap. Finally, there are works on follow-up questions guided by rules, as in the SHARC dataset <cit.>.
Our setting is also related to the challenge of explainable NLI <cit.>, namely the task of explaining why a certain sentence entails another. The output can be viewed as a novel explanation mechanism of why the student text is entailed by the source text, as it explicitly refers to the gap between these texts.
Our work is inspired by novel uses of question generation models, particularly in the context of evaluating model consistency <cit.>. In these, question generation is used to find “LLM hallucinations” where the generated text is not grounded in a given reference text. Our task can be viewed as the inverse of the knowledge grounding task, and our particular focus is on the questions generated rather than just pointing to information gaps. An additional line of work in this vein is QA-based semantics, where text semantics are represented via a set of questions rather than a formal graph <cit.>.
§ CRITERIA FOR GAP-FOCUSED QUESTIONS
Given a complete source text T_c and a student text T_s, our goal is to construct a model that takes T_c and T_s as input and produces a set of one or more questions Q that ask about the information gap between T_c and T_s. If one takes the term “information gap” literally, there are many such possible questions (e.g., which word appears in T_c but not in T_s). In a natural language setting we are obviously interested in questions that are natural, that is, would likely be asked by a human who knows T_c and has heard the student description T_s. When defining the desiderata for the generated questions, we consider what knowledge is held by the teacher and the student and what information is inside and outside their common ground (see Figure <ref>). We next identify desired properties for the generated questions, followed by a description of our model for generating gap-focused questions that satisfy these desiderata.
The following desired properties of an effective GFQ are loosely based on collaborative communication concepts <cit.>:
* P1: Answerability: Only ask questions that can be answered based on the complete text T_c (areas A∪ B in Figure <ref>). This follows from Grice's maxim of relevance; speakers say things that are pertinent to the discussion.
* P2: Answers should not be in the common ground: If the student has already demonstrated knowing a fact in T_c, there is no reason to ask about it again. Namely, in Figure <ref>, we don't want to ask about information in B. This pertains to Grice's maxim of quantity; speakers are economic, they do not utter information beyond the bare minimum that is necessary to ask the question, and they will refrain from repeating already-known information.
* P3: Questions should only use information known to the user: The question itself should rely only on information in T_s and not in T_c. For example, if T_c is “A woman is wearing a blue hat” and T_s is “A woman is wearing something”, it is preferable not to ask “What color is the hat?” as it refers to information that did not appear in T_s (i.e., that the woman is wearing a hat). This is loosely related to the Grice maxim of manner, where one tries to be clear, brief, and orderly. If we were to ask questions using information unknown to the user (in area A in figure <ref>), we may introduce unnecessary details and obscurity into the discussion.[Note that in some cases this may only be partially possible and a “hint” must be provided in order to be able to phrase a grammatically correct and semantically sensible question.]
* P4: Do not be too specific. Asking about information that is too specific has two key shortcomings. First, it may obscure an information-gap that is broader and is of greater importance. Second, it is more likely the student will not be able to answer an overly specific detail. This again follows from Grice's maxim of manner.
§ THE GFQS GENERATION APPROACH
We next describe our modeling approach for the GFQ generation problem, with the goal of capturing the properties described above.
Before describing our s generation approach, we briefly outline the NLP components we rely on in the question generation process:
A question generation model G that, given an input text T and a span X⊂ T, generates questions about T whose answer is X.
For example:
Input: “The dog and cat are brown”; “brown”
Output: {“What color is the dog?”,“What color is the cat?”, “What color are the dog and cat?”}
A question answering model A, that takes as input a text T and a question Q about the text, and returns the answer or an indication that the question is unanswerable from the text.
For example:
Input: “The dog is brown”;“What color is the dog?”
Output: “brown”
Input: “The dog is brown”; “What is the dog's name?”
Output: “UNANSWERABLE”
A constituency parser P, that takes a text X, breaks it down into sub-phrases (constituents), and returns a parse tree.
Additional details about these components can be found in appendix <ref>.
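As a rough illustration (not the paper's exact setup), the three components can be instantiated with off-the-shelf tools as in the Python sketch below; the checkpoint names are assumed public stand-ins for the T5-xxl models described in the appendix, and spaCy noun chunks are used as a light-weight substitute for full constituency parsing:

import spacy
from transformers import pipeline

# G: question generation; this checkpoint expects the answer span wrapped
# in <hl> tokens inside the context (an assumption about its input format).
qg = pipeline("text2text-generation", model="valhalla/t5-base-qg-hl")

# A: question answering trained on SQuAD2.0, so "no answer" can be detected.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# P: spaCy pipeline; noun chunks serve as cheap constituent-like spans.
nlp = spacy.load("en_core_web_sm")

def answer(text, question):
    # Return the predicted answer, or "UNANSWERABLE" if the model abstains
    # (an empty answer string under handle_impossible_answer=True).
    out = qa(question=question, context=text, handle_impossible_answer=True)
    return out["answer"] or "UNANSWERABLE"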
We are now ready to describe our approach for generating s. The model generates an ordered set of possible follow-up questions via the following steps, which roughly correspond to the desired criteria described in <ref>:
Step 1: Generate answerable questions (P1). Using the constituency parser P, we extract the spans of all the constituents in the source text T_c, except for those spanning the entire sentence, and single-word spans containing functional elements (e.g., prepositions). For each span X⊂T_c, we use the question generation model G to generate a set of questions whose answer should be X, thus creating a set of questions that satisfy the answerability property. We denote this set Q_T and assign Q_G=Q_T.
Step 2: Filter questions whose answers are in the common ground (P2). We next wish to remove questions that are answerable by the student text T_s. To that end, we use the question answering model A, and for each q∈Q_G, if A(T_s, q) ≠ “UNANSWERABLE”, we set Q_G = Q_G∖{q}.[Note that Step 2 will also filter out questions that the student answered incorrectly. This would be an area for improvement in future models.]
Step 3: Prefer questions which only use information known to the user (P3). We prefer questions that do not reveal information beyond what is known to the user. This is not always strictly possible and thus, instead of filtering, we rank questions according to the (possibly zero) amount of additional information they reveal. To do so, let R be all the answers to the questions in Q_G. By construction, R contains spans from T_c that the student didn't mention, i.e. spans that we would prefer not to appear in the generated questions. For each q∈Q_G, we count the number of items in R included in q. We sort Q_G in ascending order by this number and return the first element. We thus return a question that uses the least number of facts unknown to the student.
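Putting the three steps together, a compact Python sketch (reusing the hypothetical qg, answer and nlp components from the previous snippet) might look as follows; the span extraction and the leakage count are simplifications of the constituent-based procedure above:

def gfq(t_c, t_s, qg, answer, nlp):
    # Step 1 (P1): questions answerable from the source text t_c.
    spans = [c.text for c in nlp(t_c).noun_chunks]          # candidate answers
    candidates = []
    for span in spans:
        hl = t_c.replace(span, f"<hl> {span} <hl>", 1)      # highlight format
        q = qg(f"generate question: {hl}")[0]["generated_text"]
        candidates.append((q, span))

    # Step 2 (P2): drop questions already answerable from the student text t_s.
    kept = [(q, a) for q, a in candidates if answer(t_s, q) == "UNANSWERABLE"]
    if not kept:
        return None

    # Step 3 (P3): prefer the question leaking the fewest source-only spans.
    leaked = {a for _, a in kept}                           # the set R
    def leak_count(item):
        q, _ = item
        return sum(a.lower() in q.lower() for a in leaked)
    return min(kept, key=leak_count)[0]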
§ EXPERIMENTS
We next describe an evaluation of our model.
Data: We use the SNLI Dataset <cit.> where a Natural language inference (NLI) pair contains two sentences denoting a premise and a hypothesis, and the relation between them can be entailment, contradiction and neutral.
We focus on pairs labeled as entailment and filter out those with bi-directional entailment, so that there is a gap between hypothesis and premise.
The bi-directional entailments are filtered using an NLI model (similar to <cit.>). In the resulting set of one-directional entailments, the information in the premise (T_c) is strictly greater than the information in the hypothesis (T_s).
We do not use any data for training, and apply our model to the test partition of the SNLI dataset.
Evaluation Benchmark:
In order to compare the quality of our automatically generated questions to manually generated ones, we asked human annotators to generate questions for 200 instances of the SNLI test set (see Appendix <ref> for the annotator instructions). We emphasize that these questions were only used for evaluation, as explained below, and not for training the model. They were collected after model design was completed. We release this evaluation dataset to the public; it is available at https://storage.googleapis.com/gresearch/gap-focused-questions/data.zip. See additional details about this dataset in appendix <ref>.
Annotator Evaluation of Generated Questions: As with other generative settings, offline evaluation is challenging. In fact, even if we had human generated questions for all of SNLI, using those for evaluation would need to assume that they are exhaustive (otherwise the model can generate a good question but be penalized because it is not in the set generated by humans). Instead, as is commonly done <cit.>, we rely on human evaluation. We present annotators with T_c, T_s and a candidate question q, and ask them to provide a 1-5 score of how well q functions as a follow-up question (see Appendix <ref> for the annotator instructions). We use 3 annotators per question.
Compared Models:
We compare four generation approaches:
Human: Questions generated by human annotators;
Step 1: This model selects a random question out of those generated by the question generation model (i.e., Step 1 in <ref>). We note that this is already a strong baseline because its questions are based on the source text.
Step 2: The outcome of Step 2 in <ref> where only questions not answerable by the student text are kept. Step 3: The outcome of Step 3, where we additionally aim for questions which use information known to the user.
Results: Table <ref> provides the average scores for each of the considered models and the human generated questions. It can be seen that each step contributes to the score, and human generated questions are somewhat better than our final model (Step 3). Using the Wilcoxon signed-rank test for paired differences, we found that all differences were significant at p-value ≤ 0.05.
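The paired significance test used above is available off the shelf in SciPy; the following toy snippet illustrates the call, with invented per-item score arrays rather than the study's data:

from scipy.stats import wilcoxon

# Per-item mean scores for two methods (invented numbers, for illustration).
step3_scores = [4.3, 3.7, 4.0, 4.3, 3.3, 4.3, 4.0, 3.7]
human_scores = [4.7, 4.0, 4.3, 4.7, 3.7, 4.4, 4.3, 4.0]
stat, p = wilcoxon(step3_scores, human_scores)  # paired, non-parametric
print(f"W={stat:.1f}, p={p:.4f}")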
Examples: Figure <ref> shows an example of the three stages, and a human generated question. Appendix <ref> provides more examples.
Error Analysis: We analyze cases where our final model (Step 3) received low scores from the annotators (an average score of 3 and lower). In our analysis we have observed three main loss patterns (sometimes appearing together): (1) Poor question phrasing — these are questions whose structure or choice of words is less natural than if a person were to ask the same question. See example in the first row in Table <ref>. (2) Questions which include information outside of the teacher-student common ground. These are cases where the minimum criterion defined in Step 3 still results in a question with some information unknown to the user. See examples in the first 2 rows in Table <ref>. (3) Questions including information outside the complete source text. In rare cases we have found that the question generation model generates questions that include “hallucinations” or point to issues in the semantic understanding of the complete source text. See the third example in Table <ref>.
§ CONCLUSION
We consider the task of question generation in a novel setting where there is an information gap between speakers, and the gap-focused questions (GFQs) aim to reduce this gap. Building on advances in question generation and question answering, we show how to generate useful GFQs that meet several natural criteria inspired by theories of cooperative conversation.
It is natural to ask whether one can employ a fully generative approach for GFQs using LLMs. This is a natural direction for future study, and we believe that the criteria and design choices we studied here will be significant in defining and evaluating such future work.
§ LIMITATIONS
We present the first study of generating questions for filling in information gaps. Our method is limited in several ways. First, it focuses on information that is explicitly missing, and does not discuss information that is inaccurate or incomplete in other ways. Second, it only asks one follow-up question and does not address multi-turn dialogue about a student answer, or multiple student answers. Finally, our approach makes somewhat restricted use of the student answer, and it will be better to generate questions that directly uptake information from the student text <cit.>. We leave the deep investigation of these for future work.
§ ACKNOWLEDGMENTS
We thank Avi Caciularu for constructive feedback on this work.
§ ETHICS AND IMPACT
Regarding risks, as with any NLP model, care must be taken in application, so that it generates truthful information, and does not introduce biases. However, we think this is not a major concern in our case as our modeling will generate text directly related to the source and student texts. In terms of impact, our approach can be used to improve a wide array of applications, including educational dialogue (e.g., reading comprehension), support-line bots, and automated fact checking.
acl_natbib
§ ANNOTATING GUIDELINES
Here we provide all the guidelines to annotators, for both human question generation
and human rating of questions generated by the model.
Guidelines for the human annotator task of writing follow-up questions:
We depict the guidelines and the examples for the writing follow-up questions task in Figure <ref>, and the task design in Figure <ref>.
Guidelines for the human annotator task of rating follow-up questions:
We depict the guidelines of the task of rating the follow-up questions in Figure <ref>, the examples in Figure <ref>, and the task design in Figure <ref>.
§ ANNOTATOR RELATED INFORMATION
Annotators were paid by the hour, and recruited as contractors for a variety of annotating projects by our team and related teams. The annotators are all native English speakers (from Canada and the US). They are also aware of the way in
which the information will be used. There are no special ethical sensitivities in the collection process and thus it was exempt from an ethics review board.
§ IMPLEMENTATION DETAILS
Question Generation Model: As our question generation model G, we use the T5-xxl model <cit.> fine-tuned on SQuAD1.1
<cit.>. We also use beam search and question filtering, similarly to <cit.>, see this work for further details.
Question Answering Model: For our question answering model A, we use the T5-xxl model <cit.> fine-tuned on SQuAD2.0 <cit.>.
Constituency Parser: We use the Berkeley Neural Parser <cit.>, implemented in the spaCy package.[We used spaCy3.0 – <https://spacy.io/>.]
SNLI Filtering: We consider the subset of SNLI with an “entailed” label.
Since we are not interested in the case of equivalent hypothesis and premise, we filter out bi-directional entailments using an NLI model (similar to <cit.>). In the resulting set of one-directional entailments, the information in the premise (T_c) is strictly greater than the information in the hypothesis (T_s), which is our case of interest.
§ COMPUTATIONAL RESOURCES DETAILS
In terms of computational resources, the project is lightweight, as it required no training at all, and just running inference steps of pre-trained models (question answering, question generation and parsing), all of which run in several minutes on standard GPUs.
§ GFQ TEST RELEASED DATASET
We release a benchmarking dataset of 200 examples from the SNLI test set with a human generated gap-focused question. The data is available at https://storage.googleapis.com/gresearch/gap-focused-questions/data.zip.
Details about the dataset
We asked 3 annotators to write questions for each SNLI pair (see the guidelines in appendix <ref>) and used a heuristic to select a single GFQ. When selecting this single question, our goal is to prefer GFQs where multiple annotators chose to write a question about the same topic. We therefore apply the following heuristic: for each human-written question q, we use our question answering model A and define a as the answer to this question given T_c: a = A(T_c, q). We then count n, the number of annotators whose questions lead to the same answer a; we consider the questions for which n is maximal and choose a random question among them.
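In Python, this heuristic can be sketched as follows, assuming the QA model A is wrapped as a function answer(text, question); the helper name select_gfq is ours:

import random
from collections import Counter

def select_gfq(t_c, questions, answer):
    # a = A(T_c, q): the answer each annotator question points at.
    ans = {q: answer(t_c, q) for q in questions}
    counts = Counter(ans.values())          # n: annotators leading to answer a
    best = max(counts.values())
    pool = [q for q in questions if counts[ans[q]] == best]
    return random.choice(pool)              # random pick among maximal-n items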
License This data as well as the underlying SNLI data are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License [http://creativecommons.org/licenses/by-sa/4.0/].
§ EXAMPLES OF GENERATED QUESTIONS
Here we provide examples of questions generated by humans and by the different models we consider.
Table <ref> reports questions generated by Step 1, Step 2, Step 3 and Human.
§ DATA RELATED INFORMATION
The data collected from annotators contains the manually generated questions and the scoring of generated questions. There are no issues of offensive content or privacy in this data, as it based closely on the SNLI dataset.
|
http://arxiv.org/abs/2307.01719v2
|
20230704134552
|
MOPO-LSI: A User Guide
|
[
"Yong Zheng",
"Kumar Neelotpal Shukla",
"Jasmine Xu",
"David",
"Wang",
"Michael O'Leary"
] |
q-fin.PM
|
[
"q-fin.PM",
"cs.AI",
"cs.LG"
] |
^1Illinois Institute of Technology, Chicago, IL, USA 60616
^2Morningstar, Inc., Chicago, IL, USA 60602
Corresponding Email:
MOPO-LSI: A User Guide
Yong Zheng^1, Kumar Neelotpal Shukla^2, Jasmine Xu^2, David (Xuejun) Wang^2, Michael O'Leary^2
August 1, 2023
==================================================================================================
MOPO-LSI is an open-source Multi-Objective Portfolio Optimization Library for Sustainable Investments. This document provides a user guide for MOPO-LSI version 1.0, including problem setup, workflow and the hyper-parameters in configurations.
§ INTRODUCTION
Several studies <cit.> have examined financial portfolio optimization, which involves selecting financial assets in a meticulous manner to align with an investor's goals while considering their tolerance for risk. Asset allocation, a crucial aspect of this process, entails distributing an investor's portfolio across various asset classes (such as stocks, bonds, and cash) based on their risk tolerance and investment objectives. The main objective of traditional portfolio optimization is to construct a portfolio that either maximizes expected returns at a specific risk level or minimizes risk for a given level of expected returns.
Multiple studies <cit.> have highlighted the increasing interest in sustainable investments. Investors are recognizing the importance of incorporating Environmental, Social, and Governance (ESG) factors <cit.> into their decision-making processes. Sustainable investments, also known as socially responsible investing <cit.> or ESG investing <cit.>, involve an investment approach that goes beyond financial returns and considers the impact of investments on the environment, society, and corporate governance practices. This trend signifies a growing awareness among investors about the broader implications of their investment choices.
MOPO-LSI <cit.> is an open-source Multi-Objective Portfolio Optimization Library for Sustainable Investments. It was specifically designed and developed for sustainable investments in mutual funds using multi-objective optimization (MOO). By integrating ESG factors into portfolio optimization, MOPO-LSI aims to generate long-term financial returns while promoting sustainable and responsible practices. This document provides a user guide for MOPO-LSI version 1.0, including the problem setup, hyper-parameters and examples of the outputs.
§ MOPO-LSI: PROBLEM SETUP
The MOPO-LSI library facilitates asset allocations for a collection of mutual funds, where the dataset provides ESG scores for each mutual fund. These ESG scores are categorized as positive and negative factors. Positive ESG factors encompass aspects beneficial to society, such as clean energy, well-being, and public health. On the other hand, negative ESG variables are associated with harm, such as environmental pollution, regulatory violations, and human rights abuses. In MOPO-LSI, the primary objectives involve constructing a financial portfolio that maximizes positive ESG scores, minimizes negative ESG scores, and manages tracking error. The tracking error quantifies the deviation from standard benchmarks at a specific risk level (e.g., aggressive, moderate, conservative). The risk level serves as a comprehensive factor that captures risks, returns, and volatility. Maintaining the tracking error within a specified range ensures that the portfolio remains within the desired risk level chosen by the user (e.g., aggressive, moderate, conservative).
MOPO-LSI offers portfolio solutions in two scenarios: when client preferences are known and when client preferences are unknown. In the first scenario, client preferences are represented by weights assigned to different objectives. When these preferences are known, the optimization task can be transformed into a single-objective optimization by employing the weighted sum method, as demonstrated in Equation <ref>.
max (POS_s - NEG_s - p_m× TE)
In the equation above, s refers to a portfolio solution, and TE denotes the tracking error (as shown by Equation <ref>), where p_m refers to the client preference, or weight, on the tracking error. POS_s and NEG_s represent the weighted sums of ESG scores over the positive and negative ESG dimensions, respectively, where the weights refer to client preferences on the ESG factors.
POS_s = ∑_i∈ ESG+p_i× Score_i/∑_i∈ ESG+p_i
NEG_s = ∑_j∈ ESG-p_j× Score_j/∑_j∈ ESG-p_j
In the equations above, i and j are used to denote a positive and negative ESG factor, respectively. p_i and p_j refer to client preferences on dimension i and j. The score shown in Equation <ref> describes the calculation of ESG score on a single ESG factor or dimension, where w is the vector of fund weights or allocations – that's the parameter or solution we would like to learn. E denotes the ESG matrix, where each row is a mutual fund, and each column denotes an ESG factor. E^i is used to represent the i^th column in E.
Score_i=w · E^i/|w|
TE = (w-b)^TV(w-b),
where b denotes the benchmark allocation vector associated with the selected risk level and V is the covariance matrix of the funds.
In addition to the setup of the objectives, we also assign multiple constraints in order to obtain more feasible and practical solutions:
* The summation of fund allocations should be close to 1.
* Tracking error must be smaller than a pre-defined threshold which can be easily configured in the library.
* Positive ESG scores in the solution should be no smaller than the ones in the benchmark, and negative ESG scores in the solution should be no larger than the ones in the benchmark. The benchmark is determined by a selected risk level.
* The portfolio should be diverse, e.g., we cannot assign most of the assets to the same type of mutual fund. To enforce this, we set a threshold for each asset class, and this threshold can easily be defined in the configuration file of the library.
By combining the objectives and constraints above, we are able to utilize the embedded conic solver (ECOS) <cit.> in the CVXPY <cit.> library to solve the quadratic convex optimization problem in our library. The output in this scenario will be a single optimal solution.
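To make the scalarized formulation concrete, the following is a minimal CVXPY sketch of the weighted-sum problem under the constraints above; all data and thresholds are invented toy values, and the variable names are ours rather than MOPO-LSI's internal API. Since the constraint sum(w)=1 is imposed, the per-dimension score w·E^i/|w| reduces to a linear expression in w:

import cvxpy as cp
import numpy as np

n = 50
rng = np.random.default_rng(0)
E_pos = rng.random((n, 3))              # positive ESG scores per fund (toy)
E_neg = rng.random((n, 2))              # negative ESG scores per fund (toy)
V = np.eye(n) * 0.01                    # covariance matrix (toy)
b = np.full(n, 1.0 / n)                 # benchmark allocation for a risk level
p_pos = np.array([0.5, 0.3, 0.2])       # client preferences over ESG+
p_neg = np.array([0.6, 0.4])            # client preferences over ESG-
p_m, te_max = 1.0, 0.02                 # TE weight and threshold

w = cp.Variable(n, nonneg=True)
POS = (E_pos @ (p_pos / p_pos.sum())) @ w     # preference-weighted ESG+ score
NEG = (E_neg @ (p_neg / p_neg.sum())) @ w     # preference-weighted ESG- score
TE = cp.quad_form(w - b, V)                   # (w-b)^T V (w-b)

constraints = [cp.sum(w) == 1,          # allocations sum to one
               TE <= te_max,            # stay within the chosen risk level
               w <= 0.2]                # crude diversity cap (illustrative)
prob = cp.Problem(cp.Maximize(POS - NEG - p_m * TE), constraints)
prob.solve(solver=cp.ECOS)              # requires the ecos package
print(w.value.round(4))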
Alternatively, in cases where client preferences on objectives are not known in advance, we employ various multi-objective evolutionary algorithms (MOEAs)<cit.> from the Pymoo<cit.> library to optimize specific goals, which include maximizing positive ESG scores, minimizing negative ESG scores, and tracking error. However, when the number of ESG dimensions increases, MOEAs might require more time to find optimal solutions. To assist clients in identifying their desired solutions more effectively, we allow them to specify individual positive and/or negative ESG factors that they wish to maximize or minimize. This approach enables us to impose constraints on the selected ESG dimensions and ensure that the solutions achieve better ESG scores in those dimensions compared to the benchmark. MOEAs generate a Pareto set, which comprises non-dominated solutions. With MOPO, users can provide additional inputs to employ multi-criteria decision-making methods <cit.> and select a single optimal solution from the Pareto set. Furthermore, we offer 2D/3D visualizations to facilitate user observation and understanding of the chosen solution.
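For the preference-free scenario, a rough pymoo sketch of such an MOEA run (assuming pymoo >= 0.6 for the n_ieq_constr keyword) is given below; the problem encoding, the per-fund cap and all names are our illustrative assumptions, not MOPO-LSI's internal implementation:

import numpy as np
from pymoo.core.problem import Problem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class ESGPortfolio(Problem):
    # Three objectives: maximize POS (encoded as -POS), minimize NEG and TE.
    def __init__(self, e_pos, e_neg, V, b, te_max):
        super().__init__(n_var=len(b), n_obj=3, n_ieq_constr=2, xl=0.0, xu=1.0)
        self.e_pos, self.e_neg, self.V, self.b = e_pos, e_neg, V, b
        self.te_max = te_max

    def _evaluate(self, X, out, *args, **kwargs):
        W = X / X.sum(axis=1, keepdims=True)        # enforce sum-to-one
        pos, neg = W @ self.e_pos, W @ self.e_neg
        D = W - self.b
        te = np.einsum("ij,jk,ik->i", D, self.V, D) # (w-b)^T V (w-b) per row
        out["F"] = np.column_stack([-pos, neg, te])
        out["G"] = np.column_stack([te - self.te_max,       # TE threshold
                                    W.max(axis=1) - 0.2])   # diversity cap

n, rng = 50, np.random.default_rng(0)
prob = ESGPortfolio(rng.random(n), rng.random(n), np.eye(n) * 0.01,
                    np.full(n, 1.0 / n), te_max=0.02)
res = minimize(prob, NSGA2(pop_size=100), ("n_gen", 200), seed=1, verbose=False)
print(res.F.shape)   # rows of the Pareto set: (-POS, NEG, TE) trade-offs

Each row of res.F is one non-dominated portfolio, from which a single solution can then be chosen with a multi-criteria decision-making method, as described above.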
§ MOPO-LSI: WORKFLOW
The workflow of MOPO-LSI is depicted in Figure 1 and described as follows.
* Configurations. At the beginning, users should define the hyperparameters in the various configuration (yaml) files. For details on these parameters, refer to the user guide in the GitHub repository.
* Run MOPO-LSI. A user can start running the library by using "python run.py" or "python run.py --config your.yaml". The default yaml to start running is the user.yaml file.
* Load system configurations. The library will first load system configurations defined by yaml files in the folder "yaml", including system.yaml, scalarization.yaml, moea.yaml. The file "system.yaml" defines file paths, output path, risk levels (i.e., conservative, moderate, aggressive), ESG groups and dimensions, the optimization model to be used, and the general parameters for system constraints. The files, "scalarization.yaml" and "moea.yaml", define the MOO problems and corresponding hyperparameters related to the optimizers.
* Load user configurations. The file, "user.yaml", is loaded, where the library will read user inputs, such as user option on the risk level, user preferences, and so on.
* Load data. The library will then load the data sets indicated in "system.yaml", including the list of funds with their ESG scores, the covariance matrix and standard asset allocations for each risk level.
* Initialize optimizer. The library initializes the optimizer by using the parameter "model" defined in "system.yaml". The MOO problem defined in corresponding yaml, either "scalarization.yaml" or "moea.yaml", will also be set up in this stage. A MOO problem defines the list of objectives and constraints to be considered in the optimization process.
* Run optimization. The library will run the optimization by using the corresponding MOO algorithms (e.g., NSGA2) or optimization solvers (e.g., ECOS for the weighted-sum method).
* Get optimization results. The scalarization methods can return a single optimal solution directly, while the MOEAs will return a Pareto set which is a set of non-dominated solutions. There are several selection methods built in the library to help select a single optimal solution from the Pareto set.
* Analysis of the solutions. Given a single optimal solution, the library can compare it with the standard benchmark (i.e., a non-ESG-optimized solution associated with a specific risk level), output evaluation metrics (i.e., tracking error and the improvement ratio on each ESG factor), and finally return the top-N mutual funds for investment. An example of the outputs produced using the sample data and the weighted-sum method in MOPO-LSI can be observed in Figure 2, where the gain values refer to the weighted average of the improvement ratios over each ESG group (i.e., positive and negative ESG). For the analysis of the Pareto set, the library has the option to visualize the non-dominated solutions and hypervolumes.
§ USER'S GUIDE
This section provides specific instructions on how to use, deploy, and evaluate the MOPO-LSI library.
§.§ Data Format
A sample data set is provided in the "data/SampleData" folder. Any other data set should follow the format of the following three data files:
* funds.csv refers to a list of mutual funds as candidates in portfolio optimization. Each row is a mutual fund with columns as ESG dimensions.
* cov_matrix.csv refers to the covariance matrix of the mutual funds.
* asset_allocation.csv describes benchmark distributions over six asset classes.
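Assuming the default paths, the sample data can be inspected with a few lines of pandas (whether the CSVs carry index columns is an assumption to verify against the shipped files):

import pandas as pd

funds = pd.read_csv("data/SampleData/funds.csv")             # one fund per row, ESG columns
cov = pd.read_csv("data/SampleData/cov_matrix.csv")          # fund covariance matrix
alloc = pd.read_csv("data/SampleData/asset_allocation.csv")  # benchmark per risk level
print(funds.shape, cov.shape, alloc.shape)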
§.§ Experimental Configuration
There are several yaml files which store the configurations.
* System.yaml defines file paths, output path, risk levels (i.e., conservative, moderate, aggressive), ESG groups and dimensions, the optimization model to be used, as well as the general parameters for system constraints.
* The files, "scalarization.yaml" and "moea.yaml", define the MOO problems and corresponding hyperparameters related to the optimizers.
* The file, "user.yaml", defines user inputs, such as user option on the risk level, user preferences, and so on.
More details about these configurations can be found below.
§.§.§ User.yaml
* client_option: user should declare a specific risk level, i.e., conservative, moderate, or aggressive.
* client_preferences: user preferences (for scalarization only) organized for each ESG group along with the strength information (e.g., high, moderate or low). The ESG groups or dimensions with a "high" strength will be assigned the portfolio-level ESG constraints.
* preloaded_moea_results: this option is used for MOEAs only. The non-dominated solutions can be serialized and stored in an external *.pkl file. This pkl file can be loaded into this parameter, so that the library can load these non-dominated solutions directly, and re-select a single optimal solution according to the updated user preferences defined in client_preferences_moea.
* client_pos_esg & client_neg_esg: users can declare specific ESG dimensions that are of importance to them. These dimensions will be assigned the portfolio-level ESG constraints.
* client_preferences_moea: simulated client preferences to help select a single optimal solution from Pareto set.
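For illustration, a hypothetical user.yaml could look as follows; the ESG group and dimension names here are invented, so check the sample file shipped with the library for the exact keys and accepted values:

client_option: Moderate             # Conservative / Moderate / Aggressive
client_preferences:                 # scalarization only; strength per ESG group
  environment: high                 # "high" triggers a portfolio-level constraint
  social: moderate
  governance: low
client_pos_esg: [climate_action]    # PosESG dimensions to constrain explicitly
client_neg_esg: [fossil_fuel]       # NegESG dimensions to constrain explicitly
preloaded_moea_results: null        # or a path to a previously stored *.pkl
client_preferences_moea:            # used to pick one solution from the Pareto set
  pos_esg: 0.6
  neg_esg: 0.4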
§.§.§ yaml/Scalarization.yaml
* scalar_problem_name: a pre-defined MOO problem
* objectives_pos_neg: [True, True], turn on or off optimization for PosESG, NegESG
* ecos_eps: tolerance threshold in the ECOS solver
* ecos_max_iter: maximal number of learning iterations in the ECOS solver
* ecos_verbose: turn on or off the intermediate outputs from the optimizer
§.§.§ yaml/MOEA.yaml
* moea_problem_name: a pre-defined MOO problem
* n_threads: number of threads used for multi-threading processing
* n_population: population size
* n_offsprings: number of offsprings
* n_generations: number of generations
* moea_verbose: turn on or off the intermediate outputs from the optimizer
* optimal_selection: ASF or PW
* output_visualizations: turn on or off visualization of the single optimal solution
* output_visualize_dims: select two dimensions for visualization, e.g., ['pos_esg', 'neg_esg'], you can also add specific ESG dimensions.
* output_hypervolume_visualizations: turn on or off the visualization of hypervolumes. The system runs significantly more slowly if this is turned on.
§.§.§ yaml/System.yaml
Inputs and Outputs:
* seed: seed to be used in random process
* model: an algorithm or solution to run, options: WeightedSum, NSGA2, SMSEMOA, AGEMOEA2
* dataset: assign the data set name which also refers to the subfolder name in folder "data"
* data_funds: funds.csv, the list of funds and ESG scores
* data_cov_matrix: cov_matrix.csv, the covariance matrix
* data_bmk: asset_allocation.csv, the standard asset allocations
* path_outputs: the output folder where the logs, solutions, visualizations, pkl files will be stored
* output_analysis: turn on or off the output analysis, e.g., the output of gains and top-N investments
* output_funds_top_n: the value of N for top-N investments in the outputs
General arguments:
* options: the risk levels, ['Conservative', 'Moderate', 'Aggressive']
* pos_esg_dims: a list of PosESG dimensions, format: dim:category
* neg_esg_dims: a list of NegESG dimensions, format: dim:category
Constraint arguments:
* weight_init_up_lim: the upper bound for maximal fund allocations
* weight_init_low_lim: the lower bound for maximal fund allocations
* weight_final_low_lim: the threshold below which the library drops and adjusts fund allocations. For example, if it is set to 0.001, any funds with allocations smaller than 0.001 will be set to zero in the final solution, and the remaining allocations will be re-normalized so that they sum to 1.
* TE_cap: the cap for tracking errors, not the squared errors
* adj_ratio: TE_cap can be updated to TE_cap / adj_ratio
* esg_norm_up_lim: the upper bound for ESG normalization; it defaults to TE**2 if set ≤ 0
* esg_norm_low_lim: the lower bound for ESG normalization, such as 1e-100
* dev_asset_alloc: deviation cap from benchmark allocations for each asset class
§ ACKNOWLEDGEMENT
This work was supported and funded by Morningstar, Inc. at Chicago, Illinois, USA, under grant agreement IIT-Cayuse: Grant No. 22-0300, Project No. A23-0011.
|
http://arxiv.org/abs/2307.00680v1
|
20230702225258
|
CLIMAX: An exploration of Classifier-Based Contrastive Explanations
|
[
"Praharsh Nanavati",
"Ranjitha Prasad"
] |
cs.LG
|
[
"cs.LG"
] |
[A] Praharsh Nanavati (Corresponding Author, Email: praharsh19@iiserb.ac.in)
[B] Ranjitha Prasad
[A] Indian Institute of Science Education and Research, Bhopal
[B] Indraprastha Institute of Information Technology, Delhi
Explainable AI is an evolving area that deals with understanding the decision making of machine learning models so that these models are more transparent, accountable, and understandable for humans. In particular, post-hoc model-agnostic interpretable AI techniques explain the decisions of a black-box ML model for a single instance locally, without knowledge of the intrinsic nature of the ML model. Despite their simplicity and capability in providing valuable insights, existing approaches fail to deliver consistent and reliable explanations. Moreover, in the context of black-box classifiers, existing approaches justify the predicted class, but do not ensure that the explanation scores strongly differ from those of another class. In this work we propose a novel post-hoc model-agnostic XAI technique that provides contrastive explanations justifying the classification of a black-box classifier, along with a reasoning as to why another class was not predicted. Our method, which we refer to as CLIMAX, short for Contrastive Label-aware Influence-based Model Agnostic XAI, is based on local classifiers. In order to ensure model fidelity of the explainer, we require the perturbations to lead to a class-balanced surrogate dataset. Towards this, we employ a label-aware surrogate data generation method based on random oversampling and Gaussian mixture model sampling. Further, we propose influence subsampling in order to retain effective samples and hence ensure sample complexity. We show that we achieve better consistency as compared to baselines such as LIME, BayLIME, and SLIME. We also depict results on textual and image based datasets, where we generate contrastive explanations for any black-box classification model where one is able to only query the class probabilities for an instance of interest.
§ INTRODUCTION
As AI technology is increasingly deployed, especially in safety-critical domains, it has become necessary for ML models to be interpretable and trustworthy while being accurate. Trustworthiness of an AI system is possible only if the target users understand the how and why of ML model predictions. Interpretability is also essential owing to the severe biases that are induced in the decision-making process of deep neural networks (DNNs) when subject to adversaries <cit.>. Governments across the world have introduced regulations towards the ethical use of AI. For instance, the General Data Protection Regulation (GDPR) passed in Europe requires businesses to provide understandable justifications to their users for decisions of AI systems that directly affect them <cit.>.
Popular categorizations of existing XAI methods are based on whether the models are local <cit.> or global <cit.>, model-agnostic <cit.> or model-specific <cit.>, in-hoc or post-hoc, perturbation-based or saliency-based <cit.>, and concept-based or feature-based <cit.>.
The simplest among them are the well-established post-hoc, perturbation-based techniques such as LIME <cit.> and KernelSHAP <cit.>. Perturbation-based post-hoc explainers offer a model-agnostic means of interpreting black-box ML models while requiring query-level access for a single instance. These methods define a data generating process to obtain weighted perturbations (surrogate data) in the neighborhood of the index sample, and subsequently employ an easy-to-explain linear regression model to obtain per-feature importance weights. Despite the widespread usage of these techniques, subsequent works have pointed out various issues. For instance, LIME leads to inconsistent explanations on a given sample <cit.>, which hampers its use in safety-critical systems. Although KernelSHAP partially counters the stability issue, it employs training data for explanations. More importantly, these methods use feature attribution to explain the prediction of a black-box model and do not produce contrastive explanations.
More recently, contrastive <cit.> and counterfactual approaches <cit.> have been proposed. The goal of a contrastive explanation is not only to justify the output class of an input, but also to indicate what should be absent to maintain the original classification, while counterfactual explanations specify the necessary minimal changes in the input so that an alternate output is obtained. In this work, we are interested in a label-aware, post-hoc technique for providing model-agnostic contrastive explanations in the locality of a given instance (which we refer to as the index sample).
Studies in philosophy and social science point out that, in general, humans prefer contrastive explanations <cit.>. Let us suppose that the predicted class of the black-box model for the i-th instance is c_i, and the alternative class label is c_-i. Here, answering the question Why c_i? leads to just explaining the predicted class, as done in most post-hoc model-agnostic techniques such as LIME, BayLIME, Unravel and KernelSHAP. However, it is natural to seek a contrastive explanation, where queries are of the form why c_i and not c_-i? As pointed out in <cit.>,
contrastive explanations highlight what is minimally sufficient in an input to justify its classification, and identify contrastive features that should be minimally present and critically absent to distinguish it from another input that is seemingly close but would be classified differently. Most of the available contrastive explainers require the original training samples, are model-aware, or use complex data generation procedures, which leads to opacity in the explainer models <cit.>.
Alternately, we propose a contrastive explainer which is model-agnostic and perturbation-based. In the context of a classification-based black-box model, a regression-based explanation model provides explanations based on the surrogate dataset generated using the pre-defined data generating process. Note that the data generating process does not mandate samples from all classes, since a regression-based explainer does not require a balanced dataset. Essentially, this implies that class-based feature attribution scores may be provided even when there is no information about a class in the surrogate dataset. We question the basic paradigm in post-hoc perturbation-based methods, which advocates the use of a local linear regression model, and instead focus on a local logistic regression model. A classifier-based explanation model necessitates that the surrogate samples form a balanced dataset, i.e., that there are approximately equal numbers of samples from the different classes. Essentially, this implies that the class-based attribution score is obtained after ensuring that surrogate data samples with all class information are present in the surrogate dataset. This leads to contrastive explanations and improved stability of the explainer method.
Contributions: In this work, we propose a contrastive label-aware sample-efficient post-hoc explainable AI (XAI) technique called CLIMAX. Briefly, our contributions are as follows:
* We propose two variants of the logistic regression (LR) based explainer, together with the generation of a label-wise balanced surrogate dataset. Similar to LIME, the per-feature weights obtained from the LR model provide the contrastive feature attribution scores. Essentially, this allows us to exploit the classification boundary of the black-box model and explain each instance from a dual point of view, i.e.,
* Why point `a' must lie in class c_i and
* Why point `a' must not lie in classes c_-i
* Influence functions are a classic technique from robust statistics which trace a model's prediction through the learning algorithm and back to its training data, thereby identifying the training points most responsible for a given prediction. We use this module within our surrogate data generator as it helps reduce the sample complexity. We observe that the performance of the model after subsampling stays on par with the original model, and sometimes even surpasses it.
§ RELATED WORKS AND NOVELTY
In this work, we are interested in label-aware, model-agnostic, post-hoc, locally interpretable models. We discuss the related works by highlighting the critical aspects in comparison to the proposed method, as follows.
Instability: The instability or inconsistency of LIME's explanation scores over several iterations <cit.> is well-known. This inconsistency occurs due to the randomly perturbed surrogate datasets. A deterministic hierarchical clustering approach for consistent explanations was proposed in DLIME <cit.>, but its biggest drawback is that it requires training data for clustering. To avoid the additional task of `explaining the explainer,' techniques like ALIME <cit.> are not preferred. A parametric Bayesian method was proposed by <cit.>, where a weighted sum of the prior knowledge and the estimates based on new samples obtained from LIME is used to get explanations in a Bayesian linear regression framework. Both LIME and BayLIME employ hyperparameters (kernel-width) that need to be tuned. Recently, <cit.> proposed a technique known as focused sampling, which utilizes uncertainty estimates of explanation scores. In <cit.>, the authors propose a Gaussian process based active learning module to abate issues of instability. In this work, we use a sampling strategy that ensures that the surrogate data is balanced with respect to the labels. This ensures that we obtain good stability and low inconsistency in explanation scores.
Sample Complexity: Sample efficiency in post-hoc models is a crucial factor in efficiently obtaining reliable explanations, and there is consensus in the community that explainable models must use as few samples for an explanation as possible <cit.>. Approaches such as LIME, KernelSHAP, and BayLIME do not provide any guidance on choosing the number of perturbations, although this issue has been acknowledged <cit.>. Influence functions <cit.> are known to reduce the sampling complexity and the reduced sample set can be used for providing robust explanations. We exploit influence functions to achieve fidelity and sample complexity goals simultaneously via our surrogate dataset.
Classifier-based Explainers: Techniques like LIME <cit.> and KernelSHAP <cit.> fit a linear regression model on the classification probabilities. This leads to a separate set of explanation scores for each class, where the scores try to explain why a class is predicted. Intuitively, regression black-box models are well explained by linear regression explanation models, and black-box classifiers are better explained by linear classifier explainers. In particular, classifier-based explainers are expected to provide a robust set of explanations, as they can exploit the classification boundary explicitly to provide information about why a point lies in a class c_i, and why not in the other classes c_-i. This problem has been acknowledged in <cit.>, where the authors propose explanation scores based on confident item sets. In <cit.>, the authors approximate the local decision boundary, but use a variational autoencoder for surrogate data generation, leading to opaque data generation. We propose a classifier-based explainer which makes use of the probabilities of all classes, and is hence more contrastive.
Contrastive Explainers:
Contrastive explanations clarify why an event occurred in contrast to another. They are inherently intuitive for humans to both produce and comprehend. A few such techniques already exist in the literature, such as the Contrastive Explanations Method, which makes use of pertinent positives and pertinent negatives to identify the features that should be minimally present and those that should be critically absent, respectively <cit.>. In <cit.>, the authors propose a framework that converts an existing back-propagation explanation method into one that builds class-contrastive explanations, especially in the context of DNNs. However, these methods are not model-agnostic, and often assume access to training data. In <cit.>, the authors repurpose Shapley values to generate counterfactual and contrastive global explanations. In <cit.>, the authors propose the Model Agnostic Contrastive Explanations Method (MACEM) to generate contrastive explanations for any classification model where one is able to only query the class probabilities for a desired input, restricted to structured tabular data.
Novelty: In comparison, CLIMAX is novel in the following ways:
* CLIMAX provides feature importances by explaining as to why the index sample belongs to a specific class and in the process, it also provides strong justification about why other classes were not predicted. This effect is brought about in CLIMAX using local classifiers for explanations, without explicitly solving for pertinent positives and negatives.
* CLIMAX is a perturbation-based technique, which implies that it does not require any access to training data.
* CLIMAX explains the decision boundary of the black-box classifier, which is the most relevant characteristic of classifiers that are optimized for accuracy.
§ MATHEMATICAL PRELIMINARIES
In this section, we describe the mathematical preliminaries of the popular local explainer LIME for classifier models.
Local explainer models are interpretable models that are used to explain individual predictions of black-box machine learning models. Among several methods, local interpretable model-agnostic explanations (LIME) is a concrete implementation of a local explainer model. These models are trained to approximate the predictions of the underlying black-box model locally, in the neighborhood of the sample of interest; hence, these models may or may not be valid explainers globally <cit.>.
Notation: Let f: ℝ^d → [0, 1] denote a black-box binary classifier that takes a data point x ∈ ℝ^d (d features) and returns the probability that x belongs to a certain class. Our goal is to explain individual predictions of f locally. Let 𝒵 be a set of n' randomly sampled instances (perturbations) z around x. The proximity between x and any z ∈ 𝒵 is given by π_x(z) ∈ ℝ. We denote the vector of these distances over the n' perturbations in 𝒵 as Π_x(𝒵) ∈ ℝ^n'. Let ϕ ∈ ℝ^d denote the explanation in terms of feature importances for the prediction f(x).
Let y_1, y_0 ∈ ℝ^n' be the black-box predictions for the n' surrogate samples corresponding to class 1 and class 0, respectively, such that for the i-th instance z_i ∈ 𝒵, y_1(i) = f(z_i) and y_0(i) = 1 - f(z_i); since they are probabilities, y_1(i), y_0(i) ∈ [0, 1]. LIME explains the predictions of the classifier f by learning a linear model locally around each prediction. Hence, in the case of LIME, the coefficients ϕ of the linear model are treated as the feature contributions to the black-box prediction <cit.>. Accordingly, the objective function of LIME constructs an explanation that approximates the behavior of the black box accurately in the vicinity (neighborhood) of x by solving:
argmin_ϕ ∑_z ∈ 𝒵 [f(z) - ϕ^T z]^2 π_x(z),
which has a closed-form solution for class c ∈ {0,1} given by:
ϕ̂_c = (Z diag(Π_x(𝒵)) Z^T + λ I)^-1 Z diag(Π_x(𝒵)) y_c,
where Z ∈ ℝ^d × n' stacks the perturbations z as columns and λ is a ridge regularizer.
LIME assigns different importance scores to different classes since, by design, it is not possible to incorporate the information about the probabilities of both classes into a single linear regression framework. As mentioned earlier, this is sufficient as long as the question is `why c', as this question does not seek explanations about the other classes. Furthermore, a challenge in LIME arises in selecting a valid neighborhood or locality for surrogate sampling. LIME uses random sampling, where the samples are chosen heuristically: π_x(z) is computed from the cosine or l_2 distance.
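For reference, the closed form above amounts to a single weighted ridge solve; the following NumPy sketch is our own illustration, not LIME's implementation:

import numpy as np

def lime_weights(Z, y, pi, lam=1e-3):
    """Weighted ridge solution of the LIME objective.

    Z: (d, n') matrix whose columns are the perturbations z;
    y: (n',) black-box probabilities for the class being explained;
    pi: (n',) proximity weights pi_x(z); lam: ridge regularizer.
    """
    W = np.diag(pi)
    A = Z @ W @ Z.T + lam * np.eye(Z.shape[0])
    return np.linalg.solve(A, Z @ W @ y)            # phi_hat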
§ PROPOSED TECHNIQUES AND ALGORITHMS
We propose a classifier-based explainer, which we refer to as Contrastive Label-aware Influence-based Model-Agnostic XAI (CLIMAX), to understand and exploit the classification boundary as dictated by the black-box model, so as to explain each instance from the dual points of view stated before. Essentially, our method's reasoning is based on why a given point must lie in class c_i and not in classes c_-i. This is possible because, unlike LIME, at the time of assigning scores CLIMAX has access to all the class probabilities, and the local classifier fits its boundary accordingly.
CLIMAX explains the predictions of the binary classifier f(·) by learning a logistic regression model locally around each prediction, where the probabilities of class 1 and class 0 according to the explainer are given by σ(ϕ^T z) and 1 - σ(ϕ^T z), respectively, and σ(·) is the sigmoid function. We now define two different variants of the CLIMAX method.
§.§ L-CLIMAX
In this section, we propose a local classifier explainer that results in logistic outputs, and we formally refer to this as Logistic CLIMAX, or L-CLIMAX. In order to derive the loss function, we state the following lemma.
Given a dataset 𝒟 whose i-th instance is {z_i, y_i} ∈ 𝒟, where z_i ∈ ℝ^d are the covariates and y_i ∈ ℝ^|𝒞| represents the class probabilities, a linear model on the logistic outputs can be obtained as
argmin_ϕ (ℓ - Z^T ϕ)^T diag(Π_x(𝒵)) (ℓ - Z^T ϕ),
where the i-th entry of ℓ ∈ ℝ^n' is given by ℓ(i) = log(y_i/(1-y_i)), obtained from the black-box model, the i-th column of Z ∈ ℝ^d × n' is the surrogate sample z_i, and diag(Π_x(𝒵)) is a diagonal matrix whose (i,i)-th entry is given by π_x(z_i).
The output of the logistic explainer model is given as
y_i = σ(ϕ^T z_i) = 1/(1 + e^-ϕ^T z_i).
The above expression can be rewritten in terms of the log-odds representation of the logistic output as
ϕ^T z_i = log(y_i/(1-y_i)) ≜ ℓ(z_i).
The above formulation allows us to model the black-box prediction of each perturbation z_i as a linear combination of the corresponding feature values (ϕ^T z_i) plus an error term, i.e.,
ℓ(z_i) = ϕ^T z_i + ϵ_i,
where we model ϵ_i ∼ 𝒩(0, σ^2). Here, ℓ(z_i) is obtained from the black-box classifier. Incorporating the objective function of LIME in the context of (<ref>) leads to
argmin_ϕ ∑_z_i ∈ 𝒵 [ℓ(z_i) - ϕ^T z_i]^2 π_x(z_i).
Rewriting (<ref>) in terms of the vector ℓ, the matrix Z, and diag(Π_x(𝒵)), we obtain
argmin_ϕ (ℓ - Z^T ϕ)^T diag(Π_x(𝒵)) (ℓ - Z^T ϕ).
Solving (<ref>) by including a regularizer of the form λ‖ϕ‖_2^2, we obtain the closed-form solution for ϕ as
ϕ̂ = (Z diag(Π_x(𝒵)) Z^T + λ I)^-1 Z diag(Π_x(𝒵)) ℓ.
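L-CLIMAX therefore reduces to the same ridge solve applied to the log-odds. A minimal sketch, reusing lime_weights from the previous snippet (the clipping constant is our assumption, to keep the log-odds finite):

import numpy as np

def l_climax_weights(Z, y, pi, lam=1e-3, eps=1e-6):
    """Fit the ridge system above on ell(i) = log(y_i / (1 - y_i))."""
    y = np.clip(y, eps, 1.0 - eps)      # avoid infinite log-odds at 0 or 1
    ell = np.log(y / (1.0 - y))
    return lime_weights(Z, ell, pi, lam)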
§.§ CE-CLIMAX
The second variant of CLIMAX constructs an explanation that approximates the behavior of the black box accurately in the vicinity (neighborhood) of the index sample x by directly optimizing the log-loss, i.e., we obtain feature importance values by solving:
argmin_ϕ -∑_z_i ∈ 𝒵 [f(z_i) log y_i + (1-f(z_i)) log(1-y_i)],
where y_i = σ(ϕ^T z_i). We call this variant Cross-Entropy CLIMAX, or CE-CLIMAX.
Note that, unlike LIME, we do not explicitly weigh each surrogate sample using π_x(z) in this second variant.
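One practical way to optimize this objective with off-the-shelf tooling is the standard soft-label trick: duplicate every surrogate sample with weights f(z_i) and 1 - f(z_i) and fit a weighted logistic regression. The sketch below is our illustration, not the authors' code:

import numpy as np
from sklearn.linear_model import LogisticRegression

def ce_climax_weights(Z, f, C=1.0):
    """Minimize the CE-CLIMAX cross-entropy via weight duplication.

    Z: (d, n') surrogate samples as columns; f: (n',) black-box class-1
    probabilities. Returns the contrastive scores phi.
    """
    n = Z.shape[1]
    X = np.hstack([Z, Z]).T                 # every sample appears with both labels
    y = np.r_[np.ones(n), np.zeros(n)]
    sw = np.r_[f, 1.0 - f]                  # soft targets become sample weights
    clf = LogisticRegression(C=C).fit(X, y, sample_weight=sw)
    return clf.coef_.ravel()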
Some of the salient aspects of the above formulations, as compared to LIME, are as follows:
* LIME-like methods that ask the question `why c_i?' provide explanations label-wise, i.e., they iterate over all labels, explain why the index sample should be a part of that class, and provide the scores. L-CLIMAX and CE-CLIMAX instead operate on two sets of probabilities, one corresponding to the current class label and the other corresponding to the probabilities of all remaining classes. This can be explicitly seen in the objectives, where we use both y_i and 1-y_i together, as in (<ref>) and (<ref>).
* Interpretation of ϕ: In the case of LIME, ϕ determines the feature importances according to the regressor values. In CLIMAX, ϕ has a slightly different interpretation. Here, ϕ is larger for those features that help in increasing the `contrast' between explanations. Nevertheless, in both LIME and CLIMAX, it is safe to say that ϕ highlights important features.
* Both of the above formulations have the simplicity and elegance of LIME and related methods. They remain training-data-agnostic and model-agnostic. Additionally, L-CLIMAX can be implemented with a slight change to the existing LIME framework.
§.§ Imbalance-aware Surrogate Sampling
An important aspect of realizing L-CLIMAX and CE-CLIMAX is the surrogate sampling required to form the set 𝒵 of the previous subsection. In order to ensure fidelity of the explainers, our sampling technique needs to be imbalance-aware, since we use classifier-based local explainers.
We use the bootstrap sampling technique, where we repeatedly sample the neighborhood of x with replacement. The main goal is to ensure that the surrogate dataset is balanced, i.e., it consists of at least a few samples belonging to all classes, albeit in different proportions. To achieve this, we perform Gaussian sampling similar to <cit.> and increase the standard deviation appropriately (to increase the neighborhood size) to obtain surrogate instances from all classes. In order to further reduce the imbalance in 𝒵, we do the following:
* Random oversampling: We oversample within the minority class in order to ensure that the classes are perfectly balanced.
* Gaussian Mixture models: A Gaussian mixture model (GMM) is a probabilistic model that assumes all the instances are generated from a mixture of a finite number of Gaussian distributions with unknown parameters <cit.>. We train a GMM consisting of c Gaussians using the bootstrapped samples, and later use it to appropriately oversample the minority classes to obtain a balanced surrogate dataset.
The sampling strategies detailed above may not necessarily improve the quality of the samples. However, diminishing the imbalance in 𝒵 helps us maintain local fidelity and a consistently contrastive nature in the explanation scores ϕ. In the sequel, we also demonstrate the improved stability of CLIMAX as compared to other perturbation-based methods. Subsequently, we perform forward feature selection, as proposed in LIME, to obtain the top k features, and then return the scores for the explanation. The entire procedure is summarized in Algorithm <ref>.
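A minimal sketch of the GMM-based oversampling step using scikit-learn (the number of mixture components is an assumption; the paper fits c Gaussians to the bootstrapped samples):

import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_oversample(Z_min, n_needed, n_components=2, seed=0):
    """Fit a GMM to minority-class surrogates Z_min ((n_min, d)) and draw
    n_needed extra points so the surrogate dataset becomes balanced."""
    k = min(n_components, len(Z_min))       # components cannot exceed samples
    gmm = GaussianMixture(n_components=k, random_state=seed).fit(Z_min)
    extra, _ = gmm.sample(n_needed)
    return np.vstack([Z_min, extra])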
§.§ Sample Complexity
Sample efficiency in post-hoc models is a crucial factor in obtaining reliable explanations, and there is consensus in the research community that explainable models must use as few samples for an explanation as possible <cit.>. In both variants of Climax, we oversample the surrogate samples in order to ensure a balanced surrogate dataset, and hence, we have some redundant information within the data. Approaches such as LIME, KernelSHAP, and BayLIME do not provide any guidance on choosing the number of perturbations, although this issue has been acknowledged in <cit.>. In <cit.>, sample complexity is dictated by an acquisition function and sampling is achieved via Gaussian processes. Often, such methods turn out to be too complex.
We consider subsampling the surrogate samples using influence functions. Rooted in statistics, influence functions estimate how the model parameters change when a data point is upweighted by a small amount ϵ. Using influence functions, Koh and Liang <cit.> proposed a method for estimating the impact of removing a data point
from the training set (reducing its weight to 0) on the model parameters. We use this method to perform subsampling within our surrogate dataset to improve its quality. Influence functions provide a tool to quantify each data point's quality, thereby keeping good examples and dropping bad ones to improve the model's generalization ability. Previous works focus on weighted subsampling, that is, trying to maintain the model performance when dropping several data points. The steps of influence subsampling <cit.> are as follows:
* Train the explainer model on the full set of surrogate samples:
θ̂ = argmin_θ∈Θ (1/n) ∑_i=1^n L(z_i, θ),
where θ̂ is the set of optimal parameters.
* Compute the influence function for each surrogate sample:
ρ = (ρ(z_1, θ̂), ρ(z_2, θ̂), …, ρ(z_n, θ̂)).
Here, ρ denotes the value of the influence function <cit.>.
* Compute the sampling probability of each surrogate sample:
ψ = (ψ(z_1, θ̂), ψ(z_2, θ̂), …, ψ(z_n, θ̂)),
where ψ denotes the sampling probability of each surrogate sample as computed in <cit.>. Using these quantities, we obtain how influential a point is, and we can trim our surrogate dataset.
* Finally, we perform subsampling based on the influence scores and train a subset model using the reduced set of surrogate samples.
θ̃ = argmin_θ∈Θ (1/|{i : o_i=1}|) ∑_i : o_i=1 L(z_i, θ),
where θ̃ gives the optimal parameters for the subsampled set, and o_i is an indicator variable which is 1 if the i-th point is included in the subsampled set and 0 otherwise.
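For a logistic explainer, steps 2-4 can be sketched as follows; we use the self-influence g_i^T H^-1 g_i as the score, which is one common variant, and the exact weighting of the cited subsampling method may differ:

import numpy as np

def influence_scores(X, y, theta, lam=1e-3):
    """Self-influence of each surrogate sample under a fitted logistic model.

    X: (n, d) samples; y: (n,) targets; theta: (d,) fitted weights.
    """
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    G = (p - y)[:, None] * X                           # per-sample gradients g_i
    H = X.T @ (X * (p * (1.0 - p))[:, None]) + lam * np.eye(X.shape[1])
    HinvG = np.linalg.solve(H, G.T)                    # H^{-1} g_i, column-wise
    return np.einsum("ij,ji->i", G, HinvG)             # g_i^T H^{-1} g_i >= 0

def influence_subsample(X, y, phi, keep=0.7, seed=0):
    """Turn influence into sampling probabilities and subsample."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=int(keep * len(X)), replace=False,
                     p=phi / phi.sum())
    return X[idx], y[idx]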
§ RESULTS AND DISCUSSIONS
In this section, we demonstrate the efficacy of the proposed CLIMAX framework on publicly-available datasets. In particular, we are interested in establishing the contrastive capability of CLIMAX and investigating the attributes such as stability (consistency in repeated explanations) and sample efficiency. We employ tabular(structured data), textual, and image datasets, and consider different black-box models for an explanation. [Source code available at <https://github.com/niftynans/CLIMAX>]
§.§ Datasets and Pre-processing
We chose four distinct datasets from the UCI Machine Learning
repository <cit.> as well as Scikit-Learn <cit.> for the tabular data based experiments owing to their usage in the relevant literature
for classification based prediction tasks. The description of the tabular
datasets is as follows:
* Breast Cancer: This dataset consists of 569 instances, with 30 features computed from an image of a breast mass, describing characteristics of the cell nuclei <cit.>. Hence, the classification task is to predict if the cancer is malignant or not.
* Parkinson's: The Parkinson's classification dataset consists of 195 instances of patients suffering and free from Parkinson's disease <cit.>. With 22 unique features per recording, the task is to classify whether a given patient has Parkinson's or not.
* Ionosphere: This dataset consists of 34 features, and 351 instances of radar data that was collected in Goose Bay, Labrador <cit.>. The targets were free electrons in the ionosphere. The classification task was to label the instances as `good' or `bad'.
* Diabetes: This dataset consists of 8 attributes and 768 data points that describes the medical records for Diabetes patients. It contains information about the pregnancy status, insulin levels, blood pressure and other medical attributes about the patients <cit.>.
For text, we use the Quora Insincere Questions dataset <cit.>, where the classification task is to identify if a question is sincere or not. We also use the 20 News Groups dataset <cit.> where the classification task is amongst two groups: Atheism and Christianity. We determine whether a given paragraph is written by an atheist or a Christian. Due to lack of space, we present the results for the 20News Groups dataset in the Supplementary.
For images, we use the MNIST dataset <cit.> in order to contrast the relevant regions that contribute to the prediction of each digit.
§.§ Baselines
CLIMAX focuses only on classification-based tasks. It is a perturbation-based technique, i.e., we do not assume any knowledge of the training samples or an autoencoder that may be trained on the original data; instead, we obtain surrogate samples in the vicinity of the index sample. Hence, we baseline CLIMAX against other perturbation-based methods that make similar assumptions in their workflow. We use LIME and BayLIME <cit.> as our baselines primarily because they are perturbation-based and require only the index sample and the variance of the features in the training data. Among the array of XAI methods, S-LIME <cit.>, which uses the central limit theorem to obtain the optimal number of surrogate samples, performs well and is hence also a good baseline. For simulating the black-box prediction model, we used a random forest classifier for all the tabular classification datasets. A summary of the dataset and prediction model statistics can be found in Table <ref>. We used the open-source Scikit-Learn <cit.> implementation of the random forest classifier to simulate the black-box prediction models.
§.§ Numerical Results
In this section, we numerically demonstrate the stability and the contrastive nature of variants of the CLIMAX algorithm. Our method works on data of different modalities such as tabular, text and image, and hence we showcase its performance for each modality.
§.§.§ Stability in repeated explanations
For evaluating the inconsistency in explanations over multiple runs, we execute CLIMAX and the baselines using 500, 1000, 1500, 2000, and 2500 surrogate samples and collect 20 consecutive explanations for 10 randomly selected index samples from each of the four datasets described in Table <ref>. The Jaccard similarity J <cit.>, which measures the consistency of explanations across the i-th and j-th runs, is computed as follows:
J(X_i,X_j) = |X_i ∩ X_j|/|X_i ∪ X_j|,
where X_i and X_j are the sets consisting of the top-5 features for iterations i and j. Intuitively, J(X_i, X_j) = 1 if X_i and X_j contain the same features, and J(X_i, X_j) = 0 if they have no common features. Thus, a consistent explainer module will have a relatively higher value of this metric than an inconsistent one. We averaged this metric over all possible pairs of iterations and the 10 index samples. The results can be seen in Figure <ref>, averaged over the sample sizes 500, 1000, 1500, 2000, and 2500. Across all datasets, incorporating the cross-entropy loss along with sampling from a Gaussian mixture model (CE-GMM-CLIMAX) improved the model stability and fidelity to a large extent. Hence, we take only that method and its influence-subsampling counterpart forward and compare them with the other baseline methods in <ref>. It can be seen that, for various sample sizes, CLIMAX outperforms LIME, BayLIME, and S-LIME across all datasets. For S-LIME, we restrict the n_max parameter to be 1.5 times the size of the original number of samples.
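The stability metric itself is straightforward to compute; a small sketch averaging the Jaccard similarity over all run pairs:

import itertools
import numpy as np

def mean_jaccard(runs, k=5):
    """Average Jaccard similarity of top-k feature sets across repeated runs.

    runs: list of per-run feature-importance vectors of equal length.
    """
    tops = [set(np.argsort(-np.abs(r))[:k]) for r in runs]
    return np.mean([len(a & b) / len(a | b)
                    for a, b in itertools.combinations(tops, 2)])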
§.§.§ CLIMAX Surrogate Data
To evaluate the quality of the surrogate dataset generated by CLIMAX, whether through GMM sampling or random oversampling, we collected the surrogate data generated during the stability experiment for twenty samples from all our datasets and calculated the macro-precision and recall scores. CLIMAX improves these scores by oversampling and then subsampling by influence. We depict this in Table 2, which shows how our explainer works with the surrogate data.
It can be seen that the bootstrapped samples obtained using the procedure of <cit.> are highly imbalanced for all datasets (first row for each dataset). Further, we see that the imbalance is largely removed using ROS and GMM under CLIMAX. Although ROS leads to an improvement in precision and recall scores, the information content in the data is the same as in the bootstrapped samples. This necessitates a technique like GMM that also improves the quality of the data. In some cases, the IF-subsampled points lead to lower precision and recall scores. However, we believe that IF maintains the quality of the data, and hence lower precision-recall scores may not translate to poor explanation quality.
§.§.§ CLIMAX on Text Datasets
To showcase CLIMAX's ability to provide robust textual explanations, we employed the information-retrieval-based tf-idf (term frequency-inverse document frequency) framework. We first extract features from the data using the tf-idf method, train the black-box model, and choose a test sample as the index sample. We compare our method with LIME in Figure <ref>.
We see that explanations of CLIMAX agree with LIME on many words (as in the highlighted text). However, the contrast in scores is large mainly because these explanations provide reasoning as to why one class is chosen instead of the other. The explanation of CLIMAX as compared to LIME on a large paragraph is provided using the 20 News Group dataset. Due to lack of space, we have moved this result to the supplementary.
§.§ CLIMAX on Image Dataset
In the case of the image data, we first preprocess the data by using a popular segmentation algorithm called quickshift within the Scikit-image module <cit.>. We depict the explanations provided by CLIMAX in Fig. <ref>. Due to space constraints, we provide a comparison between LIME, CLIMAX and CEM in the supplementary.
From Fig. <ref>, we see that an interesting benefit of the contrastive explanations in CLIMAX is the possibility of comparing explanations across classes. We show that regions in the numerals provide explanations that are complementary to each other. For example, similar to several works that investigate explanations for 3 versus 5 <cit.>, we see that the explainer is sure about class 3 due to the upper half, but neutral about the bottom half. Investigating digit 5, we see that the explainer is neutral about the upper half, but sure about the bottom half. This shows that CLIMAX is not only contrastive within the same image, but also consistent across images of different classes. We depict several such examples in the figure.
§ CONCLUSIONS AND FUTURE WORK
CLIMAX (Contrastive Label-aware Influence-based Model-Agnostic XAI) is a perturbation-based explainer which exploits the classification boundary to provide contrastive results. CLIMAX perturbs the index sample to obtain surrogate samples, oversampling the instances of the minority class using random oversampling or GMMs in order to obtain a balanced dataset. It also employs influence subsampling in order to reduce the sample complexity. Explanation scores are then provided using a logistic regression module, and we propose two variants in this direction. As compared to other perturbation-based techniques, CLIMAX explains why a point is in class c_i and also provides information about why it is not in the remaining classes c_-i. CLIMAX gives the explainer access to all class probabilities, which helps in providing the contrastive scores. We observe that CLIMAX produces more stable, faithful, and contrastive results than LIME across different modalities of data. CLIMAX provides an important insight, as the inherent task being explained is classification-based. Hence, CLIMAX is a well-rounded extension of LIME for black-box classifiers. In the future, we would like to provide uncertainty estimates for our explanations, which would help in better checking the fidelity of the explanations.
§ ADDITIONAL RESULTS AND DISCUSSIONS
In this section, we demonstrate the efficacy of the proposed CLIMAX framework on publicly-available datasets. In particular, we are interested in establishing the contrastive capability and investigating the attributes such as stability (consistency) and sample efficiency in repeated explanations. We employ tabular, textual, and image datasets and consider different black-box models for an explanation.
§.§.§ TSNE Plots: CLIMAX Surrogate Data
To evaluate the quality of the surrogate dataset generated by CLIMAX using GMM sampling, we plot t-SNE embeddings for the MNIST image dataset. In Fig. <ref>, the index sample belongs to class 2, and we see that the GMM sampling occurs from 6 main clusters. This implies that the digit 2 is similar to six other digits, and that the sampling of the surrogate data is uniform. Similarly, in Fig. <ref>, the index sample belongs to class 3, and we see that the sampling occurs from seven classes. In both cases, the surrogate data is sampled uniformly from each of the similar clusters, while the remaining classes, which are dissimilar to the digit in question, are largely excluded. Hence, we conclude that the surrogate sampling via GMM is not only effective, but also explainable. In comparison to methods that incorporate autoencoders and other black-box data generation mechanisms <cit.>, we find our technique to be more transparent and trustworthy.
§.§.§ CLIMAX on Text Datasets
To showcase CLIMAX's ability to provide robust textual explanations, we employed the information-retrieval-based tf-idf (term frequency-inverse document frequency) framework. We first extract features from the data using the tf-idf method. We compare our method with LIME in Figure <ref>.
We see that even on longer paragraphs of text, CLIMAX maintains its contrastive capability, as compared to LIME.
§.§ CLIMAX for Images vs. CEM <cit.>
In the case of CEM, the classification boundary is exploited well in terms of the regions that are pertinently positive and pertinently negative. However, the ambiguity in the sub-parts of an image due to overlap between the PP and PN regions makes the classification of a digit uncertain. For instance, common regions in the digit 0 have been marked as both pertinent positive and pertinent negative, and it is not clear how the end-user is supposed to interpret these areas. In particular, joint analysis of pertinent positive and pertinent negative regions is not possible; comparison across different digits is also not possible. CLIMAX does not face such challenges. On one hand, within the same digit, it clearly marks the regions about which it is certain (in pink for all digits) and uncertain (in grey for all digits). Furthermore, we can also analyse across digits: if one area is marked positive for a certain digit, then that same area is marked neutral for many other digits. This nature is visible across all CLIMAX explanations.
§.§ CLIMAX for Images vs. LIME <cit.>
We compare CLIMAX with LIME, the most popular post-hoc explainable AI method, which set the foundation for such methods, in Fig. <ref>. We see that LIME is often unable to distinguish between regions encapsulated within a digit and the digit boundary itself. This is mainly because LIME does not take all class probabilities as input, nor does it employ any decision-boundary-aware mechanism. The contrastive nature of the explanations is evident here as well, where CLIMAX tends to indicate neutral regions which it is not sure about, whereas LIME does not capture such contrast.
|
http://arxiv.org/abs/2307.01788v2
|
20230704155404
|
A Radon-Nikodým Theorem for Valuations
|
[
"Jean Goubault-Larrecq"
] |
math.FA
|
[
"math.FA",
"math.PR"
] |
We enquire under which conditions, given two σ-finite, ω-continuous valuations ν and μ, ν has a density with respect to μ. The answer is that ν must be absolutely continuous with respect to μ and satisfy a certain Hahn decomposition property, which always holds for measures.
For the purpose of Open Access, a CC-BY public copyright licence has
been applied by the authors to the present document and will be
applied to all subsequent versions up to the Author Accepted
Manuscript arising from this submission.
§ INTRODUCTION
In its simplest form, the Radon-Nikodým theorem
<cit.> states that a σ-finite
measure ν has a measurable density with respect to a
σ-finite measure μ if and only if ν is absolutely
continuous with respect to μ. The purpose of this paper is to
investigate a similar question in the larger setting of
ω-continuous valuations, a setting which encompasses both
measures and the continuous valuations used in the semantics of
probabilistic programming languages <cit.>.
Probably the distinguishing feature of valuations compared to measures is that they give mass to sets forming a collection that is not necessarily closed under complements: a lattice of subsets for valuations, a topology for continuous valuations, and what we call an ω-topology for ω-continuous valuations. Sets equipped with such collections of sets are Pervin spaces, topological spaces, and what we call ω-topological spaces, respectively, and each of these classes forms a category.
As we will see, the question of the existence of density maps is more
about the category in which the density maps should reside, not so
much about the distinction between valuations and mesaures. Indeed,
on sufficiently nice topological spaces, continuous valuations and
measures are essentially the same thing, and therefore
measurable density maps will exist under the familiar
assumptions of the classical Radon-Nikodým theorem. This will also
entail that they do not exist in general as morphisms in the other categories mentioned above, as we will see in Section <ref>.
Hence some additional assumptions are needed to ensure that density
maps exist in the relevant categories, and it is the purpose of this
paper to identify them.
Outline. We give brief preliminaries in
Section <ref>, and we develop the theory of
valuations, including ω-continuous valuations, measures and
continuous valuations, in Section <ref>. We
develop necessary conditions for density maps to exist in
Section <ref>, and we show that they are sufficient
in Section <ref>. Our final result includes the
classical Radon-Nikodým theorem as a special case.
§ PRELIMINARIES
We assume some basic knowledge about topology <cit.> and
about measure theory <cit.>. We will need the
following from domain theory <cit.>.
A directed family D in a poset P is a non-empty family such
that any two elements of D have a common upper bound in D. A
dcpo (short for directed-complete partial order) is a poset in
which every directed family has a supremum. We write sup D, or sup_i ∈ I x_i if D = (x_i)_i ∈ I, for directed suprema, and use the corresponding notation for directed unions. An ωcpo is defined similarly, except that we only require the existence of suprema of monotone sequences (x_n)_n ∈ ℕ (namely, x_0 ≤ x_1 ≤ ⋯ ≤ x_n ≤ ⋯) instead of directed families.
A function f : X → Y between dcpos is Scott-continuous if and only if it is monotonic (order-preserving) and preserves suprema of directed sets, namely sup_i ∈ I f(x_i) = f(sup_i ∈ I x_i) for every directed family (x_i)_i ∈ I. It is ω-continuous if and only if it is monotonic and preserves suprema of monotone sequences.
The Scott topology on a dcpo has as open sets those subsets U that are upwards-closed (if x ∈ U and x ≤ y then y ∈ U) and such that every directed family D with sup D ∈ U intersects U. The Scott-continuous maps are exactly the continuous maps with respect to the Scott topologies.
§ VALUATIONS AND MEASURES
As our general setting, we will consider pairs (X, ℒ) where X is a set and ℒ is a lattice of subsets, namely a family of subsets of X that is closed under finite intersections and finite unions. In particular, the empty set and X belong to ℒ.
We retrieve topological spaces by requiring that ℒ be closed under arbitrary unions, or just under directed unions. Indeed, the union of any family (U_i)_i ∈ I of subsets of X is equal to the directed supremum sup_J finite ⊆ I ⋃_j ∈ J U_j.
We will call an ω-topology on X any lattice ℒ of subsets that is at the same time an ωcpo under inclusion. Then (X, ℒ) is an ω-topological space. It is equivalent to require that ℒ be closed under countable unions, since the union of any countable family (U_n)_n ∈ ℕ of elements of ℒ is the union sup_n ∈ ℕ ⋃_i=0^n U_i of a chain of elements of ℒ.
A lattice of subsets that is closed under complements is an algebra of subsets, and an ω-topology that is closed under complements is the same thing as a σ-algebra. In the latter case, (X, ℒ) is called a measurable space.
There are categories whose objects are pairs (X, ℒ) where ℒ is a lattice of subsets, resp. an algebra of subsets, resp. a topology, resp. an ω-topology, resp. a σ-algebra. In each case, the morphisms f : (X, ℒ) → (Y, ℒ') are the maps f : X → Y such that f^-1(V) ∈ ℒ for every V ∈ ℒ'. They are called continuous maps in the topological case, and measurable maps in the measurable case. The first two are the categories of Pervin spaces and Boolean Pervin spaces, respectively <cit.>. All of these categories are full subcategories of the category of Pervin spaces.
Let ℝ̄₊ be the dcpo of extended non-negative real numbers ℝ₊ ∪ {∞}, with the usual ordering ≤ extended by the stipulation that r ≤ ∞ for every r ∈ ℝ₊. We will always equip ℝ̄₊ with the Scott topology of ≤, making it an object of all the categories mentioned above. The open subsets of that Scott topology are the half-open intervals ]t, ∞], t ∈ ℝ₊, plus ℝ̄₊ itself and ∅.
(X, ) to (implicitly equipped with its Scott
topology), in any full subcategory of containing .
In other words, the elements h of (X, ) are the
functions h X → such that h^-1 (]t, ∞]) ∈ for every t ∈.
When (X, ) is a measurable space, (X, ) is the
set of all measurable maps from (X, ) to with its
usual Borel σ-algebra, generated by the
intervals.
This is because one can write any interval as a Boolean combination of
intervals of the form ]t, ∞]. When (X, ) is a
topological space, (X, ) is known as the set of
lower semicontinuous maps from (X, ) to .
If ℒ is an ω-topology (resp., a topology), then 𝐋(X, ℒ) is an ωcpo (resp., a dcpo) under the pointwise ordering defined by h ≤ h' if and only if h(x) ≤ h'(x) for every x ∈ X; additionally, suprema of monotone sequences (resp., directed suprema) are computed pointwise: (sup_i ∈ I h_i)(x) = sup_i ∈ I (h_i(x)). In order to see this, it suffices to show that the map x ↦ sup_i ∈ I (h_i(x)) is in 𝐋(X, ℒ); and the inverse image of ]t, ∞] under that map is ⋃_i ∈ I h_i^-1(]t, ∞]), since ]t, ∞] is Scott-open.
Given any Pervin space (X, ℒ), a valuation ν on (X, ℒ) is a map ν : ℒ → ℝ̄₊ that is:
* strict: ν(∅) = 0;
* monotonic: U ⊆ V implies ν(U) ≤ ν(V);
* modular: for all U, V ∈ ℒ, ν(U) + ν(V) = ν(U ∪ V) + ν(U ∩ V).
A continuous valuation is a valuation that is Scott-continuous,
and an ω-continuous valuation is a valuation that is
ω-continuous.
Continuous valuations have been the cornerstone of the
domain-theoretic semantics of probabilistic languages since Claire
Jones' PhD thesis <cit.>, and had first been
studied by Nait Saheb-Djahromi <cit.>. The
concept of valuation is older, and dates back to Smiley
<cit.>, Horn and Tarski <cit.>, and Pettis
<cit.>, at least; see <cit.>.
An ω-continuous valuation on a measurable space (X, ℒ) is a measure. Measures are usually defined as σ-additive maps ν : ℒ → ℝ̄₊, but the two definitions are equivalent. Let us recall that ν : ℒ → ℝ̄₊ is additive (where ℒ is any lattice of subsets) if and only if ν(∅) = 0 and ν(U ∪ V) = ν(U) + ν(V) for all pairs of disjoint sets U, V ∈ ℒ, and σ-additive (where ℒ is any ω-topology) if and only if ν(⋃_i ∈ I U_i) = ∑_i ∈ I ν(U_i) for every countable family (U_i)_i ∈ I of pairwise disjoint elements U_i of ℒ. The equivalence of ω-continuous valuations and σ-additive maps on σ-algebras follows from the following facts.
* If ℒ is an algebra of subsets, then the additive maps ν : ℒ → ℝ̄₊ are exactly the valuations on (X, ℒ). Indeed, if ν is additive, then strictness is clear, monotonicity follows from the fact that if U ⊆ V, then ν(V) = ν(V ∖ U) + ν(U) ≥ ν(U), and modularity from ν(U) + ν(V) = ν(U ∖ V) + ν(U ∩ V) + ν(V) = ν(U ∩ V) + ν(U ∪ V). Conversely, any valuation ν is additive, since if U and V are disjoint, then ν(U ∪ V) = ν(U ∪ V) + ν(U ∩ V) = ν(U) + ν(V).
* If ℒ is an ω-topology, then the σ-additive maps are exactly the ω-continuous, additive maps. This follows from the fact that every countably infinite union ⋃_n ∈ ℕ U_n can be written as sup_n ∈ ℕ ⋃_i=0^n U_i, plus additivity.
Addition is Scott-continuous on ℝ̄₊, and it follows that valuations on (X, ℒ) form a dcpo under the stochastic ordering, defined by μ ≤ ν if and only if μ(U) ≤ ν(U) for every U ∈ ℒ; directed suprema are computed pointwise: (sup_i ∈ I ν_i)(U) = sup_i ∈ I (ν_i(U)). The same can be said for continuous valuations on a topological space, and for ω-continuous valuations on an ω-topological space, hence also for measures on a measurable space, since suprema commute.
The simplest way to define a notion of integration is by the following
Choquet formula <cit.>:
∫_x ∈ X h(x) dν ≜ ∫_0^∞ ν(h^-1(]t, ∞])) dt,
for every function h ∈ 𝐋(X, ℒ) and for every valuation ν on (X, ℒ). The integral on the right is an ordinary
improper Riemann integral, which is well-defined because the map
t ↦ν (h^-1 (]t, ∞])) is antitonic (order-reversing).
Indeed, it is easy to see that, for any antitonic map f : ℝ₊ → ℝ̄₊, ∫_0^∞ f(t) dt is the supremum of the monotone sequence of lower Darboux sums ∑_k=1^N·2^N (1/2^N) f(k/2^N), N ∈ ℕ. This was already
observed in the proof of Lemma 4.2 of Regina Tix's master's thesis
<cit.>, which also contains the following statement; the
proof boils down to a familiar commutation of suprema.
[Lemma 4.2, 3rd item, of <cit.>]
Riemann integration is Scott-continuous in the integrated antitonic
map. In particular, for any directed family (f_i)_i ∈ I
(countable or not) of antitonic maps from to , in the
pointwise ordering,
∫_0^∞_i ∈ I f_i (t) dt = _i ∈ I∫_0^∞ f_i (t) dt.
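The lower Darboux sums above are also easy to evaluate mechanically. The following Python sketch (ours, purely illustrative: the function darboux and the test map are not part of the development, and floating-point sums only approximate the supremum) computes S_N = (1/2^N) ∑_k=1^N2^N f (k/2^N) for an antitonic f:

# Lower Darboux approximation of the improper Riemann integral of an
# antitonic map f : [0, ∞[ → [0, ∞], as in the Fact above.
def darboux(f, N):
    # S_N = (1/2^N) * sum_{k=1}^{N*2^N} f(k/2^N); monotone in N.
    return sum(f(k / 2.0**N) for k in range(1, N * 2**N + 1)) / 2.0**N

# Illustration with f(t) = max(0, 1 - t), whose integral is 1/2:
f = lambda t: max(0.0, 1.0 - t)
print([round(darboux(f, N), 4) for N in range(1, 7)])  # increases towards 0.5

The printed values increase towards 0.5 from below, as the Fact predicts.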
Equation (<ref>) makes sense for more general set functions
ν than just valuations, but we will not make use of this. We also
write ∫ h dν for ∫_x ∈ X h (x) dν.
We sum up the main properties of the Choquet integral in the following
proposition; h, h' and h_i stand for arbitrary elements of (X, ), μ, ν and ν_i for valuations on (X, ), and a and b for arbitrary elements of [0, ∞]. Addition and multiplication on [0, ∞] are defined in the obvious way, with the caveat that 0·∞ = ∞·0 = 0, so as to ensure that multiplication, not just addition, is Scott-continuous. On spaces of
-valued maps and of valuations, addition and scalar
multiplication are defined pointwise. The characteristic map
χ_U : X → [0, ∞] maps every x ∈ U to 1 and all other points to 0; χ_U is in (X, ) if and only if U ∈ . The Dirac valuation δ_x maps every U ∈ to 1 if x ∈ U, to 0 otherwise; namely, δ_x (U) = χ_U (x). Given a morphism f : (X, ) → (Y, '), the image valuation f [ν] of any valuation ν on (X, ) is defined by f [ν] (V) ≔ ν (f^-1 (V)); this is a valuation, resp. an
ω-continuous valuation, resp. a measure, resp. a continuous
valuation if ν is.
Choquet integration is:
* linear in the valuation: ∫ h d(aμ + bν) = a ∫ h
dμ + b ∫ h dν;
* Scott-continuous in the valuation: ∫ h d (sup_i ∈ I ν_i) = sup_i ∈ I ∫ h dν_i if (ν_i)_i ∈ I is
directed;
* linear in the integrated function if (X, ) is an
ω-topological space and ν is an
ω-continuous valuation: ∫ (a h +b h') dν = a ∫ h
dν + b ∫ h' dν;
* ω-continuous in the integrated function if (X, ) is an ω-topological space and ν is ω-continuous (in particular, ∫ sup_i ∈ ℕ h_i dν = sup_i ∈ ℕ ∫ h_i dν for every monotone sequence (h_i)_i ∈ ℕ), and Scott-continuous if (X, ) is a topological space and ν is a continuous valuation (notably, ∫ sup_i ∈ I h_i dν = sup_i ∈ I ∫ h_i dν if (h_i)_i ∈ I is directed).
Additionally,
* ∫χ_U dν = ν (U) for every U ∈;
* ∫ h dδ_x = h (x) for every x ∈ X.
The argument follows classical lines, and most notably
<cit.>.
Item (i) follows from the fact that Riemann integration is itself linear, and (ii) follows from Fact <ref>; monotonicity is clear. The Scott-continuity half of item (iv) follows from the fact that (sup_i ∈ I h_i)^-1 (]t, ∞]) = ⋃_i ∈ I h_i^-1 (]t, ∞]), from the fact that ν is Scott-continuous, and from Fact <ref>; the ω-continuity half is proved similarly. As far as (v) is concerned, we have ∫ χ_U dν = ∫_0^∞ ν (χ_U^-1 (]t, ∞])) dt = ∫_0^∞ f (t) dt where f maps every t ∈ [0, 1[ to ν (U) and every t ≥ 1 to 0. For (vi), ∫ h dδ_x = ∫_0^∞ δ_x (h^-1 (]t, ∞])) dt = ∫_0^∞ g (t) dt where g maps every t < h (x) to 1, and every t ≥ h (x) to 0.
The only
tricky point is to show item (iii).
First, we have ∫ ah dν = ∫_0^∞ ν ((ah)^-1 (]t, ∞])) dt. If a=0, this is equal to 0 = a . ∫ h dν. If a ≠ 0, ∞, this is equal to ∫_0^∞ ν (h^-1 (]t/a, ∞])) dt = ∫_0^∞ ν (h^-1 (]u, ∞])) . a du = a ∫_0^∞ ν (h^-1 (]u, ∞])) du = a ∫ h dν. Hence ∫ ah dν = a ∫ h dν for every a ∈ [0, ∞[; this also holds when a=∞ by (iv), since ∞ h = sup_n ∈ ℕ n h. Hence it suffices to show that ∫ (h+h') dν = ∫ h dν + ∫ h' dν.
We proceed in steps. We fix h. For every ϵ∈, and
for every U ∈, we claim that:
∫_ϵ^∞ν (h^-1 (]t, ∞]) ∪ (h^-1
(]t-ϵ, ∞]) ∩ U)) dt
= ∫_ϵ^∞ν
(h^-1 (]t, ∞])) dt + ∫_0^ϵν (h^-1 (]t,
∞]) ∩ U) dt.
If
∫_ϵ^∞ν (h^-1 (]t, ∞]) ∩ U) dt <
∞, then we reason as follows. By the modularity law, the fact
that the intersection of h^-1 (]t, ∞]) with
h^-1 (]t-ϵ, ∞]) ∩ U simplifies to
h^-1 (]t, ∞]) ∩ U, and the usual properties of Riemann
integrals,
∫_ϵ^∞ν (h^-1 (]t, ∞]) ∪ (h^-1
(]t-ϵ, ∞]) ∩ U)) dt
= ∫_ϵ^∞ν (h^-1 (]t, ∞])) dt
+ ∫_ϵ^∞ν (h^-1
(]t-ϵ, ∞]) ∩ U) dt
- ∫_ϵ^∞ν (h^-1 (]t, ∞]) ∩ U)
dt
= ∫_ϵ^∞ν (h^-1 (]t, ∞])) dt
+ ∫_0^∞ν (h^-1
(]t, ∞]) ∩ U) dt
- ∫_ϵ^∞ν (h^-1 (]t, ∞]) ∩ U)
dt
= ∫_ϵ^∞ν (h^-1 (]t, ∞])) dt
+ ∫_0^ϵν (h^-1 (]t, ∞]) ∩ U) dt.
If
∫_ϵ^∞ν (h^-1 (]t, ∞]) ∩ U) dt =
∞, then since h^-1 (]t, ∞]) ∩ U is included in
h^-1 (]t, ∞]) ∪ (h^-1 (]t-ϵ, ∞]) ∩ U),
both sides of (<ref>) are
equal to ∞.
Now, ∫ (h+ϵχ_U) dν is equal to ∫_0^∞ ν ((h + ϵχ_U)^-1 (]t, ∞])) dt, and (h + ϵχ_U)^-1 (]t, ∞]) is equal to h^-1 (]t, ∞]) ∪ U if t < ϵ and to h^-1 (]t, ∞]) ∪ (h^-1 (]t-ϵ, ∞]) ∩ U) otherwise. Therefore:
∫ (h+ϵχ_U) dν
= ∫_0^ϵν (h^-1 (]t, ∞]) ∪ U) dt
+ ∫_ϵ^∞ν (h^-1 (]t, ∞]) ∪ (h^-1
(]t-ϵ, ∞]) ∩ U)) dt
= ∫_0^ϵν (h^-1 (]t, ∞]) ∪ U) dt
+ ∫_ϵ^∞ν (h^-1 (]t, ∞])) dt
+ ∫_0^ϵν (h^-1 (]t, ∞]) ∩ U) dt
by (<ref>)
= ∫_0^ϵ ν (h^-1 (]t, ∞])) dt +
∫_ϵ^∞ν (h^-1 (]t, ∞])) dt
+ ∫_0^ϵν (U) dt
by modularity of ν under the ∫_0^ϵ
terms
= ∫ h dν + ϵν (U).
This being done, let a very simple function be any map h'
of the form ϵ∑_i=1^n χ_U_i where
ϵ∈ and U_i ∈. By induction on n, and
using what we have just proved, we obtain that
∫ (h+h') dν = ∫ h dν + ∫ h' dν.
Finally, every h' ∈ (X, ) is the supremum of the
monotone sequence of very simple functions
h'_N ≔ (1/2^N) ∑_i=1^N2^N χ_h'^-1 (]i/2^N, ∞]), N ∈ ℕ. Then ∫ (h+h') dν = sup_N ∈ ℕ ∫ (h+h'_N) dν = sup_N ∈ ℕ (∫ h dν + ∫ h'_N dν) = ∫ h dν + ∫ h' dν by using (iv).
Property (iv) is usually called the monotone convergence theorem (or
the Beppo Levi theorem) when applied to measurable spaces and measures.
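As a small worked instance: if is an ω-topology, ν is ω-continuous, and h ≔ ϵ ∑_i=1^n χ_U_i is a very simple function (as in the proof above), then items (iii) and (v) give ∫ h dν = ϵ ∑_i=1^n ν (U_i).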
We will also use the following baby version of the Riesz
representation theorem. A linear map
F (X, ) → is one such that
F (ah)=aF(h) for all a ∈ (positive homogeneity) and
h ∈ (X, ) and F (h+h')=F(h)+F(h') for all
h, h' ∈ (X, ) (additivity). It is equivalent
to require F (ah+bh') = a F (h) + b F (h') for all a, b ∈
and h, h' ∈ (X, ); if F is ω-continuous, then
this extends to the cases where a or b or both is equal to
∞.
Let (X, ) be an ω-topological space (resp., a
topological space). There is a one-to-one correspondence between
ω-continuous (resp., continuous) valuations ν on
(X, ) and linear ω-continuous (resp.,
Scott-continuous) maps F : (X, ) → [0, ∞]. In one direction, F (h) ≔ ∫ h dν, and in the other direction, ν (U) ≔ F (χ_U).
We deal with the ω-continuous case only, since the continuous
case is similar. The continuous case was also dealt with by Tix
<cit.>, using similar arguments. Given an
ω-continuous valuation ν, the map
F_ν h ↦∫ h dν is ω-continuous and
linear by items (ii) and (iv) of
Proposition <ref>. Conversely, given an
ω-continuous linear map
F : (X, ) → [0, ∞], we define ν_F (U) ≔ F (χ_U). Then ν_F is strict since F
maps the constant 0 map to 0 by positive homogeneity,
ω-continuous since F is, and since the map
U ↦χ_U is itself ω-continuous, and modular
because of the equality
χ_U+χ_V = χ_U ∪ V + χ_U ∩ V and the
additivity of F. We have ν_F_ν = ν, because for every
U ∈,
ν_F_ν (U) = F_ν (χ_U) = ∫χ_U dν = ν (U) by
item (v) of Proposition <ref>. In order to
show that F_ν_F = F, we realize that
F_ν_F (χ_U) = ∫χ_U dν_F = ν_F (U) = F (χ_U)
by item (v) of Proposition <ref>. Then, by the
linearity of the integral (item (iii)), F_ν_F (h) = F (h)
for every very simple function (as introduced in the proof of
Proposition <ref>), and since every element of
(X, ) is a supremum of a monotone sequence of very
simple functions, we conclude by the ω-continuity of F and
of F_ν_F (item (iv)) that F_ν_F = F.
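As an example of the correspondence: for a fixed x ∈ X, the evaluation map F : h ↦ h (x) is linear and Scott-continuous, and the valuation it induces is ν_F (U) = F (χ_U) = χ_U (x) = δ_x (U); conversely, F_δ_x (h) = ∫ h dδ_x = h (x) by item (vi) of Proposition <ref>, so evaluation at x and the Dirac valuation δ_x correspond to each other.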
§ DENSITY MAPS
Let (X, ) be an ω-topological space, let
g ∈ (X, ), and μ be an ω-continuous
valuation on (X, ).
The map that sends every h ∈ (X, ) to
∫ h g dμ is well-defined, linear and ω-continuous.
It is Scott-continuous provided that is a topology and μ
is Scott-continuous.
We must first show that the integral makes sense, namely that the product map hg is in (X, ). The multiplication map a, b ↦ ab is Scott-continuous from [0, ∞] × [0, ∞] to [0, ∞]; hence, for every t > 0, ab > t if and only if there are two rational numbers p, q > 0 such that p < a, q < b and pq > t. For every t > 0, (hg)^-1 (]t, ∞]) is then equal to ⋃_p, q ∈ ℚ, pq>t h^-1 (]p, ∞]) ∩ g^-1 (]q, ∞]). That is a countable union, hence it is in . Therefore hg is in (X, ).
Since product by g is linear and
ω-continuous (even Scott-continuous), the remaining claims
follow from items (iv) and (v) of
Proposition <ref>.
Proposition <ref> then turns this ω-continuous
linear function into an ω-continuous valuation, defined as
follows.
For every ω-topological space (X, ), for every
g ∈ (X, ), and for every ω-continuous
valuation μ on (X, ), we define:
(g ·μ) (U) ≔ ∫ χ_U . g dμ
for every U ∈.
Lemma <ref> and Proposition <ref> together
yield the following.
For every ω-topological space (X, ), for every
g ∈ (X, ), and for every ω-continuous
valuation μ on (X, ),
* g ·μ is an ω-continuous valuation;
* g ·μ is a continuous valuation if (X, ) is a
topological space and μ is a continuous valuation;
* For every h ∈ (X, ),
∫ h d (g ·μ) = ∫ h g dμ.
In particular, if is a σ-algebra, then g ·μ is
a measure for every measure μ and every measurable map g from
X to . The measure g ·μ is sometimes written as
g dμ.
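For example, if g ≔ χ_V for some V ∈ , then χ_U . χ_V = χ_U ∩ V, so (χ_V ·μ) (U) = ∫_0^1 μ (U ∩ V) dt = μ (U ∩ V) for every U ∈ : the density χ_V restricts μ to V.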
Given two valuations μ and ν on (X, ), one may wonder
when one can write ν as g ·μ for some suitable map
g—this is the goal of this paper. If ν = g ·μ, then we
will see that ν and μ must satisfy two conditions: absolute
continuity,
and what we call the Hahn decomposition property, after the Hahn
decomposition theorem of measure theory.
§.§ Absolute continuity
We take the following definition of absolute continuity. While
different from the usual definition, it is not entirely unusual, see
for example <cit.>.
Given two valuations μ and ν on a Pervin space (X, ), we say that ν is absolutely continuous with respect to μ if and only if for every U_0 ∈ such that ν (U_0) < ∞, for every ϵ > 0, there is an η > 0 such that for every U ∈ such that U ⊆ U_0 and μ (U) < η, ν (U) < ϵ.
When ν is a bounded valuation, the definition of absolute continuity simplifies to: for every ϵ > 0, there is an η > 0 such that for every U ∈ such that μ (U) < η, ν (U) < ϵ.
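For instance, on ℝ with its standard topology, the Dirac valuation δ_0 is not absolutely continuous with respect to the valuation induced by Lebesgue measure λ: δ_0 is bounded, yet for ϵ ≔ 1/2 no η > 0 works, since the open set U ≔ ]-η/3, η/3[ satisfies λ (U) < η while δ_0 (U) = 1.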
The usual definition of absolute continuity is given as item (2) in
the following proposition, where we show that it is equivalent in the
case of σ-finite measures. A valuation ν on a Pervin space
(X, ) is σ-finite if and only if there is a countable family of sets E_n ∈ , n ∈ ℕ, such that ⋃_n ∈ ℕ E_n=X and ν (E_n) < ∞ for each n ∈ ℕ. Replacing E_n by ⋃_k=0^n E_k if necessary, we may assume that (E_n)_n ∈ ℕ is a monotone sequence.
This definition applies to measures as well, in which case we retrieve
the usual notion of σ-finiteness. Considering
Remark <ref>, the following is well-known for
bounded measures <cit.>, and the proof
is entirely similar.
Let ν, μ be two measures on a measurable space (X, ),
and consider the following statements.
* ν is absolutely continuous with respect to μ;
* for every U ∈ such that μ (U)=0, ν (U)=0.
Then (2) implies (1), and (1) and (2) are equivalent if
ν is σ-finite.
Let us assume that (2) holds, but not (1). There is an
ϵ > 0 and a set U_0 ∈ such that
ν (U_0) < ∞ and, for every n ∈ ℕ, letting η ≔ 1/2^n, there is an element V_n ∈ with V_n ⊆ U_0 such that μ (V_n) < 1/2^n but ν (V_n) ≥ ϵ. In particular, ∑_n=0^∞ μ (V_n) < ∞, so by the first Borel-Cantelli lemma <cit.>, μ (⋂_m ∈ ℕ ⋃_n ≥ m V_n) = 0. Using
(2), it follows that
ν (⋂_m ∈⋃_n ≥ m V_n) = 0. The sets
⋃_n ≥ m V_n form a decreasing sequence of elements of
included in U_0, hence of finite ν-measure. Therefore
inf_m ∈ν (⋃_n ≥ m V_n) = 0. This is
impossible, since for every m ∈,
ν (⋃_n ≥ m V_n) ≥ν (V_m) ≥ϵ.
Conversely, we assume that (1) holds and that ν is
σ-finite. Let (E_n)_n ∈ be a monotone sequence
of elements of covering X and such that
ν (E_n) < ∞ for every n ∈. Let also U ∈
be such that μ (U)=0. For every n ∈, U ∩ E_n is
included in U, and μ (U ∩ E_n) = 0, so by absolute
continuity, for every ϵ > 0, ν (U ∩ E_n) < ϵ.
Since ϵ is arbitrary, ν (U ∩ E_n)=0. Then
ν (U) = ν (U ∩ ⋃_n ∈ ℕ E_n) = sup_n ∈ ℕ ν (U ∩ E_n)=0.
We will use the following often.
Let (X, ) be a Pervin space, μ be a
valuation on (X, ) and g ∈ (X, ).
For every U ∈, (g ·μ) (U) = ∫_0^∞μ (U ∩ g^-1 (]t, ∞])) dt.
Let ν g ·μ. For every U ∈, we write
ν (U) ∫χ_U g dμ as
∫_0^∞μ ((χ_U g)^-1 (]t, ∞])) dt. For every
t ∈,
(χ_U g)^-1 (]t, ∞]) = U ∩ g^-1 (]t, ∞]),
whence the result.
Let μ and ν be two valuations on a Pervin space
(X, ). If ν = g ·μ for some function
g ∈ (X, ),
then ν is
absolutely continuous with respect to μ.
Let us fix ϵ > 0 and U_0 ∈ such that ν (U_0) < ∞. Let h (t) ≔ μ (U_0 ∩ g^-1 (]t, ∞])), and h_N (t) be defined as h (t) if t ≤ N, 0 otherwise. The maps h_N, N ∈ ℕ, are antitonic, and their pointwise supremum is h. Using Lemma <ref>, with U ≔ U_0, and Fact <ref>, ν (U_0) = ∫_0^∞ h (t) dt = sup_N ∈ ℕ ∫_0^∞ h_N (t) dt. Since ν (U_0) < ∞, for some N ∈ ℕ ∖ {0}, ∫_0^∞ h_N (t) dt > ν (U_0) - ϵ/2. Then ∫_N^∞ h (t) dt < ϵ/2.
Let η ≔ ϵ / (2N). For every U ∈ such
we show that
ν (U) < ϵ as follows.
ν (U)
= (g ·μ) (U)
= ∫_0^∞μ (U ∩ g^-1 (]t, ∞])) dt
by Lemma <ref>
= ∫_0^N μ (U ∩ g^-1 (]t, ∞])) dt + ∫_N^∞μ
(U ∩ g^-1 (]t, ∞])) dt
≤ N μ (U) + ∫_N^∞ h (t) dt
< N μ (U) + ϵ/2 < N η + ϵ/2 = ϵ.
§.§ Absolute continuity is not enough
Given a topological space (X, ), let ℬ be its Borel σ-algebra. A Borel measure, namely a measure on (X, ℬ), induces a valuation on (X, ) by restriction to the open sets. The Borel measures for which the induced valuation is continuous are traditionally called τ-smooth. By
Adamski's theorem <cit.>, it is
equivalent to require all Borel measures on (X, ) to be
τ-smooth, or to require (X, ) to be hereditarily
Lindelöf; a space is hereditarily Lindelöf if and only if every
family (U_i)_i ∈ I of open subsets has a countable subfamily
with the same union. All second-countable spaces are hereditarily
Lindelöf.
There has been quite some literature on the converse question, among
which <cit.>: given a
continuous valuation ν on (X, ), does ν extend to a
(necessarily τ-smooth) Borel measure? One of the most general
theorems of this kind is the following <cit.>:
every continuous valuation on an LCS-complete space extends to a Borel
measure; an LCS-complete space is a homeomorph of a G_δ
subset of a locally compact sober space. The class of LCS-complete
spaces includes all locally compact sober spaces, Matthew de Brecht's
quasi-Polish spaces <cit.>, and therefore also all
Polish spaces.
Additionally, a standard use of the π-λ theorem
<cit.> shows that any σ-finite
continuous valuation ν on (X, ) extends to a unique
Borel measure. That Borel measure μ is such that there exists a
monotone sequence (U_n)_n ∈ of open sets covering X and
such that μ (U_n) < ∞ for every n ∈. This is a
stricter condition than simply being σ-finite, since U_n is
required to be open, and Borel measures having this property are
sometimes called moderated.
Since quasi-Polish spaces are second-countable
<cit.>, hence hereditarily Lindelöf,
it follows that σ-finite continuous valuations are in
one-to-one correspondence with moderated τ-smooth measures on
quasi-Polish spaces.
We use this to transport the classical Radon-Nikodým theorem over to
the world of continuous valuations.
In one direction, given any σ-finite continuous valuation μ on an LCS-complete space (X, ), let μ̂ be its unique extension to a Borel measure. For every measurable map g from X to [0, ∞] (not just any lower semicontinuous map), we can form the measure g · μ̂ on (X, ℬ). This induces an ω-continuous valuation by restriction to , which we write as g ·μ, extending Definition <ref> to a larger class of density functions. With this definition, we have the following.
For any two σ-finite continuous valuations on an LCS-complete
space (X, ), the following are equivalent:
* ν is absolutely continuous with respect to
μ;
* there is a measurable map g ∈ (X, ) such
that ν = g ·μ.
Additionally, g is unique up to μ-null sets.
The condition ν = g ·μ is equivalent to ν̂ = g · μ̂, where ν̂ is the unique Borel measure extending ν, by our (re)definition of g ·μ. We conclude by invoking the classical Radon-Nikodým theorem.
Although this is a positive result, this gives us a recipe to show
that absolute continuity is not enough for two σ-finite
ω-continuous valuations to have a density
g ∈ (X, ): find measurable maps that are equal to no
lower semicontinuous map up to a μ-null set.
We provide two counter-examples. The first one relies on the
existence of non-trivial specialization orderings in non-T_1 spaces.
The second one takes place in with its standard metric
topology.
Let μ ≔ a δ_x + b δ_y, where a, b > 0 and x
and y are two points of an LCS-complete space (X, ) with
x < y. (We let x ≤ y if and only if every U ∈
containing x contains y; this is the specialization
preordering of (X, ). A space is T_0 if and only if
≤ is antisymmetric, and every LCS-complete space is T_0. We
write x < y if x ≤ y and y ≰x.) Next, consider any
g ∈ (X, ) such that g (x) > g (y). For
example, taking g ≔ 1-h fits, where h is any lower semicontinuous map from (X, ) to [0, 1] ⊆ [0, ∞]
such that h (x) ≠ h (y); indeed, every lower semicontinuous map
is monotonic. We note that g is equal to no lower semicontinuous
map up to any μ-null set, because g is antitonic,
lower semicontinuous maps are monotonic, and the
μ-null sets are the Borel sets that contain neither
x nor y. Therefore g ·μ has no lower semicontinuous
density with respect to μ. For a concrete instance of this
construction, consider Sierpiński space for (X, ), namely
({0, 1}, {∅, {1}, {0, 1}}), x ≔ 0, y ≔ 1, g (0) ≔ 1, g (1) ≔ 0.
Let μ be the bounded discrete valuation δ_0 + ∑_n ∈ ℕ (1/2^n) δ_1/2^n on ℝ with its standard topology. Let g map every non-zero real
number to 0, and 0 to 1. This is a measurable map. The
μ-null sets are the Borel sets that do not contain 0
or any point 1/2^n, n ∈. If g were equal to some
h ∈ () up to some μ-null set, then we
would have h (0)=1 and h (1/2^n)=0 for every n ∈. But
then h^-1 (]1/2, ∞]) would contain 0, hence 1/2^n for
n large enough, and that is impossible since h (1/2^n)=0. It
follows that g ·μ has no lower semicontinuous density with
respect to μ.
We will therefore look for additional conditions imposed by the
existence of g ∈ (X, ) such that ν = g ·μ.
§.§ The Smiley-Horn-Tarski theorem
Let 𝒜 () be the smallest algebra of subsets of X
containing . Its elements are the unions of finite collections
of pairwise disjoint crescents. A crescent is a difference
U ∖ V of elements U, V ∈ ; we can assume
V ⊆ U without loss of generality.
The Smiley-Horn-Tarski theorem
<cit.> states that every bounded
valuation ν on (X, ) extends to a unique (bounded)
valuation on (X, 𝒜 ()). In general, every valuation
ν on (X, ) (not necessarily bounded) extends to a valuation
on (X, 𝒜 ()), but that extension may fail to be unique
<cit.>. We will usually write an
extension of ν on (X, 𝒜 ()) with the same letter
ν, although one should be careful that such extensions may fail to
be unique when ν is not bounded. Still, some uniqueness remains:
if C ∈𝒜 () can be written as a disjoint union of
crescents U_i V_i with V_i ⊆ U_i (1≤ i ≤ n),
and if ν (U_i) < ∞ for every i, then necessarily
ν (C) = ∑_i=1^n (ν (U_i) - ν (V_i)).
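For a concrete picture (an illustration of ours): take X ≔ ℝ and ≔ {]t, ∞[ | t ∈ ℝ} ∪ {∅, ℝ}. The crescents are then ∅, ℝ, and the intervals of the forms ]t, ∞[, ]-∞, s] and ]t, s], and 𝒜 () consists of the finite disjoint unions of such sets.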
Let (X, ) be a Pervin space, μ be a bounded valuation on
(X, ) and g ∈ (X, ). The function ν that
maps every C ∈𝒜 () to
∫_0^∞μ (C ∩ g^-1 (]t, ∞])) dt is a
valuation on 𝒜 () that extends g ·μ to
(X, 𝒜 ()).
We call the valuation ν above the canonical extension of
g ·μ to (X, 𝒜 ()). There may be others:
while μ is bounded, g ·μ may fail to be.
The definition of ν (C) makes sense, since the extension of
μ to 𝒜 (), which is required to make sense of
μ (C ∩ g^-1 (]t, ∞])) is unique, owing to the fact
that μ is bounded. It is clear that ν (∅)=0. The
modularity and the monotonicity of ν on 𝒜 ()
follow from the modularity and the monotonicity of μ. Hence
ν is a valuation, and it extends g ·μ by
Lemma <ref>.
Since extensions to 𝒜 () are unique for bounded
valuations, we obtain the following.
Let (X, ) be a Pervin space, μ be a bounded valuation on
(X, ) and g ∈ (X, ). If ν ≔ g ·μ is bounded, then its unique extension to 𝒜 ()
is such that ν (C) = ∫_0^∞μ (C ∩ g^-1 (]t,
∞])) dt for every C ∈𝒜 ().
§.§ Signed valuations
In order to state the Hahn decomposition property, we need to
introduce signed valuations.
A signed valuation is a map ς from to ℝ (not [0, ∞]) that is strict (ς (∅) = 0) and modular (for all U, V ∈ , ς (U ∪ V) + ς (U ∩ V) = ς (U) + ς (V)).
Typical examples of signed valuations are given by the maps ν - r ·μ, where ν and μ are bounded valuations on (X, ), and r ∈ [0, ∞[.
We have the following analogue of the bounded form of the
Smiley-Horn-Tarski theorem. The proof uses ingredients similar to
Proposition <ref>, and can also be used to derive the
classical Smiley-Horn-Tarski theorem.
Let be a lattice of subsets of a set X, and ς be
a signed valuation on (X, ). Then ς extends to a
unique signed valuation on (X, 𝒜 ()). The
extension—still written ς—satisfies
ς (U V) = ς (U) - ς (U ∩ V) =
ς (U ∪ V) - ς (V) for all U, V ∈.
If ς^% is any signed valuation extending ς on
(X, 𝒜 ()), then it is defined uniquely on crescents
by the fact that ς^% (U V) must be equal to
ς (U) - ς (U ∩ V) and also to
ς (U ∪ V) - ς (V) for all U, V ∈, by
modularity and the fact that
ς^% ((U ∩ V) ∩ (U V)) and
ς^% (V ∩ (U V)) must both be equal to
ς^% (∅)=0; then ς^% is uniquely
determined on finite disjoint unions of crescents by additivity.
We proceed as follows to prove the existence of ς^%. Let
M^+ be the set of functions h ∈ (X, ) taking their values in ℕ. One can write any such h as ∑_i=1^∞ χ_U_i in a unique way, where U_1 ⊇ ⋯ ⊇ U_n ⊇ ⋯ form an antitone sequence of elements of , with U_n = ∅ for n large enough. Indeed, U_i is determined uniquely as h^-1 ([i, ∞]) (which is equal to h^-1 (]i-ϵ, ∞]) for any ϵ ∈ ]0,1[, hence is in ). On every such h ∈ M^+, let F (h) ≔ ∑_i=1^∞ ς (U_i). This is a finite sum, because ς is strict.
With h as above and U ∈, h+χ_U is equal to
∑_i=1^∞χ_V_i where V_i = (h+ χ_U)^-1 ([i,
∞]) = h^-1 ([i, ∞]) ∪ (h^-1 ([i-1, ∞]) ∩
U) = U_i ∪ (U_i-1∩ U); when i=1, we use the convention
that U_0=X. Hence:
F (h+χ_U) = ∑_i=1^∞ς (U_i ∪ (U_i-1∩ U))
= ∑_i=1^∞(ς (U_i) + ς (U_i-1∩ U)
- ς (U_i ∩ U))
by modularity; note that U_i ∩
U_i-1∩ U simplifies to U_i ∩ U
= F (h) + ς (U),
by canceling the telescoping terms ς (U_i-1∩ U) and
ς (U_i ∩ U), so that only
ς (U_0 ∩ U) = ς (U) remains.
For every h' ∈ M^+, written as ∑_j=1^∞χ_V_j
where V_1 ⊇⋯⊇ V_n ⊇⋯ form an
antitone sequence of elements of , with V_n = ∅ for
n large enough, we obtain that F (h+h') = F (h) + F (h') by
induction on the number of non-empty sets V_n.
We can therefore extend F to an additive map from M to ,
where M is the collection of differences f-g of two elements of
M^+, by F (f-g) ≔ F (f) - F (g). This is unambiguous: if
f-g=f'-g', then f+g' = f'+g, so F (f)+F(g')=F(f')+F(g), or
equivalently F (f)-F(g) = F(f')-F(g').
Let us define ς^+ (C) ≔ F (χ_C) for every subset C of X such that χ_C ∈ M. Amongst those, we find the crescents U ∖ V (with U, V ∈ and V ⊆ U), since χ_U ∖ V = χ_U - χ_V. We also find the finite disjoint unions of crescents C_1, …, C_n, since their characteristic map is ∑_i=1^n χ_C_i. Now ς^+ is strict since F (0)=0, and modular on (X, 𝒜 ()). The latter rests on the fact that for any sets C and C', χ_C ∪ C' + χ_C ∩ C' = χ_C + χ_C': then ς^+ (C ∪ C') + ς^+ (C ∩ C') = F (χ_C ∪ C' + χ_C ∩ C') (since F is additive) = F (χ_C + χ_C') = ς^+ (C) + ς^+ (C') (since F is additive, once again).
§.§ The Hahn decomposition property
Let (X, ) be a Pervin space. A signed valuation
ς→ has the Hahn decomposition
property if and only if there is an element U of such
that:
* for every crescent C included in U, ς (C) ≥ 0;
* for every crescent C disjoint from U, ς (C) ≤ 0.
In this definition, we extend ς implicitly to a valuation on
(X, 𝒜 ()), using Proposition <ref>,
in order to make sense of ς (C). We will call the set U
given above a witness to the Hahn decomposition property.
Let μ and ν be two bounded valuations on a Pervin space
(X, ). If ν = g ·μ for some
g ∈ (X, ), then for every r ∈, the signed
valuation ν - r ·μ has the Hahn decomposition
property—and one can take U ≔ g^-1 (]r, ∞]) as a witness to the latter.
We take U ≔ g^-1 (]r, ∞]). For every crescent
C ⊆ U, C ∩ g^-1 (]t, ∞]) = C for every
t ∈ [0, r], since g^-1 (]t, ∞]) contains U in that
case. Hence:
ν (C) = ∫_0^∞μ (C ∩ g^-1 (]t, ∞])) dt
by Corollary <ref>
= ∫_0^r μ (C ∩ g^-1 (]t, ∞])) dt + ∫_r^∞μ (C ∩ g^-1 (]t, ∞])) dt
≥∫_0^r μ (C ∩ g^-1 (]t, ∞])) dt
= ∫_0^r μ (C) dt = r ·μ (C).
For every crescent C disjoint from U,
C ∩ g^-1 (]t, ∞]) is empty for every t ≥ r, since
g^-1 (]t, ∞]) is included in U in that case. Hence:
ν (C) = ∫_0^∞μ (C ∩ g^-1 (]t, ∞])) dt
= ∫_0^r μ (C ∩ g^-1 (]t, ∞])) dt
≤∫_0^r μ (C) dt = r ·μ (C).
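As a concrete instance of the proposition: with g ≔ χ_V for some V ∈ (so that ν (U) = μ (U ∩ V) for every U ∈ , as computed earlier) and any r < 1, the witness is U = g^-1 (]r, ∞]) = V; indeed a crescent C ⊆ V satisfies ν (C) = μ (C) ≥ r ·μ (C), while a crescent C disjoint from V satisfies ν (C) = 0 ≤ r ·μ (C).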
Given any valuation ν on a Pervin space (X, ), and any
U_0 ∈ , we can define a valuation ν_|U_0 by letting ν_|U_0 (U) ≔ ν (U ∩ U_0) for every U ∈ ; then
ν_|U_0 is an ω-continuous (resp., continuous) valuation
if ν is. We also note that ν_|U_0 is a bounded valuation if
and only if ν (U_0) < ∞. We use this in the proof of the
following corollary, and we will use the notion again later.
Let μ and ν be two valuations on a Pervin space
(X, ). If ν = g ·μ for some
g ∈ (X, ), then for every U_0 ∈ such that
ν (U_0) < ∞ and μ (U_0) < ∞, for every
r ∈, the signed valuation ν_|U_0 - r ·μ_|U_0
has the Hahn decomposition property.
If ν (U_0) < ∞ and μ (U_0) < ∞, then
ν_|U_0 = g ·μ_|U_0, since for every U ∈,
ν_|U_0 (U) = ν (U ∩ U_0) = ∫_0^∞μ (U ∩ U_0
∩ g^-1 (]t, ∞])) dt (by Lemma <ref>)
= ∫_0^∞μ_|U_0 (U ∩ g^-1 (]t, ∞])) dt = (g
·μ_|U_0) (U). We conclude by using
Proposition <ref>.
§ THE EXISTENCE OF DENSITY MAPS
We now show that absolute continuity and the Hahn decomposition
property suffice to guarantee the existence of a density function.
The following are the two key lemmata. We write 𝔻 for the set of dyadic numbers, namely rational numbers of the form p/2^n with p ∈ ℤ and n ∈ ℕ. We also use the Smiley-Horn-Tarski
theorem in order to make sense of ν (C) below, and the canonical
extension given in Lemma <ref> in order to make sense
of (g ·μ) (C).
Let (X, ) be a Pervin space, g ∈ (X, ), and
ν, μ be two bounded valuations on (X, ). Let us
assume that for every non-negative dyadic number r, for every crescent
C ⊆ g^-1 (]r, ∞]), ν (C) ≥ r ·μ (C).
Then for every C ∈𝒜 (),
ν (C) ≥ (g ·μ) (C). In particular,
ν≥ g ·μ on (X, ).
It suffices to show the claim for every crescent C. Once this is
done, the claim that ν (C) ≥ (g ·μ) (C) for every C
∈𝒜 () follows from the fact that C is a disjoint union
of crescents, and that ν and g ·μ are additive.
We fix a crescent C. By definition of canonical extensions
(Lemma <ref>),
(g ·μ) (C) = ∫_0^∞μ (C ∩ g^-1 (]t, ∞]))
dt.
The main ingredient of the proof is summarized in
Figure <ref>: the sum of the areas of the
vertical bands on the left is equal to the sum of the areas of the
horizontal bands on the right. We will rely on that figure in what
follows.
Let f (t) ≔ μ (C ∩ g^-1 (]t, ∞])), and f_N (t) ≔ f (t) if t ≤ N, 0 otherwise. In the figure, f is shown as the solid decreasing curve, both on the left-hand side and on the right-hand side. Since f is the pointwise supremum of (f_N)_N ∈ ℕ, (g ·μ) (C) = sup_N ∈ ℕ ∫_0^∞ f_N (t) dt by Fact <ref>.
We fix an arbitrary r ∈ [0, ∞[ such that r < (g ·μ) (C). For N ∈ ℕ large enough, r ≤ ∫_0^∞ f_N (t) dt = ∫_0^N μ (C ∩ g^-1 (]t, ∞])) dt ≤ ∑_k=1^N2^N (1/2^N) μ (C ∩ g^-1 (](k-1)/2^N, ∞])). The latter is the sum of the areas of the vertical bands on the left of Figure <ref>.
Reorganizing the summation, that is also equal to the sum of the
areas of the horizontal bands on the right, so:
r ≤ ∑_k=1^N2^N (k/2^N) μ (C ∩ g^-1 (](k-1)/2^N, ∞]) ∖ g^-1 (]k/2^N, ∞]))
+ N μ (C ∩ g^-1 (]N, ∞])).
The final term in the sum is the area of the bottommost band. The
sum of the terms with 1≤ k≤ N is bounded from above by
∑_k=1^N k/2^Nμ (C) ≤N(N+1)/2^N+1μ (C). For every k between N+1 and N2^N, the crescent
C' ≔ C ∩ g^-1 (](k-1)/2^N, ∞]) ∖ g^-1 (]k/2^N, ∞]) is included in g^-1 (](k-1)/2^N, ∞]), so
ν (C') ≥ (k-1)/2^N μ (C') by assumption. Similarly,
ν (C ∩ g^-1 (]N, ∞])) ≥ N μ (C ∩ g^-1 (]N,
∞])).
It follows that:
r ≤ N(N+1)/2^N+1 μ (C) + ∑_k=N+1^N2^N (k/(k-1)) ν (C ∩ g^-1 (](k-1)/2^N, ∞]) ∖ g^-1 (]k/2^N, ∞]))
+ ν (C ∩ g^-1 (]N, ∞])).
In the
middle sum, k/(k-1) is smaller than or equal to (N+1)/N. We
also have
ν (C ∩ g^-1 (]N, ∞])) ≤N+1/Nν (C ∩
g^-1 (]N, ∞])), because N+1/N≥ 1. Hence:
r ≤ N(N+1)/2^N+1 μ (C) + (N+1)/N ∑_k=N+1^N2^N ν (C ∩ g^-1 (](k-1)/2^N, ∞]) ∖ g^-1 (]k/2^N, ∞]))
+ N+1/Nν (C ∩ g^-1 (]N, ∞]))
By the additivity of ν, the right-hand side is equal to
N(N+1)/2^N+1μ (C) + N+1/Nν (C ∩ g^-1
(]N/2^N, ∞])). Since C ∩ g^-1 (]N/2^N, ∞]) is
included in C, and ν is monotonic,
r ≤N(N+1)/2^N+1μ (C) + N+1/Nν (C).
We let N tend to ∞, and we obtain that r ≤ν (C).
Taking suprema over all r < (g ·μ) (C),
(g ·μ) (C) ≤ν (C).
We have a somewhat symmetric situation in the following lemma, except
that we cannot conclude that (g ·μ) (C) ≥ν (C) without
further assumptions. Once again, we use the canonical extension of g
·μ to make sense of (g ·μ) (C).
Let (X, ) be a Pervin space, g ∈ (X, ), and
ν, μ be two bounded valuations on (X, ). Let us
assume that for every non-negative dyadic number
r, for every crescent C disjoint from g^-1 (]r, ∞]), ν (C) ≤ r ·μ (C). Then there is a directed countable family (U_N)_N ∈ ℕ of elements of with the following properties:
* ν (X ∖ ⋃_N ∈ ℕ U_N)=0;
* for every C ∈ 𝒜 (), for every N ∈ ℕ, (g ·μ) (C) ≥ N/(N+1) ν (C ∩ U_N) + N (μ (C ∩ U_N) - 1/(N+1) ν (C ∩ U_N));
By definition of canonical extensions (Lemma <ref>), (g ·μ) (C) = ∫_0^∞ μ (C ∩ g^-1 (]t, ∞])) dt ≥ ∑_k=1^N2^N (1/2^N) μ (C ∩ g^-1 (]k/2^N, ∞])). The latter is the area of the vertical bands on the left of Figure <ref>, which rewrites as the area of the horizontal bands on the right, namely:
∑_k=1^N2^N-1 (k/2^N) μ (C ∩ g^-1 (]k/2^N, ∞]) ∖ g^-1 (](k+1)/2^N, ∞]))
+ N μ (C ∩ g^-1 (]N, ∞])).
The last term is the area of the bottommost band.
For each k, the crescent C' ≔ C ∩ g^-1 (]k/2^N, ∞]) ∖ g^-1 (](k+1)/2^N, ∞]) is disjoint from g^-1 (](k+1)/2^N, ∞]), so by assumption, ν (C') ≤ ((k+1)/2^N) ·μ (C').
Therefore:
(g ·μ) (C)
≥ ∑_k=1^N2^N-1 (k/(k+1)) ν (C ∩ g^-1 (]k/2^N, ∞]) ∖ g^-1 (](k+1)/2^N, ∞]))
+ N μ (C ∩ g^-1 (]N, ∞])).
Keeping only the terms from the summation with k ≥ N and
observing that k/k+1≥N/N+1 for all such k,
(g ·μ) (C)
≥ ∑_k=N^N2^N-1 (N/(N+1)) ν (C ∩ g^-1 (]k/2^N, ∞]) ∖ g^-1 (](k+1)/2^N, ∞]))
+ N μ (C ∩ g^-1 (]N, ∞]))
= N/(N+1) ν (C ∩ g^-1 (]N/2^N, ∞]) ∖ g^-1 (]N, ∞]))
+ N μ (C ∩ g^-1 (]N, ∞]))
= N/N+1ν (C ∩ g^-1 (]N/2^N, ∞]))
+ N (μ (C ∩ g^-1 (]N, ∞])) - 1/N+1ν (C ∩ g^-1 (]N, ∞]))).
Let U_N ≔ g^-1 (]N/2^N, ∞]) for every N ∈ ℕ: we have just proved (ii). The family (U_N)_N ∈ ℕ is directed: given any i, j ∈ ℕ, there is an N ∈ ℕ such that N/2^N ≤ i/2^i, j/2^j, because N/2^N tends to 0 as N tends to ∞; and then U_N contains both U_i and U_j. Finally, ⋃_N ∈ ℕ U_N = g^-1 (]0, ∞]). Let C be the crescent X ∖ ⋃_N ∈ ℕ U_N. This is disjoint from g^-1 (]r, ∞]) for every non-negative dyadic number r, so ν (C) ≤ r ·μ (C) for every such r by assumption. As a consequence, ν (C)=0, and this is (i).
The role of absolute continuity
is as follows.
Let (X, ) be a Pervin space, and ν and μ be two
bounded valuations on (X, ). Let (U_N)_N ∈ be a
countable family of elements of .
If ν is absolutely continuous with respect to μ, then for
every U ∈, for every ϵ > 0, there is an
N_0 ∈ ℕ such that for every N ≥ N_0, N (μ (U ∩ U_N) - 1/(N+1) ν (U ∩ U_N)) ≥
-ϵ.
Let us fix an arbitrary ϵ > 0. Using
Remark <ref>, since ν and μ are bounded,
we can find η > 0 such that for every V ∈ such that
μ (V) < η, ν (V) < ϵ. Since ν is bounded once
again, there is an N_0 ∈ such that N_0 η≥ν (X).
For every N ≥ N_0, either μ (U ∩ U_N) < η, in which
case ν (U ∩ U_N) - ϵ < 0 ≤ N μ (U ∩ U_N), or
μ (U ∩ U_N) ≥η, in which case
N μ (U ∩ U_N) ≥ N_0 η≥ν (X) ≥ν (U ∩ U_N)
- ϵ. Whatever the alternative, we have
N μ (U ∩ U_N) - ν (U ∩ U_N) ≥ -ϵ, and
therefore
N (μ (U ∩ U_N) - 1/N+1ν (U ∩ U_N)) ≥
-ϵ, for every N ≥ N_0.
The following is the only place in this section where we need our
valuations to be ω-continuous.
Let ν be an ω-continuous bounded valuation on an
ω-topological space (X, ), and let
(U_N)_N ∈ ℕ be a countable directed family of elements of such that ν (X ∖ ⋃_N ∈ ℕ U_N)=0. For every C ∈ 𝒜 (), sup_N ∈ ℕ N/(N+1) ν (C ∩ U_N) ≥ ν (C).
Let U_∞ ≔ ⋃_N ∈ ℕ U_N. For every C ∈ 𝒜 (), the family (ν (C ∩ U_N))_N ∈ ℕ is directed. This is because (U_N)_N ∈ ℕ is directed and U ↦ ν (C ∩ U) is monotonic. Indeed, if U ⊆ V, then ν (C ∩ V) = ν (C ∩ U) + ν (C ∩ (V ∖ U)) ≥ ν (C ∩ U).
We claim that sup_N ∈ ℕ ν (C ∩ U_N) ≥ ν (C ∩ U_∞) for every C ∈ 𝒜 (); the reverse inequality, hence equality, follows by monotonicity of U ↦ ν (C ∩ U).
ν, and since + is Scott-continuous, it is enough to show this
when C is a crescent, say U' ∖ V', where U', V' ∈ and V' ⊆ U'. For every ϵ > 0, there is an
N ∈ such that
ν (U' ∩ U_N) ≥ν (U' ∩ U_∞) - ϵ, since
ν is ω-continuous. Since ν is monotonic,
ν (V' ∩ U_N) ≤ν (V' ∩ U_∞), and therefore
ν (C ∩ U_N) = ν (U' ∩ U_N) - ν (V' ∩ U_N) ≥ν
(U' ∩ U_∞) - ν (V' ∩ U_∞) - ϵ = ν (C ∩
U_∞) - ϵ.
Now, since ν (X ∖ U_∞)=0, we have ν (C ∩ U_∞) = ν (C). (Formally, ν (C ∖ U_∞) ≤ ν (X ∖ U_∞)=0, and then ν (C) = ν (C ∩ U_∞) + ν (C ∖ U_∞) = ν (C ∩ U_∞).) Therefore sup_N ∈ ℕ ν (C ∩ U_N) ≥ ν (C). Since multiplication is Scott-continuous on [0, ∞] and sup_N ∈ ℕ N/(N+1)=1, we conclude.
Let μ and ν be two bounded ω-continuous valuations on
an ω-topological space (X, ). If ν is absolutely
continuous with respect to μ and if for every non-negative
dyadic number r ∈∩, for every crescent C disjoint
from g^-1 (]r, ∞]), ν (C) ≤ r ·μ (C), then
g ·μ≥ν on (X, ).
Let U_N be as in Lemma <ref>. For every
U ∈, for every N ∈, (g ·μ) (U)
is larger than or equal to the sum of
N/N+1ν (U ∩ U_N) and of
N (μ (U ∩ U_N) - 1/N+1ν (U ∩ U_N)). For every
ϵ > 0, the latter is larger than or equal to -ϵ
for N large enough by Lemma <ref>, and the
former is larger than or equal to ν (U) - ϵ for N large
enough by Lemma <ref>. Hence
(g ·μ) (U) ≥ν (U) - ϵ. We conclude since
ϵ > 0 is arbitrary.
We now go beyond bounded valuations, and on to σ-finite valuations.
Let μ and ν be two ω-continuous valuations on an
ω-topological space (X, ). If both ν and μ
are σ-finite, there is a monotone sequence (E_n)_n ∈ ℕ of elements of such that ⋃_n ∈ ℕ E_n=X and ν (E_n), μ (E_n) < ∞ for each n ∈ ℕ.
Let (F_n)_n ∈ ℕ be a monotone sequence of elements of such that ν (F_n) < ∞ and ⋃_n ∈ ℕ F_n=X, and let (G_n)_n ∈ ℕ play the same rôle with μ. Then let E_n ≔ F_n ∩ G_n for each n ∈ ℕ.
We will call any monotone sequence (E_n)_n ∈ ℕ satisfying
the conclusion of Lemma <ref> a witness of
the joint σ-finiteness of ν and μ.
Let (X, ) be an ω-topological space, and μ and
ν be two σ-finite ω-continuous valuations on
(X, ). Let (E_n) be any witness of joint
σ-finiteness of ν and μ. Then the following
properties are equivalent:
* there is a density function g ∈ (X, ) such
that ν = g ·μ;
* the following two conditions are met:
(2a) ν is absolutely continuous with respect to μ;
(2b) for every n ∈ ℕ, for every r ∈ [0, ∞[, ν_|E_n - r ·μ_|E_n has the Hahn decomposition property.
The implication (1) ⇒ (2) is by Proposition <ref> and Corollary <ref>.
In the converse direction, let (E_n)_n ∈ ℕ be as given in Lemma <ref>. For each n ∈ ℕ and for each non-negative rational number q, ν_|E_n - q ·μ_|E_n
has the Hahn decomposition property, so there is an element
U_nq∈ such that every crescent C included in U_nq
satisfies ν (C ∩ E_n) ≥ q ·μ (C ∩ E_n) and every
crescent C disjoint from U_nq satisfies
ν (C ∩ E_n) ≤ q ·μ (C ∩ E_n).
Since is an ω-topology, V_q ≔ ⋃_{q' ∈ ℚ, q' ≥ q} ⋃_{n ∈ ℕ} (E_n ∩ U_nq') is in for every q ∈ ℚ, q ≥ 0.
Moreover, (V_q)_q ∈, q ≥ 0 forms an antitonic chain:
if q ≤ q' then V_q ⊇ V_q'.
Given n ∈ ℕ and q ∈ ℚ, q ≥ 0, we claim that for
every crescent C ⊆ V_q,
ν (C ∩ E_n) ≥ q ·μ (C ∩ E_n), and that every
crescent C disjoint from V_q satisfies
ν (C ∩ E_n) ≤ q ·μ (C ∩ E_n). The second
property is clear: if C is disjoint from V_q, then it is
disjoint from E_n ∩ U_nq, so C ∩ E_n is a crescent
disjoint from U_nq, whence
ν ((C ∩ E_n) ∩ E_n) ≤ q ·μ ((C ∩ E_n) ∩
E_n). For the first property, where C ⊆ V_q, let us
write C as U V where U, V ∈. We enumerate the
rational numbers larger than or equal to q as
(q_m)_m ∈. Since C ⊆ V_q,
ν (C ∩ E_n) = ν (C ∩ E_n ∩ V_q). Now
E_n ∩ V_q = _p, p' ≥ n W_pp', where
W_pp'⋃_0 ≤ j≤ p
0≤ k≤ p'
(E_j ∩ E_n ∩ U_jq_k). Therefore
ν (C ∩ E_n) = ν (C ∩ E_n ∩ V_q) = ν (U ∩ E_n ∩
V_q V) = ν ((U ∩ E_n ∩ V_q) ∪ V) - ν (V) =
_p, p' ∈ν ((U ∩ W_pp') V) - ν (V) =
_p, p' ∈ν (C ∩ W_pp'). Similarly,
μ (C ∩ E_n) = _p, p' ∈μ (C ∩ W_pp').
We can write W_pp' as the finite disjoint union of crescents C_jk with 0 ≤ j ≤ p and 0 ≤ k ≤ p', where C_jk ≔ (E_j ∩ E_n ∩ U_jq_k) ∖ ⋃_{0 ≤ j' ≤ j, 0 ≤ k' ≤ k, (j',k') ≠ (j,k)} (E_j' ∩ E_n ∩ U_j'q_k'). Then C_jk ⊆ U_jq_k, hence also C ∩ C_jk ⊆ U_jq_k, so ν (C ∩ C_jk ∩ E_j) ≥ q_k ·μ (C ∩ C_jk ∩ E_j). Since C_jk ⊆ E_j, this simplifies to ν (C ∩ C_jk) ≥ q_k ·μ (C ∩ C_jk). Then ν (C ∩ W_pp') = ∑_{0 ≤ j ≤ p, 0 ≤ k ≤ p'} ν (C ∩ C_jk) ≥ ∑_{0 ≤ j ≤ p, 0 ≤ k ≤ p'} q_k ·μ (C ∩ C_jk). Since q_k ≥ q for every k, this is larger than or equal to q · ∑_{0 ≤ j ≤ p, 0 ≤ k ≤ p'} μ (C ∩ C_jk) = q ·μ (C ∩ W_pp'). Taking suprema over p, p' ∈ ℕ, we obtain that ν (C ∩ E_n) ≥ q ·μ (C ∩ E_n), as desired.
We define g (x) as sup {t ∈ [0, ∞[ | ∃ q ∈ ℚ, q > t and x ∈ V_q}. Then g (x) > t if and only if x ∈ V_q for some q ∈ ℚ, q > t. Hence g^-1 (]t, ∞]) = ⋃_{q ∈ ℚ, q > t} V_q, which is in since is an ω-topology. Therefore g is in (X, ).
Let us fix n ∈ ℕ. For every non-negative dyadic number r, g^-1 (]r, ∞]) = ⋃_{q ∈ ℚ, q > r} V_q ⊆ V_r, so for every crescent C ⊆ g^-1 (]r, ∞]), ν (C ∩ E_n) ≥ r ·μ (C ∩ E_n). By Lemma <ref>, ν_|E_n ≥ g ·μ_|E_n. For every crescent C disjoint from g^-1 (]r, ∞]), C is disjoint from every V_q with q > r, so ν (C ∩ E_n) ≤ q ·μ (C ∩ E_n) for every rational q > r; therefore ν_|E_n (C) ≤ r ·μ_|E_n (C), and by Corollary <ref>, g ·μ_|E_n ≥ ν_|E_n.
It follows that ν_|E_n = g ·μ_|E_n for every n ∈ ℕ. Then, using the fact that X = ⋃_n ∈ ℕ E_n and the ω-continuity of ν, for every U ∈ , ν (U) = sup_n ∈ ℕ ν_|E_n (U) = sup_n ∈ ℕ (g ·μ_|E_n) (U) = sup_n ∈ ℕ ∫_0^∞ μ (U ∩ E_n ∩ g^-1 (]t, ∞])) dt (by Lemma <ref>), and this is equal to ∫_0^∞ μ (U ∩ g^-1 (]t, ∞])) dt by Fact <ref> and the ω-continuity of μ, namely to (g ·μ) (U). Therefore ν = g ·μ.
In the special case where is not only an ω-topology,
but is also closed under complements, namely when is a
σ-algebra, we have seen that ω-continuous valuations
and measures are the same thing. Then, for every n ∈ and
for every r ∈, ν_|E_n - r ·μ_|E_n is a signed
measure. The Hahn decomposition theorem
<cit.> states that every signed
measure has the Hahn decomposition property, and therefore property
(2b) is simply true in the case of measures.
Hence Theorem <ref> implies the classical
Radon-Nikodým theorem.
§ REFERENCES
Adamski:measures
W. Adamski.
τ-smooth Borel measures on topological spaces.
Mathematische Nachrichten, 78:97–107, 1977.
alvarez-manilla00
M. Alvarez-Manilla, A. Edalat, and N. Saheb-Djahromi.
An extension result for continuous valuations.
Journal of the London Mathematical Society, 61:629–640, 2000.
Billingsley:probmes
P. Billingsley.
Probability and Measure.
Wiley series in probability and mathematical statistics. John Wiley
and Sons, 3rd edition, 1995.
Bochner:rn
S. Bochner.
Additive set functions on groups.
Annals of Mathematics, 40:769–799, 1939.
Choquet:capacities
G. Choquet.
Theory of capacities.
Annales de l'Institut Fourier, 5:131–295, 1953–54.
deBrecht:qPolish
M. de Brecht.
Quasi-Polish spaces.
Annals of Pure and Applied Logic, 164(3):356–381, 2013.
dBGLJL:LCS
M. de Brecht, J. Goubault-Larrecq, X. Jia, and Z. Lyu.
Domain-complete and LCS-complete spaces.
Electronic Notes in Theoretical Computer Science, 345:3–35,
2019.
Proc. 8th International Symposium on Domain Theory (ISDT'19).
GHKLMS:contlatt
G. Gierz, K. H. Hofmann, K. Keimel, J. D. Lawson, M. Mislove, and D. S. Scott.
Continuous Lattices and Domains, volume 93 of Encyclopedia
of Mathematics and its Applications.
Cambridge University Press, 2003.
JGL-topology
J. Goubault-Larrecq.
Non-Hausdorff Topology and Domain Theory—Selected Topics in
Point-Set Topology, volume 22 of New Mathematical Monographs.
Cambridge University Press, 2013.
HornTarski48:ext
A. Horn and A. Tarski.
Measures in boolean algebras.
Transactions of the American Mathematical Society, 64, 1948.
Jones:proba
C. Jones.
Probabilistic Non-Determinism.
PhD thesis, University of Edinburgh, 1990.
Technical Report ECS-LFCS-90-105.
jones89
C. Jones and G. Plotkin.
A probabilistic powerdomain of evaluations.
In Proceedings of the 4th Annual Symposium on Logic in Computer
Science, pages 186–195. IEEE Computer Society Press, 1989.
KL:measureext
K. Keimel and J. Lawson.
Measure extension theorems for T_0-spaces.
Topology and its Applications, 149(1–3):57–83, 2005.
KR
D. A. Klain and G.-C. Rota.
Introduction to Geometric Probability.
Lezioni Lincee. Cambridge University Press, 1997.
Lawson:valuation
J. D. Lawson.
Valuations on continuous lattices.
In R.-E. Hoffmann, editor, Mathematische Arbeitspapiere,
volume 27, pages 204–225, Universität Bremen, 1982.
Nikodym:radon
O. M. Nikodým.
Sur une généralisation des intégrales de M. J.
Radon.
Fundamenta Mathematicae, 15, 1930.
Pettis:ext
B. J. Pettis.
On the extension of measures.
Annals of Mathematics, 54(1):186–197, 1951.
Pin:pervin
J.-É. Pin.
Dual space of a lattice as the completion of a pervin space.
In P. Höfner, D. Pous, and G. Struth, editors, Relational
and Algebraic Methods in Computer Science, pages 24–40, Cham, 2017.
Springer International Publishing.
Radon:nikodym
J. Radon.
Theorie und Anwendungen der absolut additiven Mengenfunktionen.
Sitzungsberichte der Kaiserlichen Akademie der Wissenschaften,
Mathematisch-Naturwissenschaftliche Classe, Wien, Kl. IIa, 122:1295–1438,
1913.
saheb-djahromi:meas
N. Saheb-Djahromi.
Cpo's of measures for nondeterminism.
Theoretical Computer Science, 12:19–37, 1980.
smiley44
M. F. Smiley.
An extension of metric distributive lattices with an application to
general analysis.
Transactions of the American Mathematical Society,
56(3):435–447, 1944.
Tix:bewertung
R. Tix.
Stetige Bewertungen auf topologischen Räumen.
Diplomarbeit, TH Darmstadt, June 1995.
entry_id: http://arxiv.org/abs/2307.02104v1
published: 20230705082321
title: Molecular outflow in the reionization-epoch quasar J2054-0005 revealed by OH 119 μm observations
authors: Dragan Salak, Takuya Hashimoto, Akio K. Inoue, Tom J. L. C. Bakx, Darko Donevski, Yuma Sugahara, Yoichi Tamura, Nario Kuno, Yusuke Miyamoto, Seiji Fujimoto, Suphakorn Suphapolthaworn
primary_category: astro-ph.GA
categories: astro-ph.GA
Corresponding author: Dragan Salak (dragan@oia.hokudai.ac.jp; ORCID 0000-0002-3848-1757)
Institute for the Advancement of Higher Education, Hokkaido University, Kita 17 Nishi 8, Kita-ku, Sapporo, Hokkaido 060-0817, Japan
Department of Cosmosciences, Graduate School of Science, Hokkaido University, Kita 10 Nishi 8, Kita-ku, Sapporo, Hokkaido 060-0810, Japan
Tomonaga Center for the History of the Universe (TCHoU), Faculty of Pure and Applied Science, University of Tsukuba, Ibaraki, 305-8571, Japan
Department of Physics, School of Advanced Science and Engineering, Faculty of Science and Engineering, Waseda University, 3-4-1, Okubo, Shinjuku, Tokyo 169-8555, Japan
Waseda Research Institute for Science and Engineering, Faculty of Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555, Japan
Department of Space, Earth, & Environment, Chalmers University of Technology, Chalmersplatsen 4 412 96 Gothenburg, Sweden
National Centre for Nuclear Research (NCBJ), Pasteura 7, 02-093 Warsaw, Poland
SISSA, ISAS, Via Bonomea 265, Trieste I-34136, Italy
IFPU, Institute for fundamental physics of the Universe, Via Beirut 2, I-34014 Trieste, Italy
Division of Particle and Astrophysical Science, Graduate School of Science, Nagoya University, Aichi 464-8602, Japan
Waseda Research Institute for Science and Engineering, Faculty of Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555, Japan
National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Tomonaga Center for the History of the Universe (TCHoU), Faculty of Pure and Applied Science, University of Tsukuba, Ibaraki, 305-8571, Japan
Department of Electrical, Electronic and Computer Engineering, Fukui University of Technology, 3-6-1 Gakuen, Fukui, Fukui 910-8505, Japan
Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA
Department of Cosmosciences, Graduate School of Science, Hokkaido University, Kita 10 Nishi 8, Kita-ku, Sapporo, Hokkaido 060-0810, Japan
Molecular outflows are expected to play a key role in galaxy evolution at high redshift. To study the impact of outflows on star formation at the epoch of reionization, we performed sensitive ALMA observations of OH 119 μm toward J2054-0005, a luminous quasar at z=6.04. The OH line is detected and exhibits a P-Cygni profile that can be fitted with a broad blue-shifted absorption component, providing unambiguous evidence of an outflow, and an emission component at near-systemic velocity. The mean and terminal outflow velocities are estimated to be v_out≈670 km s^-1 and 1500 km s^-1, respectively, making the molecular outflow in this quasar one of the fastest at the epoch of reionization. The OH line is marginally resolved for the first time in a quasar at z>6, revealing that the outflow extends over the central 2 kpc region. The mass outflow rate is comparable to the star formation rate (Ṁ_out/SFR∼1), indicating rapid (∼10^7 yr) quenching of star formation. The mass outflow rate in a sample of star-forming galaxies and quasars at 4<z<6.4 exhibits a near-linear correlation with the total infrared luminosity, although the scatter is large. Owing to the high outflow velocity, a large fraction (up to ∼50%) of the outflowing molecular gas may be able to escape from the host galaxy into the intergalactic medium.
§ INTRODUCTION
Quasar feedback is one of the fundamental processes that regulate galaxy evolution. Galaxies acquire gas via accretion from the intergalactic medium (IGM) and through merging, and lose a fraction of their gas via galactic outflows powered by feedback from starbursts and/or active galactic nuclei (AGNs). The baryon cycle is believed to regulate how much molecular gas is available for star formation and the growth of the central supermassive black holes (SMBHs) (e.g., ).
Recent observations have revealed the existence of massive (stellar mass M_⋆∼10^11 M_⊙) galaxies with diminished star formation activity already at z≳6, indicating that these objects experienced a vigorous starburst episode at an earlier epoch followed by quenching (e.g., ). When and how this quenching occurred is still debated, but quasar feedback is one of the possible mechanisms considered as a leading internal process to explain the rapid (<1 Gyr) inside-out quenching of star formation in massive galaxies <cit.>. Quasars at z∼6, powered by AGN and star formation, are thus believed to be an important evolutionary phase in massive galaxy evolution (e.g., ). To understand how massive galaxies evolved, it is important to reveal the physical conditions of the interstellar medium (ISM) in quasars at the epoch of reionization (EoR; 6≲ z≲20), and this has been at the focus of recent research (e.g., ). Many of these quasars exhibit high far-infrared luminosities (L_FIR≳10^13 L_⊙) that indicate dust heating by extreme star formation and AGN radiation (e.g., ). Extrapolating from the known properties of local galaxies <cit.>, it is expected that energy released in such nuclear activity is sufficient to generate galactic outflows. Since molecular gas is the primary fuel for star formation and SMBH growth, it is important to investigate the molecular phase, which has been largely untraced, at the EoR.
While AGN-driven outflows have been observed extensively at low/moderate redshifts, and the relations between the outflows and host galaxies investigated (e.g., ), our understanding of quasar feedback in the early Universe has been limited, because it is difficult to detect outflows using standard tracers such as CO. Most high-z outflow studies are based on searches for broad wings in the emission spectra of [C II] 158 μm and CO lines (e.g., ) and extended halos of cold gas <cit.>, but unambiguous detections of outflows (distinguished from inflows) using these lines are still rare at the EoR and beyond <cit.>, making it difficult to evaluate the quasar-driven feedback.
With a relative abundance of [OH]/[H_2]∼1×10^-7 to ∼5×10^-6 in nearby galaxies, hydroxyl (OH) is one of the important molecular species in the ISM <cit.>. Recent observations have shown that the OH ^2Π_3/2 J=5/2←3/2 absorption line at λ_rest=119 μm is a robust tracer of outflows in nearby ultraluminous infrared galaxies (ULIRGs) including AGNs <cit.>. The line is a doublet (rest wavelengths 119.23 and 119.44 μm) with near-equal intensities due to the Λ-doubling of rotational energy levels. Each of these lines is further split by hyperfine structure, although the hyperfine components usually remain unresolved in extragalactic observations.
The 119 μm doublet can unambiguously reveal the presence of cold molecular outflows and/or inflows through its P-Cygni profile. Since the energy required for the excitation of OH ^2Π_3/2 from the rotational ground state J=3/2 to the state J=5/2 is E/k≈120 K, where k is the Boltzmann constant, cold gas (≲100 K) is observed in absorption against a bright continuum source. On the other hand, the gas density required to thermalize the rotational transitions of OH is very high (n_H_2≳10^9 cm^-3), so the transition can be observed in J=5/2→3/2 emission in environments where molecular gas is highly excited (dense warm gas, either through shocks, or because it is exposed to strong far-infrared continuum radiation), such as those in AGNs <cit.>.
At high redshift, previous works have shown that OH outflows can readily be detected in strongly lensed, dusty star-forming galaxies up to z≈5 <cit.>; there are also reports of two OH detections in quasars at z≈6 <cit.>, and one tentative detection <cit.>. Interestingly, the results in <cit.> suggest that OH 119 μm may be a more reliable tracer of line-of-sight outflows at high z than [C II] 158 μm and CO lines. It is therefore of great interest, and the motivation of this work, to investigate whether the 119 μm line can provide a good probe of outflows at the EoR.
To search for molecular outflows in EoR quasars, we observed OH 119 μm toward J2054-0005 using the Atacama Large Millimeter/submillimeter Array (ALMA). The quasar was previously detected by ALMA in continuum as well as in [C II] 158 μm, [O III] 88 μm, and CO lines <cit.>. The measurements of these lines have determined its redshift to be z=6.0391±0.0002. The bolometric luminosity of the source is L_bol≈1.2×10^47 erg s^-1, corresponding to 3.2×10^13 L_⊙ <cit.>. The total IR luminosity of L_IR≈1.3×10^13 L_⊙ implies a star formation rate (SFR) of ≈1900 M_⊙ yr^-1 <cit.>, where a Kroupa initial mass function (IMF) is assumed, although this is an upper limit because of possible AGN contribution to dust heating <cit.>. Nonetheless, the central SFR density is comparable to the Eddington limit of ∼1000 M_⊙ yr^-1 kpc^-2 and exceeds the average SFR densities found in massive dusty star-forming galaxies at redshifts up to z≈5 <cit.>. However, despite the intense star formation and the presence of an AGN, neither the [C II] 158 μm, [O III] 88 μm, nor CO lines have revealed outflows in previous observations. Is there no outflow, or is it difficult to detect it with these tracers? Establishing a reliable tracer of molecular outflows is essential for future studies of galaxies at the highest redshifts.
The paper is organized as follows. In Section <ref>, we describe the ALMA observations and data reduction. The resulting continuum image and OH 119 spectrum is presented in Section <ref>. This is followed by an analysis of the OH gas outflow in Section <ref>, discussion on the outflow's driving mechanism and imprint on galaxy evolution in Section <ref>, and a summary in Section <ref>.
We adopt a ΛCDM cosmology with parameters H_0=70 km s^-1 Mpc^-1, Ω_m=0.3, Ω_Λ=0.7, and flat geometry.
§ OBSERVATIONS AND DATA REDUCTION
The observations were conducted between August 4 and 12 in 2022 during ALMA Cycle 8. The antennas of the 12 m array observed toward a single field centered at (α,δ)_ICRS=(20^h54^m06.503^s, −00°05′14.43″), which corresponds to the central position of the ALMA 87 μm continuum <cit.>. In most observing runs, 44 antennas were used, but the number ranged from 41 to 46. The array was in configuration C-5 with baselines from 15 m to 1301 m.
The Band 7 receivers were tuned to cover the OH doublet at the observing frequency corresponding to the adopted redshift z=6.0391. Two spectral windows (upper sideband; USB) were centered at the observing frequencies 356.315 GHz and 358.090 GHz for the line observations. Since the bandwidth of each of them is 1.875 GHz, this setup places the two spectral windows adjacent to each other with an overlap of 100 MHz. The 119.23 μm line of the doublet was set to lie in this overlap region, whereas the 119.44 μm line is separated in velocity by ≈521 km s^-1. With a frequency resolution of 7.813 MHz, the achieved velocity resolution (average over the bandwidth) is 6.56 km s^-1. To improve the signal-to-noise ratio for the analysis, we smoothed the data cubes to a resolution of 35 km s^-1. There are 44 velocity channels in each window, with one perfectly overlapping channel, and the total effective velocity coverage of the two adjacent windows is 3045 km s^-1 with the 119.23 μm line at the center.
The other two spectral windows (lower sideband; LSB) were dedicated to continuum observations. The central observing frequencies were 344.2 GHz and 346.1 GHz, and the bandwidth of each of them was 2 GHz.
The scheduling block was executed 9 times. J2253+1608 (8 data sets) and J1924-2914 (1 data set) were observed for the flux and bandpass calibration, whereas J2101+0341 (all data) was observed for the phase calibration. The total on-source time was 7.3 hours, whereas the total time including calibrator observations and other overheads was 12.7 hours. The uncertainties in the paper are only statistical errors; the absolute flux accuracy in Band 7 is reported to be 10% <cit.>.
The data were reduced using the Common Astronomy Software Applications (CASA) package <cit.>. Basic calibration was performed with a CASA pipeline, resulting in 9 calibrated measurement sets. The calibrated data were then combined and imaged using the CASA task tclean.
The continuum image was reconstructed using the line-free LSB spectral windows in the multi-frequency synthesis mode with standard gridding. The weighting was set to Briggs with the robust parameter equal to 0.5 (a compromise between resolution and sensitivity). To conduct unbiased mask-based image reconstruction, we employed an automated masking tool <cit.>. We also tried cleaning in interactive mode, but there was no obvious difference in the result, so we adopted the automatically-created image. The threshold for iterations was set to 2 σ, where σ was calculated on a first-generation clean image masked for the central region where the source is located. The rms sensitivity in the final continuum image is σ=13 μJy beam^-1. The synthesized beam size (full-width at half maximum; FWHM) is (b_maj,b_min)=(0.205″, 0.176″), corresponding to ≈1 kpc. For the adopted cosmological parameters, 1″ is equivalent to 5.689 kpc and the luminosity distance to the source is D_L=58.1465 Gpc. These are the highest resolution observations of OH toward a quasar at z>6.
The OH image was reconstructed from the USB spectral windows. The visibilities of the two spectral windows were processed separately, and the continuum was not subtracted. The deconvolver and weighting setups were the same as for the continuum, and the same automated masking tool was used. The mean sensitivity over the two spectral windows is σ=0.13 mJy beam^-1 in a channel of 35 km s^-1, and the synthesized beam size is (b_maj,b_min)=(0.204″, 0.174″). The two spectral windows were then merged.
The velocity is expressed in radio definition with respect to the rest frame of the source (z=6.0391). The frequency that corresponds to zero velocity is ν=ν_rest(1+z)^-1=357.193±0.010 GHz, where ν_rest=2514.31640360 GHz is the rest frequency of the OH ^2Π_3/2 J=3/2-5/2, F=3^--2^+ transition,[All spectral line frequencies are taken from the database Splatalogue (https://splatalogue.online/).] where F is the quantum number for the total angular momentum of the molecule (including nuclear spin), and “+" and “-" denote the Λ-doubling of energy levels.
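(As a consistency check, 2514.31640360 GHz/(1+6.0391) = 2514.31640360 GHz/7.0391 ≈ 357.193 GHz, in agreement with the quoted value.)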
The final images were corrected for the primary beam attenuation. The basic parameters of the resulting images are summarized in Table <ref>.
§ RESULTS
We begin this section by a presentation of the continuum image. It is followed by a description of the OH spectrum and derivation of its basic properties.
§.§ 123-μm continuum emission
The continuum emission (λ_rest=123 μm), extracted from the LSB, is detected toward the quasar with a high signal-to-noise ratio of S/N=260 (Figure <ref>). The emission is spatially resolved, although strongly concentrated in the center. The flux density in the region within a radius of 0.5″ of the brightest pixel is S(r<0.5″)=5.723±0.009 mJy.
We estimated the peak coordinates by two-dimensional gaussian fitting of a region of radius 0.5″ centered at the brightest pixel. Using CASA, we obtained (α,δ)_ICRS=(20^h54^m06.501^s, −00°05′14.44″). Since the S/N is very high, the positional uncertainty is determined by the absolute astrometric accuracy of ALMA observations in Band 7, which is 5% of the synthesized beam size (≈0.010″). The peak intensity obtained from the gaussian fitting is 3.308±0.014 mJy beam^-1, and the size (FWHM) of the central region where the emission is concentrated, deconvolved from the beam, is estimated to be (d_maj,d_min)=(0.1567″±0.0017″, 0.1321″±0.0022″) at a position angle of 171°, corresponding to 890±10 pc for the major axis.
We also found an additional continuum source positioned 2.4″ west of the quasar and detected at S/N=8.9 (Figure <ref>). Since OH is not detected there, it is not clear at this point whether the source is a physical companion or lies at a different redshift and happens to fall within the solid angle subtended by the primary beam. The projected separation from J2054-0005 corresponds to ≈14 kpc if they are at the same redshift. The peak intensity is measured to be 116±13 μJy beam^-1 at (α,δ)_ICRS=(20^h54^m06.344^s, -00°05′14.83″), and the flux density is S(r<0.5″)=377±9 μJy.
§.§ OH gas
The OH 119 μm line is robustly detected toward the quasar. Figure <ref> shows an integrated OH spectrum (total flux density) extracted from the region where the 123-μm (LSB) continuum is detected at >3σ. We selected this broad region because OH may be distributed throughout the galactic disk traced by dust. The OH profile is dominated by a broad absorption feature at negative velocities and emission at near-systemic velocities, exhibiting a typical P-Cygni profile, such as the one observed toward the local ULIRG Mrk 231 <cit.>. The line is very broad: what appears to be either OH absorption or emission extends continuously in velocity from -1500 km s^-1 to +1000 km s^-1.
We can estimate the optical depth τ as long as the line is not completely opaque. If emission at velocities of the absorption line is negligible, the observed flux density is S(v)=S_conte^-τ, where S_cont is the continuum flux density, hence
τ(v)=-ln[S(v)/S_cont]
Although there are reasons to assume that OH is not optically thin, e.g., if the gas distribution is clumpy and the absorbing gas does not cover the continuum entirely, the apparent optical depth at the line center, where absorption is maximum, is τ≈0.36.
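Applied channel by channel to the observed spectrum, Equation (<ref>) yields the apparent optical depth directly. A minimal sketch (the flux array below is a placeholder, chosen so that the deepest channel reproduces τ≈0.36):

```python
# Apparent optical depth per velocity channel (a sketch; S_v is a placeholder,
# not the measured spectrum).
import numpy as np

S_cont = 6.175                           # continuum flux density [mJy]
S_v = np.array([6.0, 5.2, 4.31, 5.5])    # observed flux densities [mJy]
tau = -np.log(S_v / S_cont)              # Equation above
print(f"tau_max ~ {tau.max():.2f}")      # ~0.36 at the deepest channel
```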
In the vicinity of the OH doublet there are CH^+ (J=3-2), at the sky frequency of 355.364 GHz, and ^18OH (J=5/2-3/2), at 355.021 GHz, which may be responsible for the decrease in flux that appears at offset velocities v≳1000 km s^-1. A similar feature, attributed to the ^18OH line, is found in Mrk 231, indicating an enhanced [^18OH]/[OH] abundance due to processing by star formation <cit.>. However, these lines are sufficiently separated from OH and unlikely to significantly affect the analysis of the line profile described below.
§.§ OH line fitting
To investigate the kinematics of the OH gas, we performed a least-squares fitting of the line profile in Python. Since the line is a doublet, and there may be multiple components (systemic, outflow, or inflow), the fitting was conducted using two double-gaussian functions. Although the spectrum is likely to be more complicated than this simple structure, we aimed at limiting the number of free parameters while extracting the key quantities related to the analysis of outflows. The fitting constraints were the following: the separation between the doublet lines of a double gaussian was fixed to 521 km s^-1, and their peak values and FWHMs were set to be equal (e.g., ). The continuum intensity within the velocity range covered by the two adjacent USB spectral windows was assumed to be constant. On the other hand, the line intensities, continuum intensity, and the velocity separation of the double gaussians of different components (e.g., systemic and outflow) were left as free parameters.
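A minimal sketch of such a fit (not the authors' code; the mock spectrum, noise level, and initial guesses are placeholders) could be implemented with scipy as follows:

```python
# Double double-gaussian fit of the OH doublet (a sketch with mock data).
import numpy as np
from scipy.optimize import curve_fit

SEP = 521.0  # fixed velocity separation of the OH doublet lines [km/s]

def gauss(v, amp, cen, sig):
    return amp * np.exp(-0.5 * ((v - cen) / sig) ** 2)

def double_gauss(v, amp, cen, sig):
    # one component: two gaussians with equal peaks/widths at the fixed splitting
    return gauss(v, amp, cen, sig) + gauss(v, amp, cen + SEP, sig)

def model(v, a_abs, v_abs, s_abs, a_emi, v_emi, s_emi, cont):
    # outflow (absorption) + systemic (emission) + constant continuum
    return (double_gauss(v, a_abs, v_abs, s_abs)
            + double_gauss(v, a_emi, v_emi, s_emi) + cont)

v = np.linspace(-2500.0, 1500.0, 120)     # placeholder velocity grid [km/s]
flux = model(v, -1.0, -670.0, 450.0, 1.3, 60.0, 130.0, 6.2)
flux += np.random.default_rng(0).normal(0.0, 0.13, v.size)   # mock noise [mJy]

p0 = [-0.5, -500.0, 300.0, 1.0, 0.0, 150.0, 6.0]
popt, pcov = curve_fit(model, v, flux, p0=p0)
perr = np.sqrt(np.diag(pcov))
print(dict(zip(("a_abs", "v_abs", "s_abs", "a_emi", "v_emi", "s_emi", "cont"), popt)))
```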
The results of the fitting are shown in Figure <ref> and listed in Table <ref>. We find OH emission, traced by a double gaussian with an FWHM line width of 306±55 km s^-1 near the systemic velocity, and a broad absorption feature with FWHM=1052±234 km s^-1 and a peak absorption velocity of v_cen=-669±87 km s^-1 relative to the systemic velocity. The FWHM of the emission line obtained from the fitting is comparable to those of the emission lines of [C II] 158 μm (243±10 km s^-1; ), [O III] 88 μm (282±17 km s^-1; ), and CO (J=6→5) (360±110 km s^-1; ). The terminal velocity (the maximum extent of the blue-shifted wing of the absorption) is at least v_max≈-1500 km s^-1 but may be beyond the spectral coverage. Another indicator of the terminal velocity is v_98, the velocity above which 98% of the absorption takes place; this quantity is found to be v_98=-1574±35 km s^-1. The velocity above which 84% of the absorption takes place is v_84=-1104±35 km s^-1. The fitted emission components are red-shifted relative to the systemic velocity by 65±15 km s^-1. This is not unusual, as such positive shifts in emission components have been observed in the majority of nearby galaxies that exhibit P-Cygni profiles and likely arise from outflows on the opposite side of the continuum (receding relative to the observer) <cit.>. The uncertainties above include only those from the fitting; the redshift uncertainty expressed in velocity is ≈8 km s^-1.
The continuum flux density in the USB spectral windows was found by fitting to be S_cont=6.175±0.092 mJy. This is ≈8% higher than the LSB continuum (see Section <ref>), but not unexpected, because the continuum emission at this frequency is in the Rayleigh-Jeans domain. Also, the regions where the fluxes were extracted (LSB continuum within r<0.5″; USB continuum where the LSB continuum is detected at >3σ) are not identical, but are similar in size. Given that there are almost no line-free channels in the USB spectrum and that the fitting was done under simple assumptions, we find this to be a reasonable estimate.
Figure <ref> shows a continuum-subtracted OH spectrum, including the best fit and fitting residuals.
Table: OH Line Fit Parameters

Quantity                              Absorption (outflow)   Emission (systemic)
Peak value S_max [mJy]                -1.039±0.078           1.35±0.27
Integrated flux 𝒮_OH [Jy km s^-1]     -1.16±0.27             0.44±0.12
Center velocity v_cen [km s^-1]       -669±87                65±15
v_84 [km s^-1]                        -1104±35               ...
v_98 [km s^-1]                        -1574±35               ...
Standard deviation σ_v [km s^-1]      446±99                 129±23
FWHM line width [km s^-1]             1052±234               306±55
Equivalent width W [km s^-1]          200±44                 -66±19

Note. — All quantities are calculated from the gaussian fits. Here, FWHM=√(8 ln 2) σ_v, 𝒮_OH=√(2π) σ_v S_max, and W is calculated according to Equation (<ref>). The quantities 𝒮_OH and W are given for a single line of the doublet (the total equivalent width is twice the value). The continuum flux density is S_cont=6.175±0.092 mJy.
§ MOLECULAR GAS OUTFLOW
The absorption line traces the OH gas between the continuum source and the observer. Since it is observed in blueshift relative to the host galaxy, the line reveals an unambiguous signature of an outflow, as has been observed toward AGNs and star-forming galaxies at lower z (e.g., ). For example, the nearby ULIRG Mrk 231 exhibits a similar terminal velocity of ≈-1500 km s^-1 and a P-Cygni line profile, which are attributed to an AGN-driven outflow <cit.>. Since we do not detect red-shifted absorption, there is no significant molecular gas inflow in J2054-0005 along the line of sight. In the analysis below, we consider the absorption to be tracing an outflow and derive its basic properties, such as the mass, mass outflow rate, and kinetic energy.
§.§ Outflow mass
Assuming that the outflow has the shape of a thin spherical shell of radius r_out, the molecular gas mass in the outflow (including helium and heavier elements) can be expressed as <cit.>
M_out=μ m_HN_HΩ r_out^2,
where μ=1.36 is the mean particle mass per hydrogen nucleus, m_H is the hydrogen atom mass, N_H=2N_H_2 is the column density of hydrogen nuclei, and Ω is the solid angle (“opening angle”) subtended by the shell <cit.>. Thus, the term Ω r_out^2 is the area of the outflowing shell, and the opening angle is given by Ω=4π f, where f is the covering factor, i.e., the fraction of the sphere covered by the outflow as seen from its origin.
The column density N_H_2 cannot be measured directly, but we can find the column density of OH molecules (N_OH) and then apply a [OH]/[H_2] abundance ratio to get N_H_2. N_OH can be calculated from the measured absorption line under the approximation of local thermodynamic equilibrium (LTE). The OH column density can be expressed as (e.g., )
N_OH=8π/λ_ul^3A_ulQ/g_ue^E_l/kT_ex/1-e^-Δ E/kT_ex∫τ(v) dv,
where λ_ul=119.23 μm is the rest-frame wavelength, A_ul=0.1388 s^-1 is the Einstein coefficient for the u→l transition J=5/2→3/2 <cit.>, g_u=6 is the statistical weight of the J=5/2 level, ΔE=E_u-E_l is the energy difference of the two levels (E_l=0), T_ex is the excitation temperature, and Q is the partition function. If τ≪1, the integral is approximately equal to the equivalent width, defined as W=∫(1-e^-τ)dv. Assuming T_ex=50 K, which is equal to the dust temperature (T_d=50±2 K; ), we get Q≃19 <cit.>. The term Q(1-e^-ΔE/kT_ex)^-1 increases by a factor of ≈3 from 50 to 150 K. Note that the equivalent width in Equation (<ref>) is calculated for a single line of the doublet (W in Table <ref>) because Q accounts for the Λ-doubling.
The OH abundance has been measured for the Milky Way (e.g., ) and a number of nearby galaxies where multiple OH lines were detected <cit.>, and it has been analyzed in theoretical work employing simulations <cit.>. A relatively high value of [OH]/[H_2]=5×10^-6 is reported for Sgr B2 <cit.>. Although this value is often used to derive H_2 mass from OH, it may be lower at the sub-solar metallicities that may apply to high-z sources. This is, however, not necessarily the case for evolved systems, as near-solar metallicities have been found in some quasars at z>6 <cit.>. On the other hand, a low abundance of [OH]/[H_2]≈1×10^-7 has been reported recently even for various regions in the Galaxy <cit.>. We derive the outflow properties using this low value (Table <ref>), but note that a higher abundance would yield a lower outflow mass.
The radius of the outflow (r_out) is estimated from the continuum size. The FWHM size of the compact nucleus was found to be only 890±10 pc in diameter (Section <ref>). We adopt a radius of r_out=0.5 kpc, which is reasonable as it is shown below (Section <ref>) that OH is concentrated in the central 1 kpc.
For the covering factor, we use f=0.3 in the analysis below. This value is also adopted in <cit.> for a sample of star-forming galaxies at redshift z=4-5, although it can be as large as f≈0.8 <cit.>. Even if we adopt f=1, the main results of the scaling relations discussed in Section <ref> do not change, albeit the mass outflow rates would be larger by a factor of ≈3.
The integral in Equation (<ref>) is calculated by inserting τ from Equation (<ref>),
W=-∫ln[S(v)/S_cont]dv,
where S(v) is the profile obtained from the gaussian fitting and S_cont is a constant (Table <ref>). The equivalent width of a single line of the doublet is W=200 km s^-1, yielding M_out≈1.5×10^9 M_⊙. The obtained W is larger than that in almost all nearby ULIRGs <cit.> and dusty star-forming galaxies at z=4-5 <cit.>, and is the largest reported to date for a quasar at z>6. The calculated outflow gas mass and the other dynamical quantities (derived below) are listed in Table <ref>. The outflow mass is ∼2-5% of the total molecular gas mass <cit.>.
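As a numerical cross-check (a sketch of ours, using only the fiducial values quoted in the text), the chain from W through Equation (<ref>) to Equation (<ref>) reproduces the tabulated N_OH and M_out:

```python
# LTE column density and shell-model outflow mass (a sketch; inputs are the
# fiducial values quoted in the text).
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16     # cgs constants
m_H, M_sun, kpc = 1.673e-24, 1.989e33, 3.086e21

lam, A_ul = 119.23e-4, 0.1388                  # wavelength [cm], Einstein A [1/s]
g_u, Q, T_ex = 6.0, 19.0, 50.0
dE_over_k = h * c / lam / k_B                  # ~121 K; E_l = 0, so exp(E_l/kT) = 1
W = 200e5                                      # equivalent width, one doublet line [cm/s]

N_OH = 8 * np.pi / (lam**3 * A_ul) * (Q / g_u) / (1 - np.exp(-dE_over_k / T_ex)) * W
N_H = 2 * N_OH / 1e-7                          # [OH]/[H2] = 1e-7; N_H = 2 N_H2

mu, f, r_out = 1.36, 0.3, 0.5 * kpc
M_out = mu * m_H * N_H * (4 * np.pi * f) * r_out**2 / M_sun
print(f"N_OH ~ {N_OH:.1e} cm^-2, M_out ~ {M_out:.1e} Msun")   # ~7.5e15, ~1.5e9
```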
Table: Molecular Outflow Properties

Quantity                                   LTE              Empirical relation
Column density N_OH [cm^-2]                7.5×10^15        ...
OH abundance [OH]/[H_2]                    1×10^-7          ...
Column density N_H_2 [cm^-2]               7.5×10^22        ...
Mass M_out [M_⊙]                           1.5×10^9         1×10^9
Mass outflow rate Ṁ_out [M_⊙ yr^-1]        2100             1500
Mass loading factor η                      1.1              0.8
Depletion time t_dep [yr]                  (1.4-2.9)×10^7   (2-4)×10^7
Kinetic energy E_out [erg]                 6.8×10^57        5×10^57
Power Ė_out [L_⊙]                          7.8×10^10        6×10^10

Note. — The LTE values are calculated using the covering factor f=0.3, outflow radius r_out=0.5 kpc, excitation temperature T_ex=50 K, and outflow velocity v_out=669 km s^-1. The empirical relation for the mass outflow rate is given in Equation (<ref>).
§.§ Mass outflow rate
§.§.§ Optically-thin outflow model
Assuming that the outflow is expanding at velocity v_out as a spherical shell, the mass outflow rate averaged over the outflow lifetime can be expressed as
Ṁ_out=M_outv_out/r_out.
This equation gives a conservative estimate <cit.>. For the outflow velocity, we adopt v_out=669 km s^-1, based on the center velocity (v_cen) of the absorption feature (see Table <ref> and Section <ref>). Although this is the mean value along the line of sight, it is equal to the outflow velocity in the case of a spherically-symmetric outflow. Taking W=200 km s^-1 (the equivalent width of the absorption line), [OH]/[H_2]=5×10^-6 (the abundance in Sgr B2), r_out=0.5 kpc, and f=0.3, we obtain a strict lower limit of Ṁ_out>42 M_⊙ yr^-1. Note that if we instead adopt the low abundance of [OH]/[H_2]=1×10^-7 and f=1, the upper limit of the mass outflow rate becomes Ṁ_out≈7000 M_⊙ yr^-1. Clearly, the uncertainty of the mass outflow rate is dominated by the poorly constrained OH abundance. We adopt the low abundance of [OH]/[H_2]=1×10^-7 and f=0.3 in the analysis below.
The dynamical age of the outflow is defined as the time needed for outflowing gas to travel a distance r_out at a constant velocity v_out. Using the derived quantities above, the outflow age is t_out=r_out/v_out∼7×10^5 yr. Even if we adopt a four times larger radius, as the total size of the region where the continuum was detected, the timescale is much shorter than the depletion time due to star formation (∼2-3×10^7 yr), implying that the quasar feedback is young or that the outflow size is larger than the adopted value so that the dynamical age is underestimated.
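The shell-model numbers above follow from a few lines of arithmetic; a sketch with the adopted values:

```python
# Shell-model mass outflow rate and dynamical age (a sketch).
M_sun, kpc, yr = 1.989e33, 3.086e21, 3.156e7   # cgs units
M_out = 1.5e9 * M_sun                          # outflow mass [g]
r_out, v_out = 0.5 * kpc, 669e5                # radius [cm], velocity [cm/s]

Mdot = M_out * v_out / r_out / M_sun * yr      # ~2100 Msun/yr
t_out = r_out / v_out / yr                     # ~7e5 yr
print(f"Mdot ~ {Mdot:.0f} Msun/yr, t_out ~ {t_out:.1e} yr")
```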
§.§.§ Empirical relation
Alternatively, we can circumvent the above assumptions and use an empirical formula for the mass outflow rate discussed in the literature, hoping that it is applicable to the EoR quasar. The “recipe” formula from <cit.> modified by <cit.> takes the form
(Ṁ^emp_out/M_⊙ yr^-1)=1.4(W_v<-200/km s^-1)(L_IR/10^12 L_⊙)^1/2+180,
where W_v<-200 is the OH 119 μm equivalent width at v<-200 km s^-1. This equivalent width, derived from the observed spectrum, is W_v<-200≈268 km s^-1, yielding a mass outflow rate of Ṁ^emp_out≈1500 M_⊙ yr^-1. This value is significantly larger than those for the star-forming galaxies at redshift z=4-5 reported in <cit.>, which lie between 220 and 1290 M_⊙ yr^-1.
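Plugging the quoted numbers into Equation (<ref>) gives the same value (a one-line sketch):

```python
# Empirical mass outflow rate (a sketch with the quoted inputs).
W_blue, L_IR = 268.0, 1.3e13                   # [km/s], [Lsun]
Mdot_emp = 1.4 * W_blue * (L_IR / 1e12) ** 0.5 + 180.0
print(f"Mdot_emp ~ {Mdot_emp:.0f} Msun/yr")    # ~1500 Msun/yr
```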
Using the mass outflow rate from Equation (<ref>), we calculate the outflow mass M_out^emp=(r_out/v_out)Ṁ_out^emp and other dynamical quantities. All outflow properties derived from this empirical relation are listed in Table <ref>.
A caveat of this approach is that Equation (<ref>) only incorporates the equivalent width at v<-200 km s^-1 (to exclude the systemic component, assuming that it does not extend beyond this velocity), regardless of the line width of the outflow-tracing absorption line and the mean outflow velocity. Nonetheless, the mass outflow rate obtained using Equation (<ref>) is comparable (within a factor of ∼2/3) to that obtained under LTE with the low OH abundance (Table <ref>).
§.§ Resolved OH absorption and emission
The high angular resolution (≈1 kpc) and sensitivity allow us to probe the spatial distribution of the OH gas velocity for the first time in a quasar at z>6. We extracted OH spectra from adjacent rectangular regions of area 0.1″×0.1″, corresponding to approximately one half of the synthesized beam, and performed double-gaussian fitting in each region using the procedure described in Section <ref>. A moment 1 image of OH that shows the positions of the regions as pixels is shown in Figure <ref>. The spectra in Figure <ref> were extracted from the 12 pixels in Figure <ref>, labelled (179,178) in the bottom left corner, (181,181) in the top right corner, etc., and shown as a 3×4 profile map. All spectra that could yield reasonable fits are plotted in Figure <ref> together with the best fits. The OH doublet absorption was successfully fitted in 11 rectangular regions (Figure <ref> and Table <ref>). The profile could not be well fitted in the surrounding regions, where the continuum intensity is weaker and OH is not significantly detected.
The absorption line is marginally spatially resolved, which can be inferred from a north-south shift in the peak absorption and emission velocities obtained by fitting. This implies that the distribution of OH gas is not confined to the AGN, which is too compact to be resolved, but extends over a broader (2 kpc) central region. The FWHM line widths (Table <ref>) are relatively comparable throughout the region (≈800-1300 km s^-1). The peak velocity is not minimum (most negative) at the continuum center, as may be expected from a spherically symmetric outflow emerging from the center, but retains comparable values throughout the map. Note that we fit all regions with only one double-gaussian for the outflow if emission is also present. In the center of J2054-0005, where OH column density is largest, it is possible that there is also absorption at near-systemic velocity. For the sake of simplicity we did not attempt to fit this additional component. Although the fitting uncertainties are large, the fitted absorption peak velocities appear to be more negative on the south side compared to the north side, though region (180,181) seems to deviate from this trend. The mean absorption velocities in each row in Figure <ref> from north to south are (-648±80, -565±61, -667±47, -806±70) km s^-1, excluding (181,180). These results suggest that the outflow is not uniform and might have the shape of a cone whose axis is inclined with respect to the line of sight. Observations at higher resolution are needed to get a clearer picture of the outflow geometry.
OH is detected in emission at >3σ in some regions and exhibits relatively comparable FWHM line widths throughout the region (≈200-330 km s^-1), consistent with the reported [C II] 158 μm and [O III] 88 μm line widths <cit.>. This suggests that highly excited (warm or dense, shocked) molecular gas is distributed in the central 2 kpc region, either in the host galaxy or in the outflowing gas. The positive peak velocities of the emission lines are generally lower in the north compared to the south. The exception is at (181,178), though the line is only marginally detected there. The mean emission velocities in each row in Figure <ref> from north to south are (15±11, 67±12, 101±11, 72±13) km s^-1.
In order to investigate the origin of the apparent velocity shift in the OH emission line, we compare the OH data with [C II] 158 μm data. The moment 1 image in <cit.> shows that the [C II] 158 μm line exhibits a velocity gradient in the northwest-southeast direction, which is generally consistent with the north-south velocity increase in the fitted OH emission line spectra, albeit with an offset: the fitted OH emission is systematically redder than [C II] 158 μm. Thus, it is possible that the OH emission follows the velocity field of the bulk gas in the host galaxy traced by [C II] 158 μm.
Figure <ref> shows a comparison of the OH 119 μm and [C II] 158 μm spectra (the [C II] data are from #2019.1.00672; S. Fujimoto, in prep.), extracted from the central pixel (maximum 123-μm continuum intensity; pixel size 0.034″). These are the highest-resolution [C II] data of this source and therefore best suited for comparison. In addition to the main emission profile, the [C II] line profile exhibits a secondary component on the red-shifted side (up to +500 km s^-1) and a blue-shifted component at velocities comparable to the OH outflow velocity (-600 km s^-1), although the latter appears weak for this choice of aperture. This is consistent with recent findings that OH 119 μm is a more robust tracer of outflows than [C II] 158 μm <cit.>.
More details of the [C2] observations and results will be presented in Fujimoto et al. (in prep.).
Table: Fitting Results for Resolved Emission and Absorption Components

Region     v_cen^emi [km s^-1]   v_cen^abs [km s^-1]   FWHM^emi [km s^-1]   FWHM^abs [km s^-1]
(179,178)  98±19                 -827±147              202±56               1000±557
(179,179)  101±15                -665±108              295±62               1368±338
(179,180)  61±16                 -579±83               278±67               1057±212
(179,181)  41±19                 -512±148              282±80               849±350
(180,178)  75±22                 -814±75               230±64               842±269
(180,179)  102±15                -588±85               327±60               1224±223
(180,180)  74±19                 -551±89               330±82               1095±206
(180,181)  -11±12                -785±61               96±31                1084±243
(181,178)  44±28                 -778±132              240±93               963±436
(181,179)  ...                   -747±36               ...                  933±155
(181,180)  ...                   -559±50, -1296±75     ...                  621±125, 468±134

Note. — Regions are designated by image pixel numbers (x,y). Each pixel has an area of 0.1″×0.1″, which is approximately one half of the synthesized beam. The spectrum at (181,180) could not be fitted with an emission component.
§ DISCUSSION
In this section, we discuss the possible mechanisms that drive the outflow, the fate of the outflowing gas, and its impact on the host galaxy.
§.§ Driving mechanism
Is the outflow driven by star formation or does the AGN feedback from the accretion onto the supermassive black hole (SMBH) play a role? One way to address this problem is to investigate the energy and momentum of the outflow and compare them to the expected input from star formation.
Using the outflow mass and velocity derived in Section <ref>, we calculate the bulk kinetic energy of the outflowing gas. Adopting the molecular gas mass derived under LTE (Table <ref>), the kinetic energy is
E_out=1/2M_outv_out^2≈6.8×10^57 erg,
and the power required to drive the molecular outflow (kinetic power) is
Ė_out=1/2Ṁ_outv_out^2≈3.0×10^44 erg s^-1.
This is equivalent to ≈7.8×10^10 L_⊙, which is ∼0.6% of the total infrared luminosity of the quasar. If other ISM phases (atomic and ionized gas) are present in the outflow, the energy and power are larger.
The total momentum of the molecular outflow is
p_out=M_outv_out≈1.0×10^12 M_⊙ km s^-1.
By comparison, the momentum injection by a typical core-collapse supernova (SN; mass m_0≈10 M_⊙, velocity v_0≈3000 km s^-1) is of the order of p_0≈3×10^4 M_⊙ km s^-1, and the total momentum of an outflow driven by SN explosions is p_SN≈ p_0R_SNt_SN, where R_SN is the SN rate and t_SN is the time interval measured from the onset of SN explosions. Thus, R_SNt_SN is the total number of SN explosions over the starburst episode. Adopting a relation between the SN rate and the star formation rate, R_SN≈α_SN(SFR/M_⊙ yr^-1) [yr^-1], where α_SN≈0.02 for a Salpeter IMF and continuous star formation <cit.>, and SFR=1900/0.67 M_⊙ yr^-1, we obtain R_SN≈57 yr^-1 as an upper limit. The factor 0.67 converts from a Kroupa to a Salpeter IMF. If the time interval of the SN feedback is equal to the dynamical age of the outflow (t_SN=t_dyn), the total SN momentum becomes p_SN≈1.2×10^12 M_⊙ km s^-1, produced by R_SNt_SN≈4×10^7 SN explosions. Theoretical works suggest that the final momentum input per SN may be even higher (e.g., ). Moreover, the total momentum could be larger by a factor of 2 if stellar winds (radiation pressure) from massive stars play a significant role (e.g., ). Taking into account the possibility that other gas phases also participate in the outflow, so that the total momentum (molecular, neutral atomic, and ionized gas) is somewhat larger, it seems that star formation activity alone may be approximately sufficient to explain the observed outflow. Although we obtain this result based on a low relative OH abundance, a similar conclusion is reached if the empirical relation for the mass outflow rate is used.
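The energy and momentum bookkeeping above is straightforward to reproduce; a sketch with the fiducial values:

```python
# Outflow energetics and the supernova momentum budget (a sketch).
M_sun, yr, kms = 1.989e33, 3.156e7, 1e5        # cgs units
M_out, v_out = 1.5e9 * M_sun, 669 * kms        # outflow mass and velocity
Mdot = 2100 * M_sun / yr                       # mass outflow rate [g/s]

E_out = 0.5 * M_out * v_out**2                 # ~6.8e57 erg
P_out = 0.5 * Mdot * v_out**2                  # ~3.0e44 erg/s (~7.8e10 Lsun)
p_out = M_out * v_out / (M_sun * kms)          # ~1.0e12 Msun km/s

p0 = 10 * 3000.0                               # momentum per SN [Msun km/s]
R_SN = 0.02 * 1900 / 0.67                      # SN rate [1/yr], Salpeter IMF
p_SN = p0 * R_SN * 7e5                         # t_SN = t_dyn ~ 7e5 yr
print(f"E={E_out:.1e} erg, p_out={p_out:.1e}, p_SN={p_SN:.1e} Msun km/s")
```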
On the other hand, the mean outflow velocity of 670 km s^-1 and the terminal velocity of 1500 km s^-1 exceed the outflow velocities typically measured in star-forming galaxies, which are found to be 100-500 km s^-1 and <1000 km s^-1, respectively (e.g., ), although higher outflow velocities are found in more extreme systems <cit.>. By contrast, terminal velocities of outflows in AGN-dominated systems are often found to be ≳1000 km s^-1 and as large as 1500 km s^-1 (e.g., ). Theoretical studies also reproduce velocities of ≳1000 km s^-1 and mass outflow rates of ∼10^3 M_⊙ yr^-1 in AGN-driven outflows (e.g., ). These results indicate that AGN feedback (radiation pressure) may play a role in boosting the velocity in J2054-0005.
In Figure <ref>, we show the mean line-of-sight outflow velocity plotted against the total IR luminosity (L_IR) for 8 dusty star-forming galaxies (DSFGs) at z=4-5 and 3 quasars at z>6. Here, L_IR is obtained by integrating the flux over λ_rest=8-1000 μm <cit.>, except for P183+05, for which L_IR=1.41L_FIR <cit.>. The outflow in J2310+1855 was also detected in OH^+ (1_1←0_1) absorption <cit.>, and the absorption velocity is comparable to that of OH plotted here.
Generally, there is an increase in v_out with L_IR, though the relation is not clear for the quasars. This may be because the AGN contribution in driving the outflow in J2054-0005 is larger than that in the other two quasars, making the velocity higher than what it would be if star formation were the only driving mechanism. On the other hand, as discussed in <cit.>, if the outflows in these quasars are anisotropic (e.g., conical), it is possible that the random orientation results in no correlation because OH in most high-z sources is unresolved. Further observations including emission lines at higher resolution are necessary to clarify the outflow geometry.
§.§ Suppression of star formation
The mass loading factor, defined as the ratio of the mass outflow rate to star formation rate, is an indicator of the outflow impact on star formation activity. In J2054-0005, we find
η=Ṁ_out/SFR∼1,
implying efficient suppression of star formation. The total molecular gas mass in this quasar was estimated to be M_mol=(3-6)×10^10 M_⊙ by <cit.> from a variety of tracers (dust, CO, [C I] 609 μm, and [C II] 158 μm emission). Using these values, we calculate the depletion time, i.e., the time it takes for the outflow to remove the molecular gas from the galactic center region, as
t_dep=M_mol/Ṁ_out≈(2-4)×10^7 yr.
The relatively short timescale, as compared to the time required for star formation to consume molecular gas in ordinary star-forming galaxies (∼10^9 yr), implies that J2054-0005 is undergoing an episode of rapid quenching of star formation. The depletion time is shorter by an order of magnitude compared to that in nearby ULIRGs where OH outflows have been detected <cit.>. On the other hand, since molecular gas is being evacuated from the galactic center region, the outflow is also quenching the gas reservoir available to feed the SMBH (M_BH≈0.9×10^9 M_⊙; ), thereby limiting its growth.
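A sketch of the mass-loading and depletion-time arithmetic with the quoted values:

```python
# Mass loading factor and depletion time (a sketch).
Mdot, SFR = 2100.0, 1900.0                     # [Msun/yr]
print(f"eta ~ {Mdot / SFR:.1f}")               # ~1.1
for M_mol in (3e10, 6e10):                     # total molecular gas mass [Msun]
    print(f"t_dep ~ {M_mol / Mdot:.1e} yr")    # (1.4-2.9)e7 yr
```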
This result has important implications for the evolution of the central stellar component and its coevolution with the SMBH, which is believed to be the origin of the relation between the SMBH mass and bulge velocity dispersion found in large samples of galaxies from the local Universe to high redshifts (e.g., ). The short depletion timescale inferred from this work, as well as the other dynamical properties, agrees with recent predictions from models of massive galaxy formation at z>6 <cit.>. The models predict that the quasar outflow phase is accompanied by a significant increase of the stellar component within ∼30 Myr, and they support the emergence of the quiescent galaxies recently observed at redshifts z≈3-5 <cit.>. Our result and other recent OH and OH^+ observations <cit.> indicate that molecular outflows may be common in quasars at z>6, but further observations of larger samples are needed to investigate their statistical properties.
§.§ Ṁ_out-L_IR relation
Figure <ref> shows a comparison of the mass outflow rates and the total IR luminosity (L_IR) for high-z quasars and dusty star-forming galaxies with OH outflow detections. Since the uncertainty of the mass outflow rate is dominated by the OH abundance, we plot the product Wv_outr_out in addition to Ṁ_out, because it is the directly measured quantity that is proportional to Ṁ_out in the expanding-shell model under LTE, whereas Ṁ_out depends on the OH abundance, f, and T_ex. Here, W is the equivalent width of the outflow component, v_out is the center velocity, and r_out is the continuum radius (Section <ref>). W was calculated using Equation (<ref>), and we assumed T_ex=50 K and [OH]/[H_2]=1×10^-7 for all sources in calculating Ṁ_out. The plot shows a near-linear relation between the mass outflow rate and L_IR, following the power law log y = k log x + m with k=1.11±0.37 and m=-10.1±4.8, although the scatter is large (R^2=0.50).
As discussed in <cit.> and <cit.>, a correlation can also be found if the mass outflow rate is set to be proportional to Ṁ_out∝ W√(L_IR). This relation circumvents the uncertainty of quantities such as the outflow radius and the OH optical depth. We reproduce a similar relation in Figure <ref>, although W here is the equivalent width of the outflow (absorption) component instead of W_v<-200 as in their works. The relation is a power law (plotted as a solid line) with k=1.12±0.36 and intercept m=-6.2±4.7 (R^2=0.51).
Although the above analysis yields a near-linear relation between Ṁ_out and L_IR, it should be noted that L_IR yields an upper limit to the SFR, because the AGN fraction is not subtracted and it differs among the sources. Therefore, we conclude that the relation implies that the outflows in quasars are likely driven by the combined contribution of the starburst and AGN (e.g., ) and the scatter may be caused by nonuniform outflow geometry and different covering factors.
§.§ Gas escaping into the IGM
Previous observations have revealed the presence of atomic and molecular gas in the halos and circumgalactic medium of high-redshift galaxies (e.g., ). This suggests that the gas was ejected from host galaxies by powerful outflows with terminal velocities that may even exceed the escape velocity. Here, we make a simple analysis to investigate whether the molecular outflow in J2054-0005 is fast enough to transport OH gas into the IGM.
The dynamical mass of J2054-0005, derived from [C II] 158 μm data for a disk inclination angle of 24°, is estimated to be M_dyn≈7.2×10^10 M_⊙ <cit.> within the [C II]-emitting region of radius R≈1 kpc. Assuming a spherically symmetric mass distribution, this yields an escape velocity of v_esc(R)≈√(2GM_dyn/R)≈780 km s^-1 at R. The estimate corresponds to a rotational velocity of 560 km s^-1, implying a very massive host galaxy. The escape velocity is similar to the velocities reported for dusty star-forming galaxies at z=4-5 <cit.>. Since the obtained value is comparable to the mean velocity of the outflow along the line of sight, a significant fraction (up to ∼50%) of the outflowing molecular gas may be able to escape the gravitational potential well and inject metals and dust into the IGM. This is in agreement with recent observations of enriched gas in the circumgalactic medium at z∼6 <cit.>.
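The escape-velocity estimate is reproduced by a few lines (a sketch; spherical symmetry assumed, as in the text):

```python
# Escape velocity at R = 1 kpc (a sketch).
import numpy as np
G, M_sun, kpc = 6.674e-8, 1.989e33, 3.086e21   # cgs units
M_dyn, R = 7.2e10 * M_sun, 1.0 * kpc
v_esc = np.sqrt(2 * G * M_dyn / R) / 1e5       # [km/s]
print(f"v_esc ~ {v_esc:.0f} km/s")             # ~787 km/s, consistent with ≈780
```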
§.§ [O III] 88 μm / [C II] 158 μm luminosity ratio
Recently, the luminosity ratio of [O III] 88 μm to [C II] 158 μm has drawn attention, as it is found to be higher at high redshift compared to local star-forming galaxies (e.g., ) and local dwarf galaxies <cit.>. One of the scenarios proposed to explain the observations is the possibility of outflows affecting the covering factor of photodissociation regions (PDRs) <cit.>. In sources with powerful outflows, the low-ionization PDRs traced by [C II] 158 μm may be cleared, so that their covering factor is decreased relative to that of the H II regions traced by [O III] 88 μm. If that is the case, we may expect to see more powerful outflows in sources with a high luminosity ratio.
So far, only two reionization-epoch quasars (J2054-0005 and J2310+1855) with OH outflow detections have also been detected in [C II] 158 μm and [O III] 88 μm <cit.>. A comparison of their properties shows the following. (1) The luminosity ratio L_[OIII]/L_[CII] is ∼7 times larger in J2054-0005 (2.1±0.4) than in J2310+1855 (0.3±0.1). (2) The mass outflow rate in J2054-0005 is ∼4 times larger than in J2310+1855 (Figure <ref>); the mean outflow velocity is also higher (669±87 km s^-1 in J2054-0005 compared to 334±14 km s^-1 in J2310+1855). On the other hand, J2310+1855 has a slightly larger total IR luminosity (1.9×10^13 L_⊙) than J2054-0005 (1.3×10^13 L_⊙). Although we cannot draw conclusions from only two sources, J2054-0005, with its more powerful outflow, also has a significantly higher luminosity ratio, which supports the scenario of a low PDR covering factor. Previous [C II] 158 μm studies suggest that the inclination angle of the rotating host galaxy of J2054-0005 is relatively low (∼24°; ). If that is the case, and if the outflow propagates predominantly perpendicular to the rotating disk, it is close to the line of sight, and hence the observed velocity is higher than that in J2310+1855.
§.§ OH emission: highly excited molecular gas
The OH 119 μm line has been detected in emission toward a number of nearby AGNs <cit.>. Recently, <cit.> reported a detection of OH emission in one quasar at z≈6. However, the line has been detected only in absorption toward a sample of dusty star-forming galaxies at z=4-5 <cit.>.
Based on multi-line OH observations, <cit.> argue that collisional excitation dominates the population of the ^2Π_3/2 J=5/2 level that leads to radiative decay and 119 μm emission in the active nucleus of the local Seyfert galaxy NGC 1068. On the other hand, <cit.> found that the strength of the OH 119 μm absorption relative to emission is correlated with the 9.7 μm silicate strength, an indicator of the obscuration of the nucleus, in their ULIRG sample. <cit.> argue that the OH emission arises from dust-obscured central regions and that, except in two outliers, radiative excitation may be dominant. Since the peak of the spectral energy distribution of the dust thermal emission in J2054-0005 is close to λ_rest=53 μm, the wavelength that corresponds to the energy difference between the levels ^2Π_1/2 J=3/2 and ^2Π_3/2 J=3/2, absorption of the continuum from dust emission could excite the upper level, which would radiatively decay into the ground state. In that case, it is expected that the ^2Π_1/2 J=3/2→1/2 line at λ_rest=163 μm would also be observed in emission. Given that the 120-μm continuum emission is spatially extended, and the fact that the OH emission is marginally spatially resolved (Section <ref>), it is likely that the excited OH gas is not confined to the compact AGN but distributed in a broader (∼2 kpc) region. Further multi-line observations are necessary to constrain the excitation mechanism of OH molecules in EoR quasars.
§ SUMMARY
We have presented the first ALMA observations of the OH 119 μm (^2Π_3/2 J=5/2-3/2) line toward the reionization-epoch quasar J2054-0005 at redshift z≈6.04 at a resolution of 0.20″×0.17″. The main findings reported in the paper are summarized below.
* The OH 119.23, 119.44 μm doublet line and the 120-μm continuum are detected toward the quasar. The continuum is detected at a high signal-to-noise ratio of 260. The OH line exhibits a P-Cygni profile with absorption and emission components.
* We fitted the OH profile with two double-gaussian functions using a least-squares fitting tool. The fits reveal a blue-shifted absorption component (absorption depth τ≈0.36), which is unambiguously identified as outflowing molecular gas, and an emission component at near-systemic velocity. The absorption peak velocity is v_cen=-669±87 km s^-1, the FWHM line width is 1052±234 km s^-1, and the terminal velocity is v_98=-1574±35 km s^-1, indicating a fast molecular outflow. This is the first quasar with such a high molecular outflow velocity discovered at z>6.
* The mass outflow rate, calculated under the LTE approximation with an OH abundance of [OH]/[H_2]=1×10^-7 and assuming an expanding spherical-shell model with a covering factor f=0.3, is Ṁ_out≈2100 M_⊙ yr^-1. Using an empirical relation from the literature, we obtain Ṁ_out^emp≈1500 M_⊙ yr^-1. The mass outflow rate is comparable to the star formation rate in the host galaxy (Ṁ_out/SFR∼1); it is higher than in the other two quasars with OH detections at z>6 and among the highest at high redshift. At the current mass-loss rate, the molecular gas is expected to be depleted after only t_dep∼10^7 yr, implying rapid quenching of star formation. The dynamical age of the outflow is t_out=r_out/v_out∼7×10^5 yr, implying that the quasar feedback is very young or that we have underestimated the radius of the outflow (r_out).
* The OH line is marginally resolved in the central 2 kpc, suggesting that the outflow extends over this region. Since the critical density and excitation energy for the upper rotational levels of OH are relatively high (n_cr≳10^9 cm^-3, E/k=120 K), the detection of OH emission implies that the molecular gas is highly excited (warm or dense, shocked), possibly by far-IR radiative pumping from dust grains and by collisions with H_2 (e.g., shocks). The OH line is significantly broader than [C II] 158 μm in the central 1-kpc region.
* An analysis of the outflow momentum, kinetic energy, and terminal velocity indicates that the outflow is likely powered by the combined effects of the AGN and star formation. This is supported by the fact that we find a near-linear correlation between Ṁ_out and total luminosity L_IR using a sample of 8 star-forming galaxies at z=4-5 and 3 quasars at z>6.
* The mean outflow velocity is comparable to the estimated escape velocity. This implies that as much as ∼50% of the outflowing molecular gas may be able to escape from the host galaxy and enrich the intergalactic medium with heavy elements.
* We report the discovery of a companion at a projected separation of 2.4″. This source is detected only in continuum, at a significance of 8.9σ.
This paper makes use of the following ALMA data: ADS/JAO.ALMA#2021.1.01305.S, ADS/JAO.ALMA#2019.1.00672.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. TH was supported by Leading Initiative for Excellent Young Researchers, MEXT, Japan (HJH02007) and by JSPS KAKENHI Grant Number 22H01258. DD acknowledges support from the National Science Center (NCN) grant SONATA (UMO-2020/39/D/ST9/00720). This study is supported by JSPS KAKENHI No. 17H06130, 20H01951, 22H04939, and NAOJ ALMA Scientific Research Grant No. 2018-09B.
Software: CASA <cit.>
[Arata et al.(2020)]Ara20 Arata, S., Yajima, H., Nagamine, K., Abe, M., & Khochfar, S. 2020, , 498, 5541
[Bakx et al.(2020)]Bak20 Bakx, T. J. L. C., Tamura, Y., Hashimoto, T., et al. 2020, , 493, 4294
[Barai et al.(2018)]Bar18 Barai, P., Gallerani, S., Pallottini, A., et al. 2018, , 473, 4003
[Bischetti et al.(2021)]Bis21 Bischetti, M., Feruglio, C., D'Odorico, V., et al. 2021, , 605, 244
[Braatz et al.(2021)]Bra21 Braatz, J., et al. 2021, ALMA Cycle 8 2021 Proposer's Guide, ALMA Doc. 8.2 v1.0
[Butler et al.(2023)]But23 Butler, K. M., van der Werf, P. P., Topkaras, T., et al. 2023, ApJ, 944, 134
[Calderón et al.(2016)]Cal16 Calderón, D., Bauer, F. E., Veilleux, S., et al. 2016, , 460, 3052
[Carilli & Walter(2013)]CW13 Carilli, C. L., & Walter, F. 2013, , 51, 105
[Carnall et al.(2020)]Car20 Carnall, A. C., Walker, S., McLure, R. J., et al. 2020, , 496, 695
[Carnall et al.(2023)]Car23 Carnall, A. C., McLeod, D. J., McLure, R. J., et al. 2023, , 520, 3974
[Carniani et al.(2020)]Carn20 Carniani, S., Ferrara, A., Maiolino, R., et al. 2020, , 499, 5136
[Casey et al.(2014)]Cas14 Casey, C. M., Narayanan, D., & Cooray, A. 2014, Physics Reports, 541, 45
[Cicone et al.(2014)]Cic14 Cicone, C., Maiolino, R., Sturm, E., et al. 2014, , 562, A21
[Cicone et al.(2021)]Cic21 Cicone, C., Mainieri, V., Circosta, C., et al. 2021, , 654, L8
[Costa et al.(2018)]Cos18 Costa, T., Rosdahl, J., Sijacki, D., & Haehnelt, M. G. 2018, , 479, 2079
[Decarli et al.(2018)]Dec18 Decarli, R., Walter, F., Venemans, B. P., et al. 2018, , 854, 97
[Decarli et al.(2022)]Dec22 Decarli, R., Pensabene, A., Venemans, B., et al. 2022, , 662, A60
[Decarli et al.(2023)]Dec23 Decarli, R., Pensabene, A., Diaz-Santos, T., et al. 2023, arXiv:2302.04312
[Di Mascia(2023)]DiM23 Di Mascia, F., Carniani, S., Gallerani, S., et al. 2022, , 518, 3667
[Donevski et al.(2020)]Don20 Donevski, D., Lapi, A., Malek, K., et al. 2020, , 644, 144
[Emonts et al.(2016)]Emo16 Emonts, B. H. C., Lehnert, M. D., Villar-Martín, M., et al. 2016, Science, 354, 1128
[Falstad et al.(2015)]Fal15 Falstad, N., González-Alfonso, E., Aalto, S., et al. 2015, , 580, A52
[Farina et al.(2022)]Far22 Farina, E. P., Schindler, J.-T., Walter, F., et al. 2022, , 941, 106
[Feruglio et al.(2010)]Fer10 Feruglio, C. Maiolino, R., Piconcelli, E., et al. 2010, , 518, L155
[Fiore et al.(2017)]Fio17 Fiore, F., Feruglio, C., Shankar, F., et al. 2017, , 601, A143
[Fischer et al.(2010)]Fis10 Fischer, J., Sturm, E., González-Alfonso, E., et al. 2010, , 518, L41
[Fluetsch et al.(2019)]Fle19 Fluetsch, A., Maiolino, R., Carniani, S., et al. 2019, , 483, 4586
[Förster Schreiber et al.(2019)]FS19 Förster Schreiber, N. M., Übler, H., Davies, R. L., et al. 2019, , 875, 21
[Forrest et al.(2020)]For20 Forrest, B., Marsan, Z. C., Annunziantella, M., et al. 2020, , 903, 47
[Fujimoto et al.(2019)]Fuj19 Fujimoto, S., Ouchi, M., Ferrara, A., et al. 2019, , 887, 107
[Fujimoto et al.(2020)]Fuj20 Fujimoto, S., Silverman, J. D., Bethermin, M., et al. 2020, , 900, 1
[Ginolfi et al.(2020)]Gin20 Ginolfi, M., Jones, G. C., Béthermin, M., et al. 2020, , 633, A90
[George et al.(2014)]Geo14 George, R. D., Ivison, R. J., Smail, I., et al. 2014, , 442, 1877
[Girelli et al.(2019)]GBC19 Girelli, G., Bolzonella, M., & Cimatti, A. 2019, , 632, A80
[Glazebrook et al.(2017)]Gla17 Glazebrook, K., Schreiber, C., Labbé, I., et al. 2017, , 544, 7648
[Goicoechea & Cernicharo(2002)]Goi02 Goicoechea, J. R., & Cernicharo, J. 2002, , 576, L77
[Goicoechea et al.(2006)]Goi06 Goicoechea, J. R., Cernicharo, J., Lerate, M. R., et al. 2006, , 641, L49
[González-Alfonso et al.(2014)]GA14 González-Alfonso, E., Fischer, J., Garciá-Carpio, J. et al. 2014, , 561, A27
[González-Alfonso et al.(2017)]GA17 González-Alfonso, E., Fischer, J., Spoon, H. W. W., et al. 2017, , 836, 11
[Gowardhan et al.(2018)]Gow18 Gowardhan, A., Spoon, H., Riechers, D. A., et al. 2018, , 859, 35
[Harikane et al.(2020)]Har20 Harikane, Y., Ouchi, M., Inoue, A. K., et al. 2020, , 896, 93
[Hashimoto et al.(2019a)]Has19 Hashimoto, T., Inoue, A. K., Tamura, Y., et al. 2019, , 71, 109
[Hashimoto et al.(2019b)]Has19b Hashimoto, T., Inoue, A. K., Mawatari, K., et al. 2019, , 71, 71
[Herrera-Camus et al.(2020)]HC20 Herrera-Camus, R., Sturm, E., Garciá-Carpio, J., et al. 2020, , 633, L4
[Hopkins et al.(2012)]HQM12 Hopkins, P. F., Quataert, E., & Murray, N. 2012, , 421, 3522
[Inoue et al.(2016)]Ino16 Inoue, A. K., Tamura, Y., Matsuo, H., et al. 2016, Science, 352, 6293
[Ishibashi et al.(2018)]IFM18 Ishibashi, W., Fabian, A. C., & Maiolino, R. 2018, , 476, 512
[Izumi et al.(2019)]Izu19 Izumi, T., Onoue, M., Matsuoka, Y., et al. 2019, , 71, 111
[Izumi et al.(2021)]Izu21 Izumi, T., Matsuoka, Y., Fujimoto, S., et al. 2021, , 914, 36
[Katz et al.(2022)]Kat22 Katz, H., Rosdahl, J., Kimm, T., et al. 2022, , 510, 5603
[Kepley et al.(2020)]Kep20 Kepley, A. A., Tsutsumi, T., Brogan, C. L., et al. 2020, , 132, 024505
[Kim & Ostriker(2015)]KO15 Kim, C.-G., & Ostriker, E. C. 2015, , 802, 99
[Kormendy & Ho(2013)]KH13 Kormendy, J., Ho, L. C. 2013, , 51, 511
[Labbe et al.(2023)]Lab23 Labbe, I., van Dokkum, P., Nelson, E., et al. 2023, , 616, 266
[Lapi et al.(2018)]Lap18 Lapi, A., Pantoni, L., & Zanisi, L. 2018, , 857, 22
[Laporte et al.(2019)]Lap19 Laporte, N., Katz, H., Ellis, R. S., et al. 2019, , 487, L81
[Leipski et al.(2014)]Lei14 Leipski, C., Meisenheimer, K., Walter, F., et al. 2014, , 785, 154
[Leitherer et al.(1999)]Lei99 Leitherer, C., Schaerer, D., Goldader, J. D., et al. 1999, , 123, 3
[Li et al.(2020a)]Li20a Li, J., Wang, R., Riechers, D., et al. 2020a, , 889, 162
[Li et al.(2020b)]Li20b Li, J., Wang, R., Cox, P., et al. 2020b, , 900, 131
[Looser et al.(2023)]Los23 Looser, T. J., D'Eugenio, F., Maiolino, R., et al. 2023, arXiv:2302.14155
[Lupi et al.(2020)]Lup20 Lupi, A., Pallottini, A., Ferrara, A., et al. 2020, , 496, 5160
[Lutz et al.(2020)]Lut20 Lutz, D., Sturm, E., Janssen, A., et al. 2020, , 633, A134
[Maiolino et al.(2012)]Mai12 Maiolino, R., Gallerani, S., Neri, R., et al. 2012, , 425, L66
[Mangum & Shirley(2015)]MS15 Mangum, J. G., & Shirley, Y. L. 2015, , 127, 266
[Mercedes-Feliz et al.(2023)]MF23 Mercedes-Feliz, J., Anglés-Alcázar, D., Hayward, C. C., et al. 2023, arXiv:2301.01784
[Merlin et al.(2019)]Mer19 Merlin, E., Fortuni, F., Torelli, M., et al. 2019, , 490, 3309
[Meyer et al.(2022)]Mey22 Meyer, R. A., Walter, F., Cicone, C., et al. 2022, , 927, 152
[Murray et al.(2005)]MQT05 Murray, N., Quataert, E., & Thompson, T. A. 2005, , 618, 569
[Murray et al.(2010)]MQT10 Murray, N., Quataert, E., & Thompson, T. A. 2010, , 709, 191
[Nanayakkara et al.(2023)]Nan23 Nanayakkara, T., Glazebrook, K., Jacobs, C., et al. 2023, arXiv:2212.11638
[Nguyen et al.(2018)]Ngu18 Nguyen, H., Dawson, J. R., Miville-Deschenes, M.-A., et al. 2018, , 862, 49
[Neeleman et al.(2021)]Nee21 Neeleman, M., Novak, M., Venemans, B. P., et al. 2021, , 911, 141
[Novak et al.(2019)]Nov19 Novak, M., Bañados, E., & Decarli, R. 2019, , 881, 63
[Novak et al.(2020)]Nov20 Novak, M., Venemans, B. P., Walter, F., et al. 2020, , 904, 131
[Onoue et al.(2020)]Ono20 Onoue, M., Banados, E., Mazzucchelli, C., et al. 2020, , 898, 105
[Pallottini et al.(2019)]Pal19 Pallottini, A., Ferrara, A., Decataldo, D., et al. 2019, , 487, 1689
[Pantoni et al.(2019)]Pan19 Pantoni, L., Lapi, A., Massardi, M., Goswami, S., & Danese, L. 2019, , 880, 129
[Pensabene et al.(2022)]Pen22 Pensabene, A., van der Werf, P., Decarli, R., et al. 2022, , 667, A9
[Pickett et al.(1998)]Pic98 Pickett, H. M., Roynter, R. L., Cohen, E. A., et al. 1998, J. Quant. Spectrosc. & Rad. Transfer, 60, 883
[Ren et al.(2023)]Ren23 Ren, Y. W., Fudamoto, Y., Inoue, A. K., et al. 2023, , 945, 69
[Richings & Faucher-Giguere(2018)]RFG18 Richings, A. J., & Faucher-Giguere, C.-A. 2018, , 474, 3673
[Roberts-Borsani(2020)]RB20 Roberts-Borsani, G. W. 2020, , 494, 4266
[Rugel et al.(2018)]Rug18 Rugel, M. R., Beuther, H., Bihr, S., et al. 2018, , 618, A159
[Runco et al.(2020)]Run20 Runco, J. N., Malkan, M. A., Fernández-Ontiveros, J. A., Spinoglio, L., & Pereira-Santaella, M. 2020, , 905, 57
[Rupke et al.(2005)]Rup05 Rupke, D. S., Veilleux, S., & Sanders, D. B. 2005, , 632, 751
[Rupke et al.(2021)]RTD21 Rupke, D. S. N., Thomas, A. D., & Dopita, M. A. 2021, , 503, 4748
[Salak et al.(2020)]Sal20 Salak, D., Nakai, N., Sorai, K., & Miyamoto, Y. 2020, , 901, 151
[Santini et al.(2021)]San21 Santini, P., Castellano, M., Merlin, E., et al. 2021, , 652, A30
[Schneider et al.(2015)]Sch15 Schneider, R., Bianchi, S., Valiante, R., Risaliti, G., & Salvadori, S. 2015, , 579, A60
[Scholtz et al.(2023)]Sch23 Scholtz, J., Maiolino, R., Jones, G. C., & Carniani, S. 2023, , 519, 5246
[Schöier et al.(2005)]Sch05 Schöier, F. L., van der Tak, F. F. S., van Dishoeck, E. F., & Black, J. H. 2005, , 432, 369
[Shao et al.(2022)]Sha22 Shao, Y., Wang, R., Weiss, A., et al. 2022, , 668, A121
[Spilker et al.(2018)]Spi18 Spilker, J. S., Aravena, M., Béthermin, M., et al. 2018, Science, 361, 1016
[Spilker et al.(2019)]Spi19 Spilker, J. S., Bezanson, R., Weiner, B. J., Whitaker, K. E., & Williams, C. C. 2019, , 883, 81
[Spilker et al.(2020a)]Spi20a Spilker, J. S., Phadke, K. A., Aravena, M., et al. 2020a, , 905, 85
[Spilker et al.(2020b)]Spi20b Spilker, J. S., Aravena, M., Phadke, K. A., et al. 2020b, , 905, 86
[Spinoglio et al.(2005)]Spi05 Spinoglio, L., Malkan, M. A., Smith, H. A., González-Alfonso, E., & Fischer, J. 2005, , 623, 123
[Spoon et al.(2013)]Spo13 Spoon, H. W. W., Farrah, D., Lebouteiller, V., et al. 2013, , 775, 127
[Straatman et al.(2014)]Str14 Straatman, C. M. S., Labbé, I., Spitler, L. R., Spoon, H., & Sturm, E. 2014, , 783, L14
[Stone et al.(2016)]Sto16 Stone, M., Veilleux, S., Meléndez, M., et al. 2016, , 826, 111
[Stone et al.(2018)]Sto18 Stone, M., Veilleux, S., González-Alfonso, E., et al. 2018, , 853, 132
[Storey et al.(1981)]Sto81 Storey, J. W. V., Watson, D. M., & Townes, C. H. 1981, , 244, L27
[Sturm et al.(2011)]Stu11 Sturm, E., González-Alfonso, E., Veilleux, S., et al. 2011, , 733, L16
[Sugahara et al.(2019)]Sug19 Sugahara, Y., Ouchi, M., Harikane, Y., et al. 2019, , 886, 29
[Sugahara et al.(2022)]Sug22 Sugahara, Y., Inoue, A. K., Fudamoto, Y., et al. 2022, , 935, 119
[Tamura et al.(2019)]Tam19 Tamura, Y., Mawatari, K., Hashimoto, T., et al. 2019, , 874, 27
[Tacchella et al.(2015)]Tac15 Tacchella, S., Carollo, C. M., Renzini, A., et al. 2015, Science, 348, 314
[Tripodi et al.(2022)]Tri22 Tripodi, R., Feruglio, C., Fiore, F., et al. 2022, , 665, A107
[Tumlinson et al.(2017)]Tum17 Tumlinson, J., Peeples, M. S., & Werk, J. K. 2017, , 55, 389
[The CASA team et al.(2022)]TCT22 The CASA team, Bean, B., Bhatnagar, S., et al. 2022, arXiv:2210.02276
[Ura et al.(2023)]Ura23 Ura, R., Hashimoto, T., Inoue, A. K., et al. 2023, , 948, 3
[Valentino et al.(2020)]Val20 Valentino, F., Tanaka, M., Davidzon, I., et al. 2020, , 889, 93
[Vallini et al.(2021)]Val21 Vallini, L. Ferrara, A., Pallottini, A., Carniani, S., & Gallerani, S. 2021, , 505, 5543
[Veilleux et al.(2005)]Vei05 Veilleux, S., Cecil, G., & Bland-Hawthorn, J. 2005, , 43, 769
[Veilleux et al.(2013)]Vei13 Veilleux, S., Meléndez, M., Sturm, E., et al. 2013, , 776, 27
[Veilleux et al.(2020)]Vei20 Veilleux, S., Maiolino, R., Bolatto, A. D., & Aalto, S. 2020, The Astronomy & Astrophysics Review, 28, 2
[Venemans et al.(2017)]Ven17 Venemans, B. P., Walter, F., Decarli, R., et al. 2017, , 845, 154
[Venemans et al.(2018)]Ven18 Venemans, B. P., Decarli, R., Walter, F., et al. 2018, , 866, 159
[Venemans et al.(2020)]Ven20 Venemans, B. P., Walter, F., Neeleman, M., et al. 2020, , 904, 130
[Walch & Naab(2015)]WN15 Walch, S., & Naab, T. 2015, , 451, 2757
[Walter et al.(2009)]Wal09 Walter, F., Riechers, D., Cox, P., et al. 2009, Nature, 457, 699
[Walter et al.(2018)]Wal18 Walter, F., Riechers, D., Novak, M. 2018, , 869, L22
[Wang et al.(2010)]Wan10 Wang, R., Carilli, C. L., Neri, R., et al. 2010, , 714, 699
[Wang et al.(2013)]Wan13 Wang, R., Wagg., J., Carilli, C. L., et al. 2013, , 773, 44
[Weinreb et al.(1963)]Wei63 Weinreb, S., Barrett, A. H., Meeks, M. L., & Henry, J. C. 1963, Nature, 200, 829
[Witstok et al.(2022)]Wit22 Witstok, J., Renske, S., Maiolino, R., et al. 2022, , 515, 1751
[Wu et al.(2021)]Wu21 Wu, Y., Zheng, C., Neeleman, M., et al. 2021, Nature Astronomy, 5, 1110
[Zubovas & King(2014)]ZK14 Zubovas, K., & King, A. R. 2014, , 439, 400
aasjournal
|
http://arxiv.org/abs/2307.02710v1
|
20230706012447
|
Bulk reconstruction for normalizable and non-normalizable modes: p-forms and the graviton
|
[
"Budhaditya Bhattacharjee",
"Jyotirmoy Mukherjee"
] |
hep-th
|
[
"hep-th"
] |
Bulk reconstruction for normalizable and non-normalizable modes: p-forms and the graviton
Budhaditya Bhattacharjee
Jyotirmoy Mukherjee
==========================================================================================
§ INTRODUCTION
The AdS/CFT correspondence <cit.> is a duality between a theory of quantum gravity in Anti-de Sitter spacetime and a conformal field theory living on a codimension-1 surface. It is a well-studied instance of the holographic principle <cit.>. One of the primary questions is how to describe bulk physics in terms of holographic boundary operators. One way to approach this problem is to write bulk fields that solve the bulk wave equations in terms of the boundary operators. Semi-classically, one way to achieve this is by inverting the extrapolate AdS/CFT dictionary, as described in <cit.>. Explicit calculations were done in the seminal works of Hamilton, Kabat, Lifschytz and Lowe <cit.>, with further extensions covered in <cit.>. The general idea and some of the explicit calculations were reviewed in <cit.>.
The HKLL reconstruction is generally developed for the normalizable mode <cit.>. This is a natural thing to do, since only the normalizable mode has a well-defined holographic dual in terms of CFT operators; the non-normalizable mode is instead best thought of as a deformation of the CFT. However, from the bulk point of view, there is no a priori reason preventing a formal construction of the HKLL kind for the non-normalizable mode. An additional motivation lies in the fact that within the Breitenlohner-Freedman mass window <cit.>, both modes are acceptable. To be explicit, it is well known that the BF bound for p-forms on AdS_d+1 is m^2_BF≥ -(d/2-p)^2 <cit.>. Note that when the BF bound is saturated, the normalizable and non-normalizable solutions become identical. Therefore, both solutions of the bulk wave equation for p-forms are acceptable as genuine operators in the CFT spectrum, and it is necessary to develop the non-normalizable mode as well in order to obtain the complete picture in the CFT. A similar statement holds for the graviton and symmetric higher-spin fields. In <cit.>, the formal kernels of the scalar were derived for both modes in Poincaré and global coordinates. The discussion there covers some technical aspects of the prescription to evaluate the kernels via the mode-sum and Green's function approaches.
Given <cit.> and <cit.>, a natural question is to consider the reconstruction prescription for fields with gauge symmetries. Purely from the perspective of solutions of differential equations, this is an interesting direction to pursue due to the rich technical structure, and we expect to learn about the organization of solutions of such differential equations. In this work, we evaluate the kernels in Poincaré coordinates for fields with spin, focusing explicitly on the p-forms and the graviton. We derive kernels in the Poincaré patch for p-forms and gravitons for normalizable as well as non-normalizable modes. The construction is complicated by the fact that the gauge needs to be fixed and the bulk field operators expressed in terms of gauge-invariant boundary currents:
A^p(b) =∫𝐊_N(b;x) j^p_N(x)+∫𝐊_nN(b;x)j^p_nN(x),
where A^p denotes a contravariant antisymmetric p-form and j^p is the boundary current; b stands for the location of the bulk field. The organization of the solutions of the bulk equation of motion has been studied in depth <cit.>. In order to obtain the bulk field, one integrates over the boundary coordinate x. Since the bulk wave equation is a second-order differential equation, it naturally has two independent solutions, which we call the normalizable and non-normalizable modes. Our goal is to isolate the kernels for these modes for p-forms and the graviton.
The organization of the paper is as follows. In section (<ref>), we briefly review the HKLL reconstruction of the massive scalar field. We also present kernels for the normalizable and non-normalizable modes in the massless limit that will be used as a cross-check for p-forms in the p→0 limit. We derive kernels in the Poincaré patch of AdS for both normalizable and non-normalizable modes in two ways – using a mode-sum approach as well as a spacelike Green's function approach. In section (<ref>), we obtain kernels for the normalizable and non-normalizable modes of p-forms. We first fix the gauge of the bulk fields using a holographic gauge-fixing procedure <cit.> and obtain the bulk equations for p-forms. Using the mode-sum approach, we obtain kernels for both modes in even and odd AdS. In section (<ref>), we derive kernels for both modes of the graviton using the mode-sum approach in the Poincaré patch of AdS in even and odd dimensions. Using transformation properties of the hypergeometric function, we extract the covariant piece from the kernels for both modes. In section (<ref>), we present a direct method for constructing the spacelike kernel for p-forms and the graviton. We show that, under a suitable rescaling of the bulk fields, the bulk wave equation can be written in an AdS-covariant form. From the asymptotic expansion of the rescaled bulk fields and using Green's theorem, we read off the spacelike kernels for both modes of p-forms and the graviton. These kernels precisely match the kernels obtained in the mode-sum approach.
§ REVIEW OF HKLL RECONSTRUCTION
The Lorentzian AdS/CFT dictionary tells us that local operators in the bulk geometry can be described in terms of non-local operators in the boundary CFT <cit.>. The explicit procedure for establishing this map is known as bulk reconstruction. In this work, we focus on the procedure developed by Hamilton, Kabat, Lifschytz and Lowe. The HKLL[named for the authors Hamilton, Kabat, Lifschytz and Lowe] bulk reconstruction scheme was originally developed in the series of papers <cit.>. Further work in this direction has since been carried out <cit.>. For a detailed review of HKLL bulk reconstruction, see <cit.>.
The HKLL bulk reconstruction procedure is executed at the semiclassical level, where large N and large 't Hooft coupling are assumed. It is then reasonable to treat bulk fields (or operators) as free. As a mathematical expression, the bulk reconstruction map is represented as follows
ϕ(z, x) = ∫_∂ℳ𝐊(z,x; x')𝒪(x')d^dx'
where ϕ stands for the bulk operator and 𝒪(x) is the corresponding CFT operator on the boundary. The integration is over the boundary surface ∂ℳ, and z is the holographic direction. In the original HKLL papers, the bulk field was chosen to have a specific fall-off behavior at the boundary (z → 0) given by ϕ(z, x) ∼ z^Δϕ_0(x). Such fields are known as normalizable when Δ > d/2 - 1 (the Breitenlohner-Freedman bound <cit.>), which follows from unitarity of the boundary CFT. The exponent Δ also corresponds to the conformal dimension of the CFT operator 𝒪(x).
The HKLL reconstruction provides us with a prescription to explicitly evaluate the kernel K(z, x; x') in (<ref>). The idea is to solve the bulk wave equation for the free field and then invert the equations by choosing the appropriate boundary value of the bulk field. In the next sub-section, we review some results for the kernel for the free scalar field.
§.§ HKLL reconstruction for scalar
In this section, we briefly review the main results of <cit.>, where the bulk reconstruction kernels were explicitly evaluated for the free scalar field. In <cit.>, the non-normalizable mode was also considered alongside the usual normalizable mode. Additionally, in <cit.>, the kernel for the normalizable mode was analytically continued beyond the Breitenlohner-Freedman (BF) bound. One of the main motivations for studying the non-normalizable mode lies in the fact that the two modes are indistinguishable (by their fall-off behaviors) within the BF window d/2 - 1 ≤Δ≤d/2 + 1. Therefore both modes are identified with CFT operators on the boundary.
The HKLL bulk reconstruction procedure begins with solving the bulk wave equation (in empty AdS_d + 1) for the free massive scalar. It is written (in Lorentzian Poincaré coordinates {z, t, x}) as follows
(z^2∂^2_z + z^2∂^2_x - z(d - 1)∂_z - m^2)Φ(z,x⃗) = 0
where {x}≡{t, x⃗}. The solution to this equation is
Φ(z,x⃗) = ∫d^dq/(2π)^de^i q. xz^d/2(a(q)J_ν(q z) + b(q)Y_ν(q z))
with ν = √(d^2/4 + m^2)≡Δ - d/2. To invert this solution and determine the two undefined functions a(q) and b(q), the boundary data needs to be fixed. The natural choice for the free massive scalar field is
Φ(z, x)|_z → 0∼ z^Δϕ_0(x) + z^d - Δj_0(x)
where the two pieces of boundary data ϕ_0 and j_0 correspond to the normalisable and non-normalisable modes respectively. Thus, the bulk fields can be written as
Φ(z, x) = ∫𝐊_N(z, x; x')ϕ_0(x')d^dx' + ∫𝐊_nN(z, x; x')j_0(x')d^dx'
By comparing (<ref>) and (<ref>) (and after some algebra), the two kernels 𝐊_N and 𝐊_n N can be evaluated. The expressions <cit.> are as follows
𝐊_N(z, x; x') = Γ(d/2)/2 π^d/2z^Δ/(√(Δx⃗^2 - Δ t^2))^d _2F_1(d/2, 1; 1 - d/2 + Δ; - z^2/Δx⃗^2 - Δ t^2)
𝐊_n N(z, x; x') = Γ(d/2)/2 π^d/2z^d - Δ/(√(Δx⃗^2 - Δ t^2))^d _2F_1(d/2, 1; 1 + d/2 - Δ; - z^2/Δx⃗^2 - Δ t^2)
where Δ x^2 = |x⃗ - x⃗' |^2 and Δ t^2 = (t - t')^2. It is important to note that the kernels written here are not explicitly AdS covariant. However, directly checking the action of the AdS isometries shows that the reconstructed fields are AdS covariant <cit.>. Additionally, the support of the kernel in (<ref>) is the full boundary, whereas the causal wedge of the bulk point (z, x) only connects it to the spacelike separated points. So ideally, the kernel should be both spacelike and AdS covariant.
This is the mode-sum approach to evaluating the bulk reconstruction kernels for the normalizable and non-normalizable modes. An equivalent approach is via using a spacelike Green's function. In <cit.>, a Green's function is constructed for the normalizable mode, and a spacelike, AdS-covariant kernel is obtained via Green's theorem. In a similar spirit, a Green's function is considered for the non-normalizable mode <cit.>, and a spacelike, AdS-covariant kernel is obtained. The respective kernels are as follows
𝐊_N = (-1)^d-1/22^Δ - dΓ(1 + Δ - d/2)/2π^d/2Γ(1 + Δ - d)lim_z' → 0(σ z')^Δ - dθ(spacelike)
𝐊_n N = -2^-ΔΓ(Δ)tanπΔ/2π^d/2Γ(Δ - d/2)lim_z' → 0(σ z')^-Δθ(spacelike)
where σ is the (covariant) geodesic length in pure AdS. It is clear that (<ref>) and (<ref>) are not equal to (<ref>) and (<ref>) respectively. However, it is important to note that the kernel is not unique, and it is possible to add or subtract terms to it as long as those terms vanish when integrated against the boundary field(s). Using this and an antipodal map-inspired redefinition of the boundary data <cit.>, it can be shown that (<ref>) and (<ref>) are equivalent to (<ref>) and (<ref>) respectively.
The case of the massless free scalar field is of special importance to this work. For such a field, the kernels are given by
𝐊_N = (-1)^d-1/2Γ(1 + d/2)/2π^d/2θ(spacelike)
𝐊_n N = -2^-dΓ(d)tanπ d/2π^d/2Γ(d/2)lim_z' → 0(σ z')^-dθ(spacelike)
Likewise, the expressions (<ref>) and (<ref>) become
𝐊_N(z, x; x') = Γ(d/2)/2 π^d/2z^d/(√(Δ x^2 - Δ t^2))^d _2F_1(d/2, 1; 1 + d/2; - z^2/Δ x^2 - Δ t^2)
𝐊_n N(z, x; x') = Γ(d/2)/2 π^d/21/(√(Δ x^2 - Δ t^2))^d _2F_1(d/2, 1; 1 - d/2; - z^2/Δ x^2 - Δ t^2)
As a consistency check of our results for the p-form field, we consider the massless free scalar field, which coincides with the p = 0 form field.
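The mode solution (<ref>) itself is easy to verify numerically. Below is a minimal sketch (the values of d, m, q, z are our own illustrative choices, not from the text) checking that z^d/2J_ν(qz) solves the radial part of the scalar wave equation:

```python
# Numerical check that z^(d/2) J_nu(q z) solves the radial scalar equation
#   z^2 f'' - (d-1) z f' + (q^2 z^2 - m^2) f = 0,  nu = sqrt(d^2/4 + m^2).
from mpmath import mp, besselj, diff, sqrt, mpf

mp.dps = 25
d, m, q = 4, mpf('1.3'), mpf('0.9')      # sample values (our choice)
nu = sqrt(d**2/mpf(4) + m**2)
f = lambda z: z**(mpf(d)/2) * besselj(nu, q*z)
z0 = mpf('0.7')
res = (z0**2*diff(f, z0, 2) - (d - 1)*z0*diff(f, z0, 1)
       + (q**2*z0**2 - m**2)*f(z0))
print(res)   # ~ 0 up to numerical noise
```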
With this, we conclude a brief review of the HKLL bulk reconstruction procedure for free massive scalar fields in empty AdS. For more details and technicalities, the reader is directed to the references mentioned at the beginning of this section. In the following sections, we extend this procedure to the free massless p-form field and the linearized graviton in empty AdS.
§ HKLL RECONSTRUCTION FOR P-FORMS
Theories with gauge symmetries do not admit local gauge-invariant observables. One can construct gauge-invariant operators using Wilson loops or products of Wilson loops,
W(z,x)=lim_ϵ→ 0∮_C A,
where A is the p-form gauge potential and C is a small contour of size ϵ. One may therefore wonder whether the constructed bulk field operators have any physical meaning, in particular in a gravitational theory where there are no local diffeomorphism-invariant observables. It has been shown, however, that non-local gauge-invariant operators can be traded for local operators in a fixed gauge <cit.>. Commutators of these operators can nevertheless exhibit non-locality due to the Gauss law constraint.
In this paper, we follow a similar prescription to obtain the smearing functions; the resulting operators in the bulk of AdS can be mapped to operators in the boundary theory through the extrapolate dictionary <cit.>
lim_z→ 0√(-g)F^z,μ_1⋯μ_p(z,x)=j^μ_1⋯μ_p(x).
In this section, we explicitly derive the spacelike bulk reconstruction kernels corresponding to the normalizable and non-normalizable modes of p-forms. We focus our attention on the Poincaré patch of AdS. To derive the bulk reconstruction kernel, the first step is to write the bulk wave equation of p-form fields on the Poincaré patch.
The equation of motion of p-forms is given by
∇_MF^M,J_1,…,J_p = 0,
where F^M,J_1,…,J_p is the field strength corresponding to the antisymmetric gauge potential A^J_1,…,J_p. We use capitalized Latin indices A, B,… for bulk coordinates, while Greek indices α,β,… denote boundary coordinates. Since the action has a U(1) gauge symmetry, we make gauge choices to obtain the bulk equation for the transverse gauge potential. We choose
A^J_1,…,J_p-1,z = 0
∇_α_1A^α_1,α_2,…,α_p = 0
These two gauge conditions together imply the covariant gauge condition ∇_J_1A^J_1,…,J_p = 0. We shall use this condition to derive the equation of motion. We impose the gauge conditions in (<ref>) and write the bulk wave equation of p-forms on the Poincaré patch. [Details of this derivation are shown in the Appendix (<ref>)]
z^2∂^2_zA_J + (2 p - d + 1)z∂_zA_J + z^2 ∂_α∂^αA_J = 0, A_J_1,…,J_p≡ A_J
The solution to the wave equation in this background is obtained as
A_J(z, x) = ∫_q ≥ 0d^dq/(2 π)^dz^1/2 (d-2 p)(c_J(q) J_1/2 (d-2 p)(|q| z)+ d_J(q) Y_1/2 (d-2 p)(|q| z))e^i q. x.
Here x≡ (t,x⃗) and q ≡ (ω, k⃗) with q=√(ω^2-|k|^2). Let us also define the parameter ν=d/2-p. The near-boundary behavior of the solution can be extracted from the expansion of the Bessel functions at z=0. For now, we focus on the case where d/2-p=ν is not an integer. This corresponds to the case where d is odd, i.e., even AdS. The massive p-form is briefly discussed in Appendix <ref>.
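As a quick sanity check, the mode solution above can be verified against the bulk equation numerically. A minimal sketch with sample values (our choice, not from the text):

```python
# Check that z^((d-2p)/2) J_{+/-nu}(q z) solves the gauge-fixed p-form equation
#   z^2 A'' + (2p - d + 1) z A' + q^2 z^2 A = 0  (momentum space, q^2 > 0).
from mpmath import mp, besselj, diff, mpf

mp.dps = 25
d, p, q = 5, 2, mpf('0.9')           # sample values (our choice)
nu = (d - 2*p)/mpf(2)                # = 1/2 here
z0 = mpf('0.7')
for order in (nu, -nu):              # both independent Bessel solutions
    f = lambda z, o=order: z**nu * besselj(o, q*z)
    res = (z0**2*diff(f, z0, 2) + (2*p - d + 1)*z0*diff(f, z0, 1)
           + q**2*z0**2*f(z0))
    print(order, res)                # ~ 0 for both orders
```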
§.§ Even AdS
We now focus on the mode
expansions for p-forms and write down the kernels in even AdS_d + 1. The odd AdS case is discussed in the Appendix <ref>.
In even dimensional AdS, there are two independent solutions to the bulk wave equation, which are J_ν(|q|z) and J_-ν(|q|z). This is due to the fact that ν∈1/2ℤ for odd d and therefore
Y_ν(|q|z)=J_ν(|q|z)cos(νπ)-J_-ν(|q|z)/sin(νπ).
The solution to the wave equation in this background can be expressed
A_J(z, x) = ∫_q ≥ 0d^dq/(2 π)^dz^ν[a_J,ν(q) J_ν(|q| z)+ b_J,ν(q) J_-ν(|q| z)]e^i q. x.
The coefficients a_J,ν and b_J,ν are given by
a_J,ν(|q|)=c_J(|q|)+cot(νπ) d_J(|q|), b_J,ν(|q|)=-d_J(|q|)/sin(νπ).
The coefficient of each term in the expansion of the solution around z=0 can be expressed as a function of x.
A_J,ν(x,z) =z^-p∑_n=0^∞(z^Δ+2nϕ_J^(2 n)(x)+z^d-Δ+2nj_J^(2 n)(x)) = A^N_J(z, x)+ A^n N_J(z, x) ,
where Δ=d-p=ν+d/2 and two modes are given by
A^N_J(z, x) = ∑_n = 0^∞ z^2 n+2νϕ_J^(2 n)(x) ,
A^n N_J(z, x) = ∑_n = 0^∞ z^2 n j_J^(2 n)(x)
The coefficients in each order can be expressed
ϕ_J^(2n)(x) = (-1)^n/2^ν4^nΓ( n + 1)Γ(ν + n + 1)∫_q ≥ 0d^d q/(2 π)^da_J,ν(q)|q|^2n+νe^i q. x
j_J^(2n)(x) = (-1)^n/2^-ν4^nΓ( n + 1)Γ(-ν + n + 1)∫_q ≥ 0d^d q/(2 π)^db_J,ν(q)|q|^2n-νe^i q. x
§.§ Kernels as mode-sum integrals
The integrals in (<ref>) and (<ref>) can be inverted to obtain a_J,ν(|q|) and b_J,ν(|q|). In order to do so, we pick two independent pieces of data from the expansion coefficients of the two modes, i.e., ϕ_J^(2n)(x) and j_J^(2n)(x). We choose the coefficients corresponding to n=0 as data; once these are extracted, the rest of the terms can be expressed in terms of them. The resultant expressions for a_J,ν(|q|) and b_J,ν(|q|) are obtained as Fourier transforms of the boundary fields.
The coefficients corresponding to n=0 of the normalizable and non-normalizable mode expansions are given by
ϕ_J^(0)(x) = 1/2^νΓ(ν + 1)∫_q ≥ 0d^d q/(2 π)^da_J,ν(q)|q|^νe^i q. x
j_J^(0)(x) = 1/2^-νΓ(-ν + 1)∫_q ≥ 0d^d q/(2 π)^db_J,ν(q)|q|^-νe^i q. x
Using Fourier transformation, we extract a_J(q) and b_J(q) in terms of these boundary data and q.
a_J,ν(|q|) = Γ(ν+1)2^ν∫ |q|^-νϕ_J^( 0)(x')e^-i q. x'd^dx'
b_J,ν(|q|) =Γ(1-ν)/2^ν∫ |q|^νj_J^( 0)(x')e^- i q. x'd^dx'
From these expressions, it is easy to read off the kernel integrals
𝐊_N(z,x;x') = 2^νΓ(ν+1)∫_q ≥ 0d^dq/(2 π)^d z^ν|q|^-νJ_ν(|q| z)e^i q. (x - x')
𝐊_nN(z,x;x') = 2^-νΓ(1-ν)∫_q ≥ 0d^dq/(2 π)^d z^ν|q|^νJ_-ν(|q| z)e^i q. (x - x')
§.§ Explicit evaluation of the Poincaré kernel integrals
In this section, we evaluate the kernels (<ref>) and (<ref>) in the Poincaré patch of the even-dimensional AdS space. We use the following integral <cit.>, where ζ stands for any Bessel function and μ, ν are real numbers.
∫_|q| ≥ 0d^dq/(2π)^d|q|^μζ_ν(q z)e^i q. (x - x') = 1/π(2π)^d/2X^d/2 - 1∫_x = 0^∞x^μ + d/2ζ_ν(x z)K_d/2-1(x X)dx
For the normalizable kernel, we find that in terms of the notation in the above expression, we have ζ_ν = J_ν and μ = -ν. Thus the integral becomes
𝐊_N(z, x; x') = 2^νΓ(ν+1)z^ν/π (2 π)^d/2 X^d/2 - 1∫_ 0^∞dx x^d/2-νJ_ν(x z)K_d/2 - 1(x X)
To evaluate this integral, we need to use the following identity
∫_0^∞x^-λK_μ(a x)J_ν(b x) = b^νΓ(ν -λ + μ + 1/2)Γ(ν -λ - μ + 1/2)/2^λ + 1Γ(ν + 1)a^-λ + ν + 1
× _2F_1(ν -λ + μ + 1/2, ν -λ - μ + 1/2; ν + 1; -b^2/a^2)
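This identity can be spot-checked numerically. The sketch below uses the p-form values d = 5, p = 2 (so λ = -p, μ = d/2 - 1 and Bessel order ν = d/2 - p, matching the identifications made next); the specific values of a and b are our own illustrative choice:

```python
# Numerical spot-check of the K-J integral identity quoted above.
from mpmath import mp, quad, besselj, besselk, gamma, hyp2f1, mpf, inf

mp.dps = 20
d, p = 5, 2
lam, mu, nu = -p, d/mpf(2) - 1, d/mpf(2) - p
a, b = mpf('1.3'), mpf('0.4')        # a = X, b = z, with z < X
lhs = quad(lambda x: x**(-lam)*besselk(mu, a*x)*besselj(nu, b*x), [0, inf])
rhs = (b**nu * gamma((nu - lam + mu + 1)/2) * gamma((nu - lam - mu + 1)/2)
       / (2**(lam + 1) * gamma(nu + 1) * a**(nu - lam + 1))
       * hyp2f1((nu - lam + mu + 1)/2, (nu - lam - mu + 1)/2, nu + 1, -b**2/a**2))
print(lhs, rhs)                      # agree to working precision
```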
Identifying λ = ν-d/2=-p and μ = d/2 - 1 (with a = X and b = z), we have the following expression for the kernel corresponding to the normalizable modes
𝐊_N(z, x; x')
= z^Δ -pΓ(d/2)/2 π^d/2 + 1X^d _2F_1(d/2, 1;1-d/2+Δ; - z^2/X^2)
For p=0 or the 0-form, the kernel reduces to the kernel of a massless scalar. Similarly, for p = 1 the kernel reduces to that of a free Maxwell field in even AdS
𝐊_N(z, x; x')|_p=0 = z^ΔΓ(d/2)/2 π^d/2 + 1X^d _2F_1(d/2, 1;d/2+1; - z^2/X^2)
𝐊_N(z, x; x')|_p=1 = z^Δ-1Γ(d/2)/2 π^d/2 + 1X^d _2F_1(d/2, 1;d/2; - z^2/X^2)
The kernel in (<ref>) provides a consistency check of our computations. Similarly, the kernel for the non-normalizable mode evaluates to
𝐊_nN(z, x; x') = Γ(d/2)/2 π^d/2 + 1X^d _2F_1(d/2, 1; 1 - Δ + d/2; - z^2/X^2)
§.§ Covariant form of the kernels
Here we consider certain transformations that cast the kernels (<ref>) and (<ref>) in terms of an AdS-covariant piece (and an extra term).
Using the following variable transformation identity of the hypergeometric _2F_1
_2F_1(a, b; c; z) = Γ(c)Γ(b - a)/Γ(b)Γ(c-a)(-z)^-a _2F_1(a, 1 - c + a; 1 - b + a; 1/z)
+ Γ(c)Γ(a - b)/Γ(a)Γ(c-b)(-z)^-b _2F_1(b, 1 - c + b; 1 - a + b; 1/z)
we rewrite (<ref>) as follows
z^p𝐊_N = Γ(d/2 - 1)Γ(1 + d/2 - p)/2π^d/2 + 1Γ(d/2 - p)z^d - p - 2/X^d - 2 _2F_1(1, 1 - d/2 + p; 2 - d/2; - X^2/z^2)
+ (-1)^d/2Γ(1 + d/2 - p)2^Δ - d/2π^d/2Γ(1 - p)lim_z'→ 0(σ z')^-p
and similarly (<ref>) is written as
z^p𝐊_n N = Γ(d/2 - 1)Γ(1 - d/2 + p)z^p-2/2π^d/2 + 1X^d-2 _2F_1(1, 1 + d/2 - p; 2 - d/2; - X^2/z^2)
+ (-1)^d/2Γ(1 - d/2 + p)2^p - d/Γ(1 - d + p)2π^d/2lim_z' → 0(σ z')^p - d
Therefore, we note that the AdS-covariant piece in the respective kernels are the second terms in (<ref>) and (<ref>). It is straightforward to see that there are no common terms between the first and second pieces of either of the above two expressions.
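The connection formula (<ref>) that underlies this splitting can itself be spot-checked numerically; a small sketch with generic parameters (our choice, with a - b not an integer):

```python
# Spot-check of the 2F1 connection formula used to split off the covariant piece.
from mpmath import mp, hyp2f1, gamma, mpf

mp.dps = 20
a, b, c, w = mpf('0.7'), mpf('1.3'), mpf('2.2'), mpf('-3.5')
lhs = hyp2f1(a, b, c, w)
rhs = (gamma(c)*gamma(b - a)/(gamma(b)*gamma(c - a)) * (-w)**(-a)
       * hyp2f1(a, 1 - c + a, 1 - b + a, 1/w)
       + gamma(c)*gamma(a - b)/(gamma(a)*gamma(c - b)) * (-w)**(-b)
       * hyp2f1(b, 1 - c + b, 1 - a + b, 1/w))
print(lhs, rhs)   # agree to working precision
```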
The first terms in (<ref>) and (<ref>) only contain terms proportional to z^d - p - 2 - 2 n and z^p - 2 - 2n respectively. Such terms do not arise in the respective modes in (<ref>). This implies that these terms must vanish when integrated against the corresponding boundary field.
A plausible prescription for dropping these extra terms was discussed in Appendix D of <cit.>. In this case, the same can be employed by noting that there is a pole/branch-point at X = 0 in the first terms of both (<ref>) and (<ref>) (since d is odd). With this, we can write the two kernels as
z^p𝐊_N = (-1)^d-1/2Γ(1 + d/2 - p)2^-p/2π^d/2Γ(1 - p)lim_z'→ 0(σ z')^-p
z^p𝐊_n N = (-1)^d-1/2Γ(1 - d/2 + p)2^p - d/2π^d/2Γ(1 - d + p)lim_z' → 0(σ z')^p - d
Therefore, we arrive at the AdS-covariant form of the bulk reconstruction kernels for the normalizable and non-normalizable modes of the massless p-form field. Formally, these kernels vanish, since the coefficient is 0 for integer values of p. Nevertheless, this term must survive, since it carries the correct exponents of z. Because the coefficient of the kernel is not unique, there are various possible arguments to recover a non-vanishing coefficient for it. As we shall see, the Green's function approach reproduces the same results. We also mention that a similar problem was encountered in <cit.>, where the denominator of the smearing function turned out to be divergent for a massive scalar with Δ≤ d-1. Note that we have the same issue for free p-forms, whose conformal dimension is Δ=d-p. To avoid this divergence, one must follow the prescription given in appendix (A) of <cit.> and obtain a lightlike smearing function, which is expected to be a delta function for p>1. We hope to investigate this in more detail in the future.
§.§ Kernels under Hodge dual transformation
In d+1 dimensions, a p-form theory is Hodge dual to a (d-p-1)-form theory. As an example,
a massless vector in three dimensions can be dualized to a scalar by the relation
F_αβ∼ϵ_αβ^ γ∂_γϕ.
This duality equivalence holds at the classical level; however, it is not obvious that it holds at the quantum level. It has been observed that p-forms which are Hodge dual to each other do not have the same logarithmic divergence in the partition function <cit.>. Therefore, it is worth checking the properties of the kernel of p-forms under Hodge duality. Under the Hodge-dual transformation, p→ (d-p-1) and Δ→ (d + 1-Δ).
Under the Hodge duality, we find that the kernels take the following form
𝐊_N →𝐊̃_N = z^2p + 2 -d Γ(d/2)/2π^d/2 + 1X^d _2F_1(d/2, 1; 2-d/2 +p; - z^2/X^2)
𝐊_nN →𝐊̃_nN = Γ(d/2)/2π^d/2 + 1X^d _2F_1(d/2, 1; - d/2 + Δ; - z^2/X^2)
Comparing these expressions to the kernels derived in (<ref>) and (<ref>), we find that the kernels are identical for d + 1 = 2Δ, which is the self-duality condition 2p = d - 1. In general, however, the kernels differ under the Hodge-dual transformation. From plot (<ref>), we observe that the difference between the kernels of the 2-form and 3-form and their Hodge duals vanishes only at the self-dual point. Since the duality relation holds at the classical level, it is not obvious that a similar statement works for the smearing functions. It would nevertheless be interesting to reproduce the mismatch in the boundary partition function from the difference of the smearing functions under the Hodge-dual transformation.
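This statement is easy to reproduce numerically. The following sketch (with d = 5, z and X as our illustrative choices) evaluates the normalizable kernel and its Hodge dual; the difference vanishes identically only at the self-dual point p = 2:

```python
# Compare K_N with its Hodge dual (p -> d - p - 1, i.e. Delta -> d + 1 - Delta).
from mpmath import hyp2f1, gamma, pi, mpf

def K_N(p, d, z, X):
    Delta = d - p
    pref = gamma(d/mpf(2)) / (2 * pi**(d/mpf(2) + 1) * X**d)
    return pref * z**(Delta - p) * hyp2f1(d/mpf(2), 1, 1 - d/mpf(2) + Delta, -z**2/X**2)

def K_N_dual(p, d, z, X):
    return K_N(d - p - 1, d, z, X)

d, z, X = 5, mpf('0.3'), mpf('1.0')
for p in (1, 2, 3):                  # p = 2 is the self-dual point for d = 5
    print(p, K_N(p, d, z, X) - K_N_dual(p, d, z, X))
```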
§.§ Spacelike kernel
In this section, we restrict the kernels to the spacelike region of the bulk point at which the field is reconstructed. This is expected since the HKLL bulk reconstruction is a causal wedge reconstruction procedure. We begin by focusing on the kernel for the normalizable mode (<ref>). The bulk field constructed from this kernel at a bulk point P is given as (the lim_z'→ 0 is implicit)
Ψ_J(z, x; x') = ∫dt'dx⃗' α_d(σ z')^-pϕ_J^(0)(x')
where we denote α_d = (-1)^d/2Γ(1 + d/2 - p)2^Δ - d/2π^d/2Γ(1 - p). Here we define Ψ_J = z^pA_J. The integral is over the full AdS boundary, and we seek to restrict it to the spacelike separated region. For this purpose, we introduce the following redefinition of the boundary fields ϕ_J, 0(x')
ϕ_J^(0)(x') = -e^-i π pϕ̃_J^(0)(x') future timelike region of P(𝐈)
ϕ̃_J^(0)(x') spacelike region of P (𝐈𝐈)
-e^i π pϕ̃_J^(0)(x') past timelike region of P(𝐈𝐈𝐈)
With this redefinition, we write the bulk field as
Ψ_J(z, x; x') = ∫dt'dx⃗' α_d|σ z'|^-p -e^-i π pϕ̃_J^(0)(x') future timelike region of P(𝐈)
ϕ̃_J^(0)(x') spacelike region of P (𝐈𝐈)
-e^i π pϕ̃_J^(0)(x') past timelike region of P(𝐈𝐈𝐈)
For a diagrammatic representation of the different regions within the Poincaré patch, see Fig. <ref>.
To this field Ψ_J, one can add the following term
F = (1/2 z(z^2 + |x⃗ - x⃗'⃗|^2 - (t' - t - iϵ)^2))^-p
This term vanishes when integrated against the boundary field. In the future and past timelike regions, this term picks up the following phases
F = e^-i π p|σ z'|^-p future timelike region of P(𝐈)
|σ z'|^-p spacelike region of P (𝐈𝐈)
e^i π p|σ z'|^-p past timelike region of P(𝐈𝐈𝐈)
This term, when added to the kernel, cancels out in the past and future timelike regions and gives a factor of 2 in the spacelike region. The redefined kernel is as follows
𝐊_N→𝐊̃_N = 𝐊_N + α_dF
With this redefinition, the bulk normalizable field can be written as
Ψ_J(z, x; x') = 2∫dt'dx⃗' α_d(σ z')^-pθ(spacelike)ϕ_J^(0)(x')
Thus the kernel can be read off as
𝐊̃_N = (-1)^d/2Γ(1 + d/2 - p)2^-p/π^d/2Γ(1 - p)lim_z' → 0(σ z')^-pθ(spacelike)
where we have used the value of α_d and reintroduced the limit. The non-normalizable mode can be studied in a similar way; the bulk non-normalizable mode is written as (with the non-normalizable kernel from (<ref>))
Ψ_J(z, x) = ∫dt' d^d-1x⃗' β_d(σ z')^p-dj_J^(0)(x')
where β_d = (-1)^d/2Γ(1 - d/2 + p)2^p - d/2π^d/2Γ(1 - d + p). The boundary field j_J^(0) can be redefined in a similar manner:
j_J^(0) = e^iπ pj̃_J^(0)(x') future timelike region of P(𝐈)
j̃_J^(0)(x') spacelike region of P (𝐈𝐈)
e^-iπ pj̃_J^(0)(x') past timelike region of P(𝐈𝐈𝐈)
With this redefinition, the bulk non-normalizable mode is written as
Ψ_J(z, x; x') = ∫dt' d^dx⃗' β_d(σ z')^p-d e^iπ pj̃_J^(0)(x') future timelike region of P(𝐈)
j̃_J^(0)(x') spacelike region of P (𝐈𝐈)
e^-iπ pj̃_J^(0)(x') past timelike region of P(𝐈𝐈𝐈)
To this kernel, we add the following function
F' = (1/2 z(z^2 + |x⃗ - x⃗'⃗|^2 - (t' - t - iϵ)^2))^p-d
This function takes the following form in the timelike and spacelike regions
F' = -e^i π p|σ z'|^p-d future timelike region of P(𝐈)
|σ z'|^p-d spacelike region of P (𝐈𝐈)
-e^-i π p|σ z'|^p-d past timelike region of P(𝐈𝐈𝐈)
This function can be added to the kernel 𝐊_n N to redefine the kernel as follows
𝐊_n N→𝐊̃_n N = 𝐊_n N + β_dF'
This kernel, therefore, takes the following spacelike form
𝐊̃_n N = (-1)^d/2Γ(1 - d/2 + p)2^p - d/π^d/2Γ(1 - d + p)lim_z' → 0(σ z')^p-dθ(spacelike)
where we have used the value of β_d and reintroduced the limit. Therefore we restrict the mode-sum kernels to the spacelike region of the bulk point.
§ GRAVITON
We now focus our attention on deriving the HKLL kernels corresponding to the normalizable and non-normalizable modes of the graviton. In gravity, the gauge redundancy is realized through diffeomorphisms. The Einstein-Hilbert action is given by
S =1/4G_N∫ d^d+1x√(-g)(R-2Λ), Λ=-d(d-1)/2.
We work to linearized order in metric perturbations around the AdS background
g_MN=g̅_MN+h_MN.
The bulk equation of linearized graviton in AdS space is obtained as <cit.>
∇_L∇^Lh_MN-2∇_L∇_Mh^L_ N+∇_M∇_N h^L_ L-2dh_MN=0.
We will obtain the bulk equation of the linearized graviton on the Poincaré patch of AdS.
§.§ Bulk wave equation
We wish to evaluate the bulk equation corresponding to the transverse traceless degrees of freedom. Therefore, we impose the following gauge restrictions
h^α_α=0, ∂_μh^μ_ν=0.
These gauge conditions will imply the conservation of currents and the tracelessness of the stress tensor
∂_μT^μν=0, T^μ_μ=0.
We also work with the holographic gauge to remove residual gauge invariances <cit.>
h_zz=0, h_zμ=0.
We now impose the gauge conditions and obtain the bulk wave equation
of the graviton in the Poincaré patch of AdS
∂_α∂^α h_μν + ∂^2_zh_μν + 5-d/z∂_zh_μν - 2(d-2)/z^2h_μν = 0
At this stage it is convenient to substitute Φ_μν = z^2 h_μν and the rescaled bulk equation becomes
∂_α∂^αΦ_μν + z^d-1∂_z(z^1-d∂_zΦ_μν) = 0
We will later show that this rescaled wave equation admits AdS covariant form.
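The rescaling can be checked symbolically; a minimal sympy sketch (comparing only the z-derivative parts, since the boundary Laplacian term is common to both forms):

```python
# Check that h = z**(-2)*Phi maps the graviton equation to the rescaled form
#   box Phi + z^(d-1) d_z( z^(1-d) d_z Phi ) = 0.
import sympy as sp

z, d = sp.symbols('z d', positive=True)
Phi = sp.Function('Phi')

h = z**(-2) * Phi(z)
lhs = sp.diff(h, z, 2) + (5 - d)/z*sp.diff(h, z) - 2*(d - 2)/z**2*h
rhs = z**(-2) * z**(d - 1) * sp.diff(z**(1 - d)*sp.diff(Phi(z), z), z)
print(sp.simplify(lhs - rhs))   # -> 0
```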
The solution to the wave equation in this background is obtained as
Φ_μν(z, x) = ∫_|q| ≥ 0a_μν(q)z^d/2J_d/2(q z)e^i q . xd^dq/(2π)^d +∫_|q| ≥ 0b_μν(q)z^d/2Y_d/2(q z)e^i q . xd^dq/(2π)^d
Therefore, the mode expansion of the bulk field h_μν(z,x) can be expressed as
h_μν(z, x)= ∫_|q| ≥ 0a_μν(q)z^d/2-2J_d/2(q z)e^i q . xd^dq/(2π)^d +∫_|q| ≥ 0b_μν(q)z^d/2-2Y_d/2(q z)e^i q . xd^dq/(2π)^d
Here x≡ (t,x⃗) and q ≡ (ω, k⃗) with q=√(ω^2-|k|^2). The near-boundary behavior of the solution can be extracted from the expansion of the Bessel functions at z=0. For now, we focus on the case where d/2-2 is not an integer. This corresponds to the case where d is odd, i.e., even AdS.
§.§ Even AdS
In even dimensional AdS[The odd AdS case is discussed in Appendix <ref>], there are two independent solutions to the bulk wave equation, J_ν(|q|z) and J_-ν(|q|z), due to the relation given in (<ref>). Therefore the solution to the wave equation in this background can be expressed as
h_μν(z, x) = ∫_|q| ≥ 0d^dq/(2π)^dz^d/2-2[a_μν(q)J_d/2(q z)+b_μν(q)J_-d/2(qz)]e^i q . x
From the above solution, we read off the normalizable and non-normalizable mode solutions for the case of even AdS.
h_μν(z,x)|_N = ∫_|q| ≥ 0a_μν(q)z^d/2-2J_d/2(q z)e^i q . xd^dq/(2π)^d = ∑_k = 0^∞z^d + 2 k-2ϕ^(k)_μν(x)
h_μν(z,x)|_n-N = ∫_|q| ≥ 0b_μν(q)z^d/2-2J_-d/2(q z)e^i q . xd^dq/(2π)^d = ∑_k=0^∞z^2 k-2j^(k)_μν(x)
The coefficients at each order can be expressed
ϕ^(k)_μν(x) = (-1)^k2^-2k - d/2/Γ(k+1)Γ(k + d/2+1)∫_q ≥ 0a_μν(q)q^2 k + d/2e^i q. xd^dq/(2π)^d
j^(k)_μν(x) = (-1)^k2^-2k + d/2/Γ(k+1)Γ(k - d/2+1)∫_q ≥ 0b_μν(q)q^2 k - d/2e^i q. xd^dq/(2π)^d
§.§ Kernels as mode-sum integrals
The integrals (<ref>) can be inverted to obtain a_μν(|q|) and b_μν(|q|). To do that, one can pick two independent pieces of data from the expansion coefficients of the two modes. We choose the coefficients corresponding to k=0 and interpret them as holographic data.
ϕ^(0)_μν(x) = 2^-d/2/Γ(d/2+1)∫_|q| ≥ 0 a_μν(q)q^d/2e^i q. xd^dq/(2π)^d
j^(0)_μν(x) = 2^d/2/Γ(1-d/2)∫_|q| ≥ 0 b_μν(q)q^-d/2e^i q. xd^dq/(2π)^d
Inverting this relation, we obtain
∫ϕ^(0)_μν(x)e^-i k. xd^dx =2^-d/2/Γ(d/2+1)a_μν(k)k^d/2
Inserting this expression for a_μν(q) into the expression (<ref>), we find the following expression
h_μν(z, x)|_N = z^d/2-22^d/2Γ(d/2+1)∫_|q|≥ 0J_d/2(q z)/q^d/2e^i q. (x - x')ϕ^(0)_μν(x')d^dq d^dx'/(2π)^d
The kernel is defined as
h_μν(z, x)|_N = ∫𝐊_N(z, x; x') ϕ^(0)_μν(x')d^dx'
From this, we can read off the kernel for the normalizable mode
𝐊_N = z^d/2-22^d/2Γ(d/2+1)∫_|q| ≥ 0q^-d/2J_d/2(q z)e^i q. (x - x')d^dq/(2π)^d
Let us now focus on the non-normalizable mode.
j^(0)_μν(x) = 2^d/2/Γ(1-d/2)∫_|q| ≥ 0q^-d/2b_μν(q)e^i q.xd^dq/(2π)^d
Inverting this relation, we get
∫ j^(0)_μν(x)e^-i k. xd^dx = 2^d/2/Γ(1-d/2)k^-d/2b_μν(k)
Plugging this expression for b_μν(q) into (<ref>), we find the following expression
h_μν(z, x)|_n-N = z^d/2-22^-d/2Γ(1-d/2)∫_|q| ≥ 0 J_-d/2(q z)q^d/2e^i q. (x - x')j^(0)_μν(x')d^dq d^dx'/(2π)^d
From this, we can read off the kernel expression as
𝐊_nN = z^d/2-22^-d/2Γ(1-d/2)∫_|q| ≥ 0J_-d/2(|q| z)|q|^d/2e^i q.(x-x')d^dq/(2π)^d
§.§ Explicit evaluation of the Poincaré kernel integrals
In this section, we evaluate the kernels (<ref>) and (<ref>) in the Poincaré patch of the even-dimensional AdS space. We use the integral given in (<ref>)
∫_|q| ≥ 0d^dq/(2π)^d|q|^μζ_ν(q z)e^i q. (x - x') = 1/π(2π)^d/2X^d/2 - 1∫_x = 0^∞x^μ + d/2ζ_ν(x z)K_d/2-1(x X)dx
where ζ is some Bessel function, while μ and ν are complex numbers. We begin by considering (<ref>), where μ = -d/2 and ζ_ν(qz) = J_d/2(qz). This gives us the following integral
𝐊_N =2^d/2Γ(d/2+1)/π (2π)^d/2z^d/2-2/X^d/2 - 1∫_x = 0^∞J_d/2(z x)K_d/2-1(x X)dx
To evaluate this integral, we need to use the following identity
∫_0^∞x^-λK_μ(a x)J_ν(b x) = b^νΓ(ν -λ + μ + 1/2)Γ(ν -λ - μ + 1/2)/2^λ + 1Γ(ν + 1)a^-λ + ν + 1
× _2F_1(ν -λ + μ + 1/2, ν -λ - μ + 1/2; ν + 1; -b^2/a^2)
which holds for Re(ν + 1 -λ) > |Re(μ)| and Re(a ± i b)>0. In the integral (<ref>), we have μ = d/2-1, ν = d/2 and λ = 0. Thus, with a = X and b = z, we have
𝐊_N = Γ(d/2)/2 π^d/2 + 1z^d-2/X^d _2F_1(d/2, 1; d/2 + 1; -z^2/X^2)
We now turn to the non-normalizable kernel integral (<ref>). From the integral (<ref>), we identify ζ_ν(qz)= J_-d/2(qz), ν = -d/2 and μ = d/2. Therefore, the integral becomes
𝐊_nN = 2^-d/2Γ(1-d/2)/π (2π)^d/2z^d/2-2/X^d/2 - 1∫_x = 0^∞x^dJ_-d/2(z x)K_d/2-1(x X)dx
To evaluate this integral, we use the identity (<ref>) again; the conditions are automatically satisfied. This gives the following expression (with λ = -d, μ = d/2-1, ν = -d/2, a = X and b = z)
𝐊_nN = Γ(d/2)z^-2/2 π^d/2 + 1X^d _2F_1(1, d/2; 1 - d/2; -z^2/X^2)
§.§ Covariant part of the kernels
In this section, we use some transformation properties of the hypergeometric _2F_1 function in order to extract the AdS-covariant piece from the kernels for the normalizable and non-normalizable modes.
The particular transformation that we utilize is the following
_2F_1(a, b; c; z) = Γ(c)Γ(b - a)/Γ(b)Γ(c-a)(-z)^-a _2F_1(a, 1 - c + a; 1 - b + a; 1/z)
+ Γ(c)Γ(a - b)/Γ(a)Γ(c-b)(-z)^-b _2F_1(b, 1 - c + b; 1 - a + b; 1/z)
This lets us rewrite the kernel expression(<ref>) as
z^2𝐊_N = Γ(d/2 + 1)Γ(d/2 - 1)/Γ(d/2)2 π^d/2 + 1z^d-2/X^d-2 _2F_1( 1, 1 - d/2; 2 - d/2; - X^2/z^2) + Γ(d/2)Γ(d/2+1)Γ(1 - d/2)/2π^d/2 + 1
And similarly (<ref>) as
z^2𝐊_n N = Γ(1 - d/2)Γ(d/2 - 1)z^-2/2 π^d/2 + 1 X^d - 2 _2F_1(1; 1 + d/2; 2 - d/2; - X^2/z^2)+ (-1)^d/2 - 12^-dΓ(1 - d/2)/2 π^d/2Γ(1 - d)lim_z'→ 0(σ z')^-d
As can be seen from these expressions, the second terms in (<ref>) and (<ref>) are the AdS-covariant pieces with the correct exponents. The first terms in both these expressions do not contain any piece with the same exponent in z as the second term. Therefore these terms are distinct, and there is no mixing between them.
We note that the first term in (<ref>) only has terms proportional to z^d - 2 - 2n. Since such terms do not arise in (<ref>) (once the z^2 scaling is taken care of), this term should vanish when integrated against the boundary field. Similarly, the first term in (<ref>) only has terms proportional to z^-2 -2n. Terms like these do not arise in (<ref>), and therefore the first term in (<ref>) should also vanish when integrated against the boundary field.
By the prescription discussed in Appendix D of <cit.>, both of the first terms in (<ref>) and (<ref>) can be dropped since X = 0 is a pole/branch-point. Hence, with that, we are left with the following expressions of the kernels
z^2𝐊_N = Γ(d/2)Γ(d/2+1)Γ(1 - d/2)/2π^d/2 + 1
z^2𝐊_n N = (-1)^d/2-12^-dΓ(1 - d/2)/2 π^d/2Γ(1 - d)lim_z'→ 0(σ z')^-d
Therefore, we arrive at the expressions for the bulk reconstruction kernels corresponding to the normalizable and non-normalizable modes of the graviton.
§.§ Spacelike kernel
In this section, we restrict the mode-sum kernels to the spacelike region of the bulk point. We begin by considering the non-normalizable mode. The bulk non-normalizable mode reconstructed by the kernel is written as (the lim_z' → 0 is implicit)
Φ_μν(z, x; x') = ∫dt' d^d - 1x⃗'⃗ a_d(σ z')^-dj^(0)_μν(x')
where a_d = (-1)^d/2 - 12^-dΓ(1 - d/2)/2 π^d/2Γ(1 - d). To restrict this kernel to the spacelike separated boundary region of the bulk point P (where the bulk field is reconstructed), we redefine the boundary field as follows
j^(0)_μν(x') = -e^-i π dj̃^(0)_μν(x') future timelike region of P(𝐈)
j̃^(0)_μν(x') spacelike region of P (𝐈𝐈)
-e^i π dj̃^(0)_μν(x') past timelike region of P(𝐈𝐈𝐈)
However, since d is odd, the overall coefficient is +1, and so the redefined field is identical to the original field, j^(0)_μν(x') = j̃^(0)_μν(x'). To the kernel z^2𝐊_n N = a_d|σ z'|^-d we add the following function
F = ( 1/2 z(z^2 + |x - x'|^2 - (t' - t - iϵ)^2))^-d
This vanishes when integrated against the boundary field j^(0)_μν(x').
In the timelike and spacelike region of P, this function takes the following form
F = -|σ z'|^-d future timelike region of P(𝐈)
|σ z'|^-d spacelike region of P (𝐈𝐈)
-|σ z'|^-d past timelike region of P(𝐈𝐈𝐈)
It follows from this expression that the redefined kernel
𝐊̃_n N = 𝐊_n N + a_dF
vanishes in the timelike region and gives a factor of 2 in the spacelike region of P. Hence the effective non-normalizable kernel is as follows
𝐊̃_n N = (-1)^d/2 - 12^-dΓ(1 - d/2)/π^d/2Γ(1 - d)lim_z' → 0(σ z')^-dθ(spacelike)
where we have used the value of a_d and reintroduced the limit. Now we consider the normalizable mode. The normalizable bulk field is written as
Φ_μν(z, x; x') = ∫dt'd^d - 1x⃗'b_dϕ^(0)_μν(x')
where b_d = Γ(d/2)Γ(d/2+1)Γ(1 - d/2)/2π^d/2 + 1. We redefine the boundary field as follows
ϕ^(0)_μν(x') = -ϕ̃^(0)_μν(x') future timelike region of P(𝐈)
ϕ̃^(0)_μν(x') spacelike region of P (𝐈𝐈)
-ϕ̃^(0)_μν(x') past timelike region of P(𝐈𝐈𝐈)
Therefore the bulk normalizable mode is written as
Φ_μν(z, x; x') = ∫dt' d^d - 1x⃗' b_d -ϕ̃^(0)_μν(x') future timelike region of P(𝐈)
ϕ̃^(0)_μν(x') spacelike region of P (𝐈𝐈)
-ϕ̃^(0)_μν(x') past timelike region of P(𝐈𝐈𝐈)
Now we consider the function
F' = lim_δ→ 0(1/2 z(z^2 + |x - x'|^2 - (t' - t - iϵ)^2))^δ
This function vanishes when integrated against the boundary field ϕ^(0)_μν(x'). It does not pick up any extra phase in the timelike region. Therefore, adding F' to the normalizable kernel gives us the following kernel
𝐊̃_N = 𝐊_N + b_dF'
This cancels out the kernel in the timelike region and gives a factor of 2 in the spacelike region. The effective kernel is then given by
𝐊̃_N = Γ(d/2)Γ(d/2+1)Γ(1 - d/2)/π^d/2 + 1lim_z' → 0θ(spacelike)
where we have used the value of b_d and reintroduced the limit. Thus, we attain the spacelike kernels via the mode-sum approach.
§ GREEN'S FUNCTION APPROACH
We begin with the differential equations for the p- form and the graviton. The equations are written below as
z^2∂^2_zA_J + (2 p - d + 1)z∂_zA_J + z^2 ∂_α∂^αA_J = 0
∂_α∂^α h_μν + ∂^2_zh_μν + 5-d/z∂_zh_μν - 2(d-2)/z^2h_μν = 0
One does not expect the kernels obtained directly from these equations to be AdS covariant, because these wave equations cannot be cast solely in terms of the AdS-covariant distance σ. However, it is possible to rescale the fields by a power of z so that the resulting equations are AdS covariant: for the p-form we take Ψ_J = z^p A_J, and for the graviton Φ_μν = z^2 h_μν. The wave equations respectively become
z^2 ∂^2_zΨ_J + z^2∂_α∂^αΨ_J + (1 - d)z ∂_z Ψ_J + p (d - p)Ψ_J = 0, Ψ_J = z^pA_J
z^2 ∂^2_zΦ_μν + (1 - d)z∂_zΦ_μν + z^2 ∂_α∂^αΦ_μν = 0, Φ_μν = z^2 h_μν
With this redefinition, the equation for Ψ_J and Φ_μν can be shown to be equivalent to an ODE in terms of AdS chordal length σ. To do this, we consider Euclidean AdS and note the following relations
∂𝒞/∂ z = (1/z' - σ/z)∂𝒞/∂σ
∂^2𝒞/∂ z^2 = ( 2 σ/z^2 - 1/z z')∂𝒞/∂σ + ( 1/z' - σ/z)^2∂^2𝒞/∂σ^2
∂^2𝒞/∂ x^2 = ( 2 σ/z z' - 1/z^2 - 1/z'^2) ∂^2𝒞/∂σ^2 + d/z z'∂𝒞/∂σ
where 𝒞 stands for either Ψ_J or Φ_μν. Using these relations, we find that the two equations reduce to
(σ^2 - 1)d^2 Ψ_J/d σ^2 + (1 + d)σd Ψ_J/d σ + p (d - p)Ψ_J = 0
(σ^2 - 1)d^2 Φ_μν/d σ^2 + (1 + d)σd Φ_μν/d σ = 0
The most general solution to an equation of the form
(σ^2 - 1)G_E” + (1 + d)σ G_E' + Δ ( d - Δ)G_E = 0
is given by
G_E(σ) = (σ^2 - 1)^-μ/2( c_1𝐏^μ_ν(σ) + c_2𝐐^μ_ν(σ))
where 𝐏^μ_ν and 𝐐^μ_ν are Legendre functions of type 3 <cit.>, with the constants μ = (d - 1)/2 and ν = Δ - (d + 1)/2. To obtain the function on the spacelike cut, we demand that its real part vanishes in the timelike region. In that region, one obtains the following form for the Lorentzian Green's function, defined via the analytic continuation G_M(σ) ≡ i G_E(σ + iϵ)
G_M(σ) = i c_1(-1)^μ(1 - σ^2)^-μ/2P^μ_ν(σ) + i c_2 (-1)^μ(1 - σ^2)^-μ/2[Q^μ_ν(σ) - i π/2P^μ_ν(σ)]
This gives us the following relation for the coefficients c_1 = i π/2c_2. The functions P and Q are the ordinary Legendre functions. Therefore, the function takes the following form in the spacelike region
ℜG_M(σ) = -c_2[π/2(σ^2 - 1)^-μ/2𝐏^μ_ν(σ)]θ(spacelike)
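One can check numerically that this chordal ansatz indeed solves the equation above. A sketch with illustrative parameter values (d = 5, Δ = 2.3 are our own choices), using the type-3 Legendre function valid for σ > 1:

```python
# Verify that G(s) = (s^2-1)^(-mu/2) P^mu_nu(s) solves
#   (s^2-1) G'' + (1+d) s G' + Delta (d - Delta) G = 0,
# with mu = (d-1)/2 and nu = Delta - (d+1)/2.
from mpmath import mp, legenp, diff, mpf

mp.dps = 25
d, Delta = 5, mpf('2.3')                      # sample values (our choice)
mu, nu = (d - 1)/mpf(2), Delta - (d + 1)/mpf(2)
G = lambda s: (s**2 - 1)**(-mu/2) * legenp(nu, mu, s, type=3)
s0 = mpf('1.7')
res = ((s0**2 - 1)*diff(G, s0, 2) + (1 + d)*s0*diff(G, s0, 1)
       + Delta*(d - Delta)*G(s0))
print(res)   # ~ 0 to working precision
```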
We ignore the divergent piece that arises from 𝐐 on the light-cone σ = 1, since it only plays a role in the presence of interactions (which we do not consider here). The value of c_2 is fixed via the short-distance behavior of G_E(σ) in the flat-space limit. The result is independent of the tensor structure of the bulk field and depends only on the spacetime dimension. As an example, this is discussed for the graviton in the following section, where the value (<ref>) is obtained; that is the value we substitute for c_2 in this section as well.
The next step is then to insert this Green's function into Green's theorem to obtain the kernels for the p-form and the graviton (which correspond to Δ = p and Δ = 0 respectively). To proceed, we first consider the p-form field.
§.§ p- form
We recall the Green's function for the p-form normalizable mode (<ref>). In this case, we have the parameters μ = (d - 1)/2 and ν = p - (d + 1)/2. Green's theorem is given by the expression
Ψ_J(z, x) = ∫d^dx' √(γ')(Ψ(z', x')∂_z'G(σ) - G(σ)∂_z'Ψ(z', x'))|_z' → 0
With the appropriate Green's function, the theorem can be used to reconstruct any field (since it is an identity). We refer to the specific form (<ref>) since we are interested in chordal Green's functions and the kernels obtained from them. The final ingredient we require is the series expansion of the field Ψ_J. For this, we use (<ref>) together with the scaling of Ψ_J with respect to A_J. This gives us the following series
Ψ_J(z, x) = ∑_n = 0^∞(z^2 n - 2 ν - 1 + pϕ_J^(n)(x) + z^2 n + pj_J^(n)(x))
Now we have to split the analysis into two cases, corresponding to the range of p with respect to d. We first consider ν > 0. In terms of p and d, this condition is p > (d + 1)/2, i.e., above the self-dual point (which we denote by p_0 = (d - 1)/2). In this case, the z' → 0 limits of the Green's function and its derivative are given by
G(σ)|_z' → 0 = 2^-νσ^ν - μΓ(2 ν + 1)/Γ(1 + ν)Γ(1 - μ + ν)
∂_z'G(σ)|_z' → 0 = -2^-νσ^ν - μΓ(2 ν + 1)(ν - μ)/Γ(1 + ν)Γ(1 - μ + ν)z'
And correspondingly, the leading order contributions of the field Ψ_J are given by
Ψ_J(z',x')|_z' → 0 = z'^p - 2ν - 1ϕ_J^(0)(x') + z'^pj_J^(0)(x')
∂_z'Ψ_J(z',x')|_z' → 0 = (p - 2ν - 1) z'^p - 2ν - 2ϕ_J^(0)(x') + p z'^p-1j_J^(0)(x')
Plugging (<ref>) - (<ref>) into (<ref>), we find the following result
Ψ_J(z,x) = ∫d^dx' z'^-d + 1{ -(z'^p - 2ν - 1ϕ_J^(0)(x') + z'^pj_J^(0)(x'))2^-νσ^ν - μΓ(2 ν + 1)(ν - μ)/Γ(1 + ν)Γ(1 - μ + ν)z'
- ((p - 2ν - 1) z'^p - 2ν - 2ϕ_J^(0)(x') + p z'^p-1j_J^(0)(x'))2^-νσ^ν - μΓ(2 ν + 1)/Γ(1 + ν)Γ(1 - μ + ν)}|_z' → 0
Combining the terms proportional to ϕ_J^(0)(x') and j_J^(0)(x') respectively, we obtain
Ψ_J(z,x) = ∫d^dx'{-2^-νΓ(2 ν + 1)(ν - μ + p)/Γ(1 + ν)Γ(1 - μ + ν)}z'^p - dσ^ν - μj_J^(0)(x')|_z' → 0
From this, we can read off the kernel as follows (here ± in the index stands for the ν > 0 and ν < 0 regimes, respectively)
𝐊̃_N, +(z,x;x')|_p > p_0 = -c_2π/22^p - (d - 1)/2Γ(-d/2+p+1)/√(π)Γ (-d+p+1)lim_z' → 0(σ z')^p - dθ(spacelike)
where the coefficient c_2 is introduced since the overall coefficient of the Green's function is not fixed yet. Also, the θ(spacelike) is re-introduced for clarity. Re-inserting the value of c_2
𝐊̃_N, +(z,x;x')|_p > p_0 = (-1)^d - 1/22^p-dΓ(p - d/2 + 1)/π^d/2Γ(p-d+1)lim_z' → 0(σ z')^p - dθ(spacelike)
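The substitution of c_2 here is pure gamma-function algebra and can be spot-checked numerically (we take a non-integer p, our own choice, to stay away from the poles of Γ(1 - d + p) discussed earlier):

```python
# Check: -c_2 * pi/2 * 2^(p-(d-1)/2) Gamma(p-d/2+1) / (sqrt(pi) Gamma(p-d+1))
#   equals  (-1)^((d-1)/2) 2^(p-d) Gamma(p-d/2+1) / (pi^(d/2) Gamma(p-d+1)).
from mpmath import mp, gamma, pi, sqrt, mpf

mp.dps = 25
d, p = 5, mpf('3.3')
c2 = (-1)**((d + 1)//2) / (2**((d - 1)/mpf(2)) * pi**((d + 1)/mpf(2)))
raw = (-c2*pi/2 * 2**(p - (d - 1)/mpf(2)) * gamma(p - d/mpf(2) + 1)
       / (sqrt(pi)*gamma(p - d + 1)))
quoted = ((-1)**((d - 1)//2) * 2**(p - d) * gamma(p - d/mpf(2) + 1)
          / (pi**(d/mpf(2))*gamma(p - d + 1)))
print(raw, quoted)   # agree
```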
For the case of ν < 0, the Green's function has the parameter ν replaced by -ν - 1[This is due to the fact that 𝐏^μ_ν(x) = 𝐏^μ_-ν - 1(x)]. This implies the following limiting cases
G(σ)|_z' → 0 = 2^ν + 1σ^-ν - μ - 1Γ(-2 ν - 1)/Γ(- ν)Γ( - μ - ν)
∂_z'G(σ)|_z' → 0 = -2^ν + 1σ^-ν - μ - 1Γ(-2 ν - 1)(-ν - μ - 1)/Γ(-ν)Γ(- μ - ν)z'
Plugging (<ref>)-(<ref>) and (<ref>)-(<ref>) into (<ref>), we obtain the following result.
Ψ_J(z,x) = ∫d^dx' z'^-d + 1{ -(z'^p - 2ν - 1ϕ_J^(0)(x') + z'^pj_J^(0)(x'))2^ν + 1σ^-ν - μ - 1Γ(-2 ν - 1)(-ν - μ - 1)/Γ(-ν)Γ(- μ - ν)z'
- ((p - 2ν - 1) z'^p - 2ν - 2ϕ_J^(0)(x') + p z'^p-1j_J^(0)(x'))2^ν + 1σ^-ν - μ - 1Γ(-2 ν - 1)/Γ(- ν)Γ( - μ - ν)}|_z' → 0
This expression reduces to
Ψ_J(z,x) = -∫d^dx' z'^-d + 1z'^p - 2 ν - 1 + d2^ν + 2(d/2 - p)σ^-ν - μ - 1Γ(-2 ν - 1)/Γ(- ν)Γ( - μ - ν)ϕ_J^(0)(x')|_z' → 0
The kernel can be read from this
𝐊̃_N, -(z,x;x')|_p < p_0 = -c_2π/22^(d-2 p+1)/2Γ(d/2-p + 1)/√(π)Γ (1-p)lim_z' → 0(σ z')^-pθ(spacelike)
Re-inserting the value of c_2, we recover the following expression
𝐊̃_N, -(z,x;x')|_p < p_0 = (-1)^d-1/22^-pΓ(d/2 - p + 1)/π^d/2Γ(1-p)lim_z' → 0(σ z')^-pθ(spacelike)
We note that 𝐊̃_N, ± are related to the original kernel 𝐊_N, ± by a factor of z^-p.
Now, we consider the Non-normalisable mode and attempt to reconstruct the bulk field via the appropriate Green's function. It is not known what the exact procedure is for deriving this Green's function, but a natural choice is a solution complementary to (<ref>), restricted to the spacelike region. That solution is the following
𝒢_M = c_3(σ^2 - 1)^-μ/2𝐐^μ_ν(σ) = 2^-pC_p/2 p - dσ^-p _2F_1(p/2, (p + 1)/2; p - d/2 + 1; 1/σ^2)
where 𝐐^μ_ν(σ) is the associated Legendre function of the second kind, which has the corresponding hypergeometric representation. The θ(spacelike) is implied.
In order to fix the constant C_p, we follow the Lorentzian generalization of the Euclidean argument in <cit.>. We note that in the z' → 0 limit, 𝒢_M behaves as
𝒢_M|_z' → 0 = 2^-pC_p/2 p - d(2 z' z/z^2 + z'^2 + | x - x' |^2)^p≡z'^p/2 p - dK_p(z, x; x')
The function K_p(z, x; x') is the bulk-boundary propagator, which is not the same (but is related to) the kernel. This propagator should have a δ function normalization, which we impose via the following integral
∫_spaceliked^dx K_p(z, x; x') = -C_pz^d-pπ^d/2 - 1Γ(1 - p)Γ(p - d/2)cosπ p
Setting this integral to 1 fixes the coefficient C_p = -Γ(p)tanπ p/Γ( p -d/2)π^d/2. Using this value of C_p, the non-normalizable Green's function is given by
𝒢_M = -2^-p-1Γ(p)tanπ p/Γ( p -d/2 + 1)π^d/2σ^-p _2F_1(p/2, (p + 1)/2; p -d/2 + 1;1/σ^2)θ(spacelike)
The leading order behaviour for 𝒢_M(σ) and ∂_z'𝒢_M(σ) at z' → 0 is given by
𝒢_M(σ)|_z' → 0 = -2^-p-1Γ(p)tanπ p/Γ( p -d/2 + 1)π^d/2σ^-p
∂_z'𝒢_M(σ)|_z' → 0 = -2^-p-1Γ(p + 1)tanπ p/Γ( p -d/2 + 1)π^d/2z'σ^-p
Plugging this in the Green's theorem with (<ref>)-(<ref>) gives us
Ψ_J(z,x) = ∫d^dx' z'^-d + 1{ -(z'^p - 2ν - 1ϕ_J^(0)(x') + z'^pj_J^(0)(x'))2^-p-1Γ(p + 1)tanπ p/Γ( p -d/2 + 1)π^d/2 z'σ^-p
+ ((p - 2ν - 1) z'^p - 2ν - 2ϕ_J^(0)(x') + p z'^p-1j_J^(0)(x'))2^-p-1Γ(p)tanπ p/Γ( p -d/2 + 1)π^d/2σ^-p}|_z' → 0
These terms combine in an appropriate fashion to give us the following expression
Ψ_J(z,x) = -∫d^dx' z'^-p2^-p-1Γ(p)tanπ p/Γ( p -d/2 + 1)π^d/2(2 p - d)σ^-pϕ_J^(0)(x')|_z'→ 0
From where we can read off the kernel as follows
𝐊̃_nN, + (z, x; x') = -2^-pΓ(p)tanπ p/Γ( p -d/2)π^d/2lim_z' → 0(σ z')^-pθ(spacelike)
Using identities involving the gamma function, one can show that the kernel is equivalent to
𝐊̃_nN, + (z, x; x') = (-1)^d-1/2Γ(1 + d/2 - p)2^-p/π^d/2Γ(1 - p)lim_z' → 0(σ z')^-pθ(spacelike)
Now, we consider the other case, ν < 0. This corresponds to replacing 𝐐^μ_ν with 𝐐^μ_-ν - 1, which is a valid operation since 𝐐^μ_-ν - 1 is the solution complementary to 𝐏^μ_-ν - 1. The Green's function is written as
𝒢_M = c'_3(σ^2 - 1)^-μ/2𝐐^μ_-ν - 1(σ) = -2^p - dC'_p/2 p - dσ^p - d _2F_1((d - p)/2, (d - p + 1)/2; 1 + d/2 - p;1/σ^2)
Normalising this function in an identical manner as the ν > 0 case gives us C'_p = -Γ(d - p)tanπ(d - p)/π^d/2Γ(d/2 - p). The full expression of the Greens' function becomes
𝒢_M = -2^p - d - 1Γ(d - p)tanπ(d - p)/π^d/2Γ(d/2 - p + 1)σ^p - d _2F_1((d - p)/2, (d - p + 1)/2; 1 + d/2 - p;1/σ^2)θ(spacelike)
Using this, we write the limiting expression of G(σ) and its' derivative as follows
𝒢_M(σ)|_z' → 0 = -2^p - d - 1Γ(d - p)tanπ(d - p)/π^d/2Γ(d/2 - p + 1)σ^p - d
∂_z'𝒢_M(σ)|_z' → 0 = -2^p - d - 1Γ(d - p + 1)tanπ(d - p)/π^d/2Γ(d/2 - p + 1)z'σ^p - d
Using these expressions along with (<ref>)-(<ref>) we obtain the result
Ψ_J(z,x) = ∫d^dx' z'^-d + 1{ -(z'^p - 2ν - 1ϕ_J^(0)(x') + z'^pj_J^(0)(x'))2^p - d - 1Γ(d - p + 1)tanπ(d - p)/π^d/2Γ(d/2 - p + 1)z'σ^p - d
+ ((p - 2ν - 1) z'^p - 2ν - 2ϕ_J^(0)(x') + p z'^p-1j_J^(0)(x'))2^p - d - 1Γ(d - p)tanπ(d - p)/π^d/2Γ(d/2 - p + 1)σ^p - d}|_z' → 0
This expression simplifies to the following
Ψ_J(z,x) = ∫ d^dx'(2 p - d)2^p - d - 1Γ(d - p)tanπ(d - p)/π^d/2Γ(d/2 - p + 1)(σ z')^p - dj_J^(0)(x')|_z' → 0
From this, we can read off the kernel as follows
𝐊̃_n N, -(z, x; x') = -2^p - dΓ(d - p)tanπ(d - p)/π^d/2Γ(d/2 - p)lim_z' → 0(σ z')^p - dθ(spacelike)
It can be seen that, via some properties of gamma functions, this expression reduces to
𝐊̃_n N, -(z, x; x') =(-1)^d-1/2Γ(1 - d/2 + p)2^p - d/π^d/2Γ(1 - d + p)lim_z' → 0(σ z')^p - dθ(spacelike)
Therefore we recover the AdS-covariant piece in the mode sum kernel via the Greens' function approach as well.
So far, we have dealt with either ν < 0 or ν > 0. Now we turn to the special case ν = 0, which sits at p = p_0 + 1. The special feature of the ν = 0 point is that the z' → 0 limits of the two Green's functions differ by a factor of σ. The z-dependence of the bulk solution can be obtained by setting ν = 0 in (<ref>). The two Green's functions respectively have the following leading-order behavior
G(σ)|_z' → 0 = (-1)^((d + 1)/2)/2^((d + 1)/2)π^((d - 1)/2)σ ^-μ/Γ (1-μ )θ(spacelike)
∂_z'G(σ)|_z' → 0 = (-1)^((d + 1)/2)μ/2^((d + 1)/2)π^((d - 1)/2)σ ^-μ/z' Γ (1-μ )θ(spacelike)
𝒢_M(σ)|_z' → 0 = -2^-p-1Γ(p)tanπ p/Γ( p -d/2 + 1)π^d/2σ^-pθ(spacelike)
∂_z'𝒢_M(σ)|_z' → 0 = -2^-p-1Γ(p + 1)tanπ p/z'Γ( p -d/2 + 1)π^d/2σ^-pθ(spacelike)
Plugging the Green's function for the normalisable mode G(σ) into Green's theorem along with (<ref>) gives the following expression (by using p = μ + 1)
Ψ_J(z, x; x') = (-1)^d + 1/2/2^d + 1/2π^d - 1/2∫d^dx' z'^-d + 1{(z'^μϕ_J^(0)(x') + z'^μ + 1j_J^(0)(x'))μσ ^-μ/z' Γ (1-μ )
-(μ z'^μ - 1ϕ_J^(0)(x')+ (μ + 1) z'^μj_J^(0)(x'))σ ^-μ/Γ (1-μ )}|_z' → 0
This expression simplifies to
Ψ_J(z, x) = -(-1)^((d + 1)/2)/2^((d + 1)/2)π^((d - 1)/2)∫d^dx⃗'⃗z'^-d + 1{ z'^μσ^-μ/Γ(1 - μ)j_J^(0)(x')}|_z' → 0
From this, the kernel can be read off as follows
𝐊̃_N, 0 = -(-1)^((d - 1)/2)/2^((d + 1)/2)π^((d - 1)/2)Γ((3 - d)/2)lim_z' → 0(σ z')^((1 - d)/2)θ(spacelike)
where the subscript 0 is to indicate the ν = 0 point. Turning to the Non-normalisable mode, we find the following expression by inserting the Green's function 𝒢_M(σ) into Green's theorem and using (<ref>).
Ψ_J(z, x; x') = ∫d^dx' z'^-d + 1{ -(z'^μϕ_J^(0)(x') + z'^μ + 1j_J^(0)(x'))2^-p-1Γ(p + 1)tanπ p/z'Γ( p -d/2 + 1)π^d/2σ^-p
+(μ z'^μ - 1ϕ_J^(0)(x') + (μ + 1) z'^μj_J^(0)(x'))2^-p-1Γ(p)tanπ p/Γ( p -d/2 + 1)π^d/2σ^-p}|_z' → 0
This expression simplifies to
Ψ_J(z, x; x') = ∫d^dx' z'^-d + 1{ z'^μ - 12^-p-1Γ(p + 1)tanπ p/z'Γ( p -d/2 + 1)π^d/2σ^-pϕ_J^(0)(x')}|_z' → 0
From this, we can read off the kernel
𝐊̃_n N, 0 = i^d+1 (2 π )^((1 - d)/2)/(d+1) Γ(-(d + 1)/2)lim_z' → 0(σ z')^-((d + 1)/2)θ(spacelike)
And so we derive the expressions for the normalizable and non-normalizable mode kernels at the ν = 0 point.
§.§ Graviton
To study the Green's function approach for the graviton, we consider the series expansion in z
Φ_μν(z,x) = ∑_k = 0^∞z^d + 2 kϕ^(k)_μν(x) + ∑_k=0^∞z^2 kj^(k)_μν(x)
The solution of the differential equation is
Φ_μν(σ) = c_1 + c_2 (σ^2 - 1)^-μ/2𝐐^μ_μ(σ)
Again, following the arguments of <cit.>, we find that the Green's function can be made spacelike for c_1 = -π c_2/2(2μ - 1)!!. Therefore, we note that the Green's function for the normalizable mode simply turns out to be the constant
G_M(σ) = -π c_2/2(2μ - 1)!!θ(spacelike)
Using this expression and (<ref>)
in Green's theorem, we obtain the following result
Φ_μν(z,x) = ∫d^dx' z'^-d + 1{ - c_1(d z'^d - 1ϕ^(0)_μν(x'))}
From which the kernel can be read off as
𝐊_N(z, x; x') = π c_2 d/2(2μ - 1)!!θ(spacelike)
Inserting the value of c_2 we find that
𝐊_N(z, x; x') = (-1)^d+1/2Γ(d/2+1)/π ^d/2θ(spacelike)
Now, we turn to the non-normalizable mode. As in the previous sections, we pick the following Green's function
𝒢_M(σ) = c_2 (σ^2 - 1)^-μ/2𝐐^μ_μ(σ) = -2^-d-1Γ(d)tanπ d/Γ(d/2 + 1)π^d/2σ^-d _2F_1(d/2, d + 1/2; d/2 + 1; 1/σ^2)
And in the z' → 0 limit, the Green's function and its derivative takes the form
𝒢_M(σ)|_z' → 0 = -2^-d-1Γ(d)tanπ d/Γ(d/2 + 1)π^d/2σ^-d
∂_z'𝒢_M(σ)|_z' → 0 = -2^-d-1Γ(d + 1)tanπ d/Γ(d/2 + 1)π^d/2 z'σ^-d
Plugging this into the Green's theorem with (<ref>), we find the expression
Φ_μν(z,x) = c_2∫d^dx' z'^-d + 1{
-(z'^d - 1ϕ^(0)_μν(x') + z'^-1j^(0)_μν(x') )2^-d-1Γ(d + 1)tanπ d/Γ(d/2 + 1)π^d/2σ^-d
+ d z'^d - 1ϕ^(0)_μν(x')2^-d-1Γ(d)tanπ d/Γ(d/2 + 1)π^d/2σ^-d}|_z' → 0
Upon simplification, this reduces to the following expression
Φ_μν(z,x) = -∫d^dx'2^-d-1Γ(d + 1)tanπ d/Γ(d/2 + 1)π^d/2 z'(σ z')^-d|_z' → 0j^(0)_μν(x')
And from this, the kernel can be read off as follows
𝐊_n N(z,x;x') = -2^-d-1Γ(d + 1)tanπ d/Γ(d/2 + 1)π^d/2 z'lim_z' → 0(σ z')^-dθ(spacelike)
And thus, we derive the expressions for the bulk reconstruction kernels for the normalizable and non-normalizable modes up to a constant factor. The constant can be determined from the short-distance behavior of the Euclidean Green's function. Note that we require transverse traceless modes at the boundary, whose two-point function is given by
⟨ h_μν(r) h_ρσ(0)⟩ =ℐ_μνρσG_E(r).
Here G_E(r) is the Euclidean scalar Green's function, and ℐ_μνρσ obeys the following properties.
ℐ_μνρσ =ℐ_ρσμν
ℐ^μ_ μρσ =ℐ_μνρ^ ρ=0
∂^μℐ_μνρσ =∂^νℐ_μνρσ=∂^ρℐ_μνρσ=∂^σℐ_μνρσ=0.
The tensor structure ℐ_μνρσ manifestly enforces transverse and traceless degrees of freedom at the boundary. The short-distance behavior of the Euclidean scalar Green's function is well known:
lim_r→ 0 G_E(r)∼-1/(d-1)vol(S^d)r^d-1.
Here r is the Euclidean radial coordinate and vol(S^d)=π^((d + 1)/2)/Γ((d + 1)/2). At short distances, the AdS chordal length becomes σ∼ 1+r^2/2 R^2. Therefore, the Euclidean scalar Green's function is determined with the appropriate delta function source at the origin, and the constant c_2 is evaluated <cit.>
c_2=(-1)^μ+1/2^μ-1(d-1)vol(S^d)Γ(μ)R^d-1 = (-1)^((d + 1)/2)/2^((d - 1)/2)π^((d + 1)/2).
Note that the constant is entirely determined from the scalar Green's function, and the tensor transformation properties of the graviton are encoded in ℐ_μνρσ.
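The equality of the two quoted forms of c_2 can be verified directly (with R = 1 and the normalization of vol(S^d) used here):

```python
# Check the two forms of c_2 agree for odd d (here d = 5 as an example).
from mpmath import gamma, pi, mpf

d = 5
mu = (d - 1)//2
vol = pi**((d + 1)/mpf(2)) / gamma((d + 1)/mpf(2))     # normalization used above
form1 = (-1)**(mu + 1) / (2**(mu - 1) * (d - 1) * vol * gamma(mu))
form2 = (-1)**((d + 1)//2) / (2**((d - 1)/mpf(2)) * pi**((d + 1)/mpf(2)))
print(form1, form2)   # equal
```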
§ SUMMARY OF THE MAIN RESULTS
At this point, it is useful to summarize the results for the p-form fields. So far, we have used the terms “normalizable” and “non-normalizable” as placeholders without really referring to their interpretations. The normalizable and non-normalizable modes are identified via the fall-off behavior in the holographic coordinate (z in Poincaré coordinates) near the boundary z = 0. We recall that the two modes in the mode expansion of the bulk p-form scale as (<ref>)
A_J(z,x) ∼ z^d - 2 pϕ^(0)_J(x) + j^(0)_J(x)
Above the self-dual point, we have p > (d + 1)/2. In this regime, the normalizable mode is identified as j^(0)_J(x), while the non-normalizable mode is identified as ϕ^(0)_J(x). This is due to the fact that the fall-off exponent for z is a negative integer < -1. This behavior is also picked up by the normalizable Green's function, as is seen from (<ref>) and (<ref>) respectively. Similarly, below the self-dual point, we have p < (d + 1)/2. In this regime, the normalizable mode is identified as ϕ^(0)_J(x) and the non-normalizable mode as j^(0)_J(x). Again, this is reflected in the Green's function approach via the equations (<ref>) and (<ref>), respectively.
The final case is ν = 0. At this point, we have p = (d + 1)/2, one step above the self-dual point. Here the fall-off of the mode corresponding to ϕ^(0)_J(x) is z^-1, which makes it non-normalizable, and consequently the normalizable mode is j^(0)_J(x). The same is once again reflected by the Green's function, as can be seen from (<ref>) and (<ref>), respectively. Since the scenario differs above and below the self-dual point, it is convenient to summarize the results in the table below.
p-value | Normalizable mode | Kernel (𝐊_N) | Non-normalizable mode | Kernel (𝐊_n N)
p > p_0 | j^(0)_J(x) | Eqn (<ref>) & (<ref>) | ϕ^(0)_J(x) | Eqn (<ref>) & (<ref>)
p ≤ p_0 | ϕ^(0)_J(x) | Eqn (<ref>) & (<ref>) | j^(0)_J(x) | Eqn (<ref>) & (<ref>)
Here we have indicated the equations where the respective (AdS-covariant) kernel expressions are written via the Green's function and mode-sum approaches; naturally, both approaches agree. In the first column we have replaced p ≥ (d + 1)/2 by p > p_0 and p < (d + 1)/2 by p ≤ p_0, where p_0 = (d - 1)/2 is the self-dual point. The rationale is that d is an odd integer, so (d + 1)/2 and (d - 1)/2 are both integers (as is p), separated by 1.
Turning to the graviton, we recall the following mode expansion of the bulk graviton field near z = 0
h_μν(z, x) ∼ z^d - 2ϕ^(0)_μν(x) + z^-2j^(0)_μν(x)
It is evident from the fall-off behavior of the two modes (except for d = 1, i.e., AdS_2, which we do not consider) that the normalizable mode is the one corresponding to ϕ^(0)_μν(x) and the non-normalizable mode is j^(0)_μν(x). The same is reflected by the Green's function approach, where the normalizable Green's function picks out ϕ^(0)_μν and vice versa. Naturally, the normalizable-mode kernels obtained via both methods (equations (<ref>) and (<ref>)) match, as do the non-normalizable-mode kernels (equations (<ref>) and (<ref>)).
Therefore, our results obtained via the mode sum and Green’s function methods agree for the normalizable and non-normalizable modes of the p- form and graviton fields.
§ CONCLUSIONS
In this paper, we focused on deriving spacelike kernels for both normalizable and non-normalizable modes of p-forms and the graviton. We derived these kernels using two different approaches – the mode-sum method and the Green's function method – and showed that the two approaches lead to the same spacelike kernels for both modes. We also studied the properties of these kernels under the Hodge-dual transformation and found a mismatch between a kernel and its Hodge dual. A similar mismatch has been found in the partition function of p-forms <cit.>. Since the duality holds at the classical level, it is not obvious that a similar statement works for the smearing functions. However, it will be interesting to reproduce the mismatch of free energies under the Hodge-dual transformation from the smearing functions.
In this work, we restricted ourselves to the Poincaré patch of AdS. We leave the derivation of the kernels in global coordinates for future work, where we wish to establish the connections between the kernels. It is important to note that the Poincaré kernels we obtained via mode sums have a natural AdS-covariant form, up to some extra terms. We wish to come up with a general argument for dropping those extra terms; in this regard, the iϵ prescription given in <cit.> will be useful.
It would also be interesting to develop the HKLL procedure for higher-derivative conformal fields and conformal higher-spin fields. These theories provide essential tools to study conformal field theories in higher dimensions, including free energies and conformal anomalies <cit.>. They are in general non-unitary due to the presence of higher derivatives in the kinetic term of the action, but it would be interesting to extract the boundary data using a bulk reconstruction procedure, in particular for the Weyl graviton and conformal higher-derivative gauge theories. It would likewise be interesting to reproduce the boundary two-point functions and the central charges of these non-unitary conformal field theories from the bulk.
It would also be interesting to develop the HKLL reconstruction procedure for scalar and fermionic fields coupled to gauge fields and gravity. Similar questions have already been addressed in <cit.>. One could obtain the smearing functions by solving for the Green's functions of the spatial equations and applying Green's theorem; one can likewise obtain the smearing functions using the mode-sum approach and check the equivalence of the two methods. It is important to have a general statement about the smearing functions when matter is coupled to gravity. Although there is significantly more gauge redundancy in the case of gravity, one can perform a perturbative analysis to obtain a perturbative explanation of holography.
Lastly, the underlying motivation for evaluating the HKLL kernels for these fields has been to develop a better understanding of the nature of solutions of wave equations. We expect this to teach us about bulk reconstruction in other geometries. One of the future goals would then be to extend this prescription to other geometries relevant to holography, such as Minkowski or de-Sitter spacetimes (for example, <cit.>).
We thank Chethan Krishnan and Debajyoti Sarkar for valuable discussions and comments regarding this manuscript. The authors acknowledge the hospitality of the International Center For Theoretical Physics (ICTP) for the duration of the Spring School on Superstring Theory, during which part of this work was completed. BB is partially supported by the Ministry of Human Resource Development, Govt. of India, through the Prime Minister's Research Fellowship.
§ DERIVATION OF BULK EQUATION OF P-FORMS IN POINCARÉ COORDINATES
In this appendix, we derive the bulk equation of free p-forms in the Poincaré patch.
The covariant equation of motion of p-forms is given by
∇_MF^M,J_1,…,J_p = 0
The field strength tensor F^M J_1,…,J_p can be explicitly written in terms of the completely antisymmetric gauge potential
F^M,J_1,…,J_p = ∇^MA^J_1,…,J_p + (-1)^p∇^J_1A^J_2,…,J_p,M + ⋯
⋯ + (-1)^p k∇^J_kA^J_k+1,…,J_p, M, J_1,…, J_p-1 + ⋯ + (-1)^p^2∇^J_pA^M, J_1,…,J_p-1
The first term of (<ref>) is given by ∇_M∇^MA^J_1,…,J_p. We evaluate this expression later. For now, we begin by focusing on the general term ∇_M∇^J_kA^J_k+1,…,J_p, M, J_1,…, J_k-1. To simplify this term, we consider the following identity for a general tensor T^A_1,…,A_n
[∇_M,∇_N]T^A_1,…,A_n = ∑_i = 1^nR^A_i_K M NT^A_1,…,A_i-1, K, A_i+1,…, A_n
where R^A_KMN is the Reimann tensor.
From the covariant gauge condition ∇_J_1A^J_1,…,J_p = 0 and (<ref>), we find that
∇_M∇^J_1A^J_2,…,J_p,M = g^J_1 N∇_M∇_NA^J_2,…,J_p,M
= g^J_1 N(R^J_2_K M NA^K, J_3,…,J_p, M + ⋯ + R^J_p_K M NA^J_2,…,J_p-1,K,M + R^M_K M NA^J_2,…,J_p,K)
= - d A^J_2,…,J_p,J_1 - A^J_1,J_3,…,J_p,J_2 -⋯- A^J_2,…,J_p-1,J_1,J_p
Using the antisymmetric nature of the p- form field, we can write this expression as
∇_M∇^J_1A^J_2,…,J_p,M = (-1)^p(d-(p - 1))A^J_1,…,J_p
It is straightforward to see that each subsequent term in (<ref>), when acted upon by ∇_M, picks up an additional factor of (-1)^p. Thus, the equation of motion takes the simple form
∇_M∇^MA^J_1,…,J_p + p(d- p + 1)A^J_1,…,J_p = 0
To evaluate this expression, we need to evaluate g^M N∇_M∇_NA^J_1,…,J_p. For this purpose, it is easier to introduce a new notation:
A^J ≡ A^J_1,…,J_p
A^J,{J_i, K} ≡ A^J_1,…,J_i-1,K,J_i + 1,…,J_p
A^J,{J_i, K},{J_j, Q} ≡ A^J_1,…,J_i-1,K,J_i + 1,…,J_j-1,Q,J_j+1,…,J_p
In this notation, we can write the term ∇_M∇_NA^J as follows
∇_M∇_NA^J = ∂_M∂_NA^J + ∑_iΓ^J_i_M K∂_NA^J,{J_i, K} - Γ^K_M N∂_KA^J
+ ∑_i∂_M(Γ^J_i_N KA^J,{J_i, K}) + ∑_i∑_jΓ^J_j_M QΓ^J_i_N KA^J,{J_i, K},{J_j, Q} - ∑_iΓ^Q_M NΓ^J_i_Q KA^J, {J_i, Q }
Plugging in the Christoffel symbols for empty AdS spacetime,
Γ^A_z B = -1/zδ^A_B
Γ^z_μν = 1/zη_μν
Using these, the equation g^M N∇_M∇_NA^J + p(d - p + 1)A^J = 0 reduces to
∂_N∂^NA^J - (d - 1 + 2 p)z∂_zA^J + 2 d p A^J = 0
which, splitting in z and boundary coordinates, becomes
z^2∂^2_zA^J + z^2∂_α∂^αA^J - (d - 1 + 2 p)z∂_zA^J + 2 d p A^J = 0
For the covariant p-form, i.e. for the field A_J_1,…,J_p≡ A_J, the equation of motion becomes
z^2∂^2_zA_J + (2 p - d + 1)z∂_zA_J + z^2 ∂_α∂^αA_J = 0
§ MASSIVE P-FORMS
In this section, we present HKLL bulk reconstruction kernels of massive p-forms in the Poincaré patch of even AdS_d+1. The bulk equation is given by
z^2∂^2_zA_J + (2 p - d + 1)z∂_zA_J + z^2 ∂_α∂^αA_J -m^2A_J= 0
The solution to this bulk equation is obtained as
A_J(z, x) = ∫_q ≥ 0d^dq/(2 π)^dz^1/2 (d-2 p)(c_J(q) J_1/2√((d-2 p)^2+4 m^2)(|q| z)+ d_J(q) Y_1/2√((d-2 p)^2+4 m^2)(|q| z))e^i q. x.
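As a quick numerical check of this solution (a sketch of ours, not part of the derivation), one can verify that the radial profile z^(d-2p)/2 J_ν(|q|z) is annihilated by the radial part of the bulk equation, where for a plane-wave mode e^{iq·x} the boundary Laplacian contributes +q^2 in the conventions used here. The sample values of d, p, and m below are our own choices:

```python
# Numerical check (ours): the mode z^((d-2p)/2) J_nu(q z) with
# nu = sqrt((d-2p)^2 + 4 m^2)/2 should solve
#   z^2 A'' + (2p - d + 1) z A' + (q^2 z^2 - m^2) A = 0.
import mpmath as mp

d, p, m = 5, 1, mp.mpf(1)      # sample values; even AdS_{d+1} means d is odd
q = mp.mpf("1.3")
nu = mp.sqrt((d - 2*p)**2 + 4*m**2) / 2
A = lambda z: z**mp.mpf((d - 2*p) / 2) * mp.besselj(nu, q*z)

z0 = mp.mpf("0.8")
residual = (z0**2 * mp.diff(A, z0, 2)
            + (2*p - d + 1) * z0 * mp.diff(A, z0)
            + (q**2 * z0**2 - m**2) * A(z0))
print(residual)   # ~ 0, up to numerical differentiation error
```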
We use the mode-sum approach to obtain kernels for the normalizable and non-normalizable modes of p-forms. Given the bulk solution in (<ref>), we can write its asymptotic expansion around z=0
A_J(x,z) =z^-p∑_n=0^∞(z^Δ+2nϕ_J^(2 n)(x)+z^d-Δ+2nj_J^(2 n)(x)
)
= A^N_J(z, x)+ A^n N_J(z, x) ,
where Δ=d/2+ √((d/2- p)^2+ m^2), and the two modes are given by
A^N_J(z, x) = ∑_n = 0^∞ z^2 n+Δ-pϕ_J^(2 n)(x) ,
A^n N_J(z, x) = ∑_n = 0^∞ z^2 n + d - p - Δj_J^(2 n)(x)
This consideration holds for Δ - d/2 ∉ ℤ. When Δ - d/2 ∈ ℤ, the analysis closely follows Appendix <ref>.
The coefficients ϕ_J^(2 n)(x) and j_J^(2 n)(x) can be extracted from the bulk solutions, and these serve as boundary data. As in the massless p-form case, we choose the coefficients corresponding to n=0 as the data. Following steps analogous to (<ref>)–(<ref>), we obtain the kernels for the normalizable and non-normalizable modes of massive p-forms in even AdS
𝐊_N(z, x; x')
= z^Δ -pΓ(d/2)/2 π^d/2 + 1X^d _2F_1(d/2, 1;1-d/2+Δ; - z^2/X^2)
𝐊_nN(z, x; x') = z^d - Δ - pΓ(d/2)/2 π^d/2 + 1X^d _2F_1(d/2, 1; 1 - Δ + d/2; - z^2/X^2)
Note that the functional form of the kernels depends on the conformal dimension Δ, and in the massless limit one recovers the kernels (<ref>) and (<ref>). Whether a kernel corresponds to the normalizable or the non-normalizable mode depends on the exponents in the series (<ref>): for Δ ≥ -1, the ϕ^(0)_J(x) mode is identified as normalizable, and vice versa.
Using the hypergeometric identity (<ref>), one can rewrite the kernels as follows
z^p𝐊_N(z, x; x') = Γ(1 - d/2 + Δ)Γ(d/2 - 1)/2π^d/2 + 1Γ(Δ - d/2)z^Δ - 2/X^d-2 _2F_1(1, 1 + d/2 - Δ; 2 - d/2; - X^2/z^2)
+ 2^Δ - dΓ(1 - d/2 + Δ)(-1)^(d-1)/2/2π^d/2 + 1Γ(1 - d + Δ)lim_z' → 0(σ z')^Δ - d
z^p𝐊_n N(z, x; x') = Γ(1 - Δ + d/2)Γ(d/2 - 1)/2π^d/2 + 1Γ(d/2 - Δ)z^d - Δ - 2/X^d - 2 _2F_1(1, 1 + Δ - d/2; 2 - d/2; - X^2/z^2)
+ 2^-ΔΓ(1 - Δ + d/2)(-1)^(d-1)/2/2π^d/2 + 1Γ(1 - Δ)lim_z' → 0(σ z')^-Δ
From these expressions, one can see that the first terms in (<ref>) and (<ref>) do not contain the correct powers of z when compared with the series (<ref>). The second terms are the AdS-covariant pieces, so after using a prescription to drop the first terms, the second terms can be written in the following spacelike form (after an antipodal mapping)
z^p𝐊_N(z, x; x') = 2^Δ - dΓ(1 - d/2 + Δ)(-1)^(d-1)/2/π^d/2 + 1Γ(1 - d + Δ)lim_z' → 0(σ z')^Δ - dθ(spacelike)
z^p𝐊_n N(z, x; x') = 2^-ΔΓ(1 - Δ + d/2)(-1)^(d-1)/2/π^d/2 + 1Γ(1 - Δ)lim_z' → 0(σ z')^-Δθ(spacelike)
Turning to the Green's function approach, we note that the equation (<ref>) can be cast in the following form
(σ^2 - 1)d^2/dσ^2Ψ_J(σ) + (d + 1)σd/dσΨ_J(σ) - Δ(Δ - d)Ψ_J(σ) = 0
where σ is the chordal distance and Ψ_J = z^pA_J. The solution of this equation has the form
Ψ_J(σ) = c_1(σ^2 - 1)^-μ/2𝐏^μ_ν(σ) + c_2(σ^2 - 1)^-μ/2𝐐^μ_ν(σ)
where 𝐏^μ_ν and 𝐐^μ_ν denote the associated Legendre functions of the first and second kind. The parameters are μ = (d - 1)/2 and ν = Δ - (d + 1)/2. From this solution, the normalizable mode Green's function can be readily evaluated, as in the massless case. These are again different for ν > 0 and ν < 0.
G_M(σ) = -c_2π/2(σ^2 - 1)^-μ/2𝐏^μ_ν(σ)θ(spacelike)
for ν > 0, with c_2 derived in (<ref>). For ν < 0, the only change is that the index ν in the above equation is replaced by -ν - 1.
Using this Green's function in Green's theorem, one recovers the kernels (<ref>) and (<ref>) for ν < 0 and ν > 0 respectively. The Green's function for the non-normalizable mode can be derived in a similar way to the massless case. The explicit expression is the following (for ν > 0)
𝒢_M(σ) = - 2^-Δ - 1Γ(Δ)tanπΔ/Γ(Δ - d/2 + 1)π^d/2σ^-Δ _2F_1(Δ/2, Δ + 1/2; Δ - d/2 + 1; 1/σ^2)θ(spacelike)
For ν < 0, the replacement is Δ→ d - Δ. Once again, using this function in Green's theorem gives (<ref>) and (<ref>) for ν < 0 and ν > 0 respectively. With this, we conclude this brief section on the massive extension of the massless p-form discussed in depth in the main text; the results for the two cases mirror each other appropriately.
Returning to the expression for Δ, we find the Breitenlohner-Freedman (BF) bound for the massive p-form to be Δ > d/2 <cit.>. An approach similar to <cit.> may be used to extend the kernels beyond the BF bound, i.e. to the window d/2 - 1 ≤Δ≤d/2 + 1. A similar discussion was presented in <cit.>. Within this BF window, the normalizable and non-normalizable modes of the scaled p-form Ψ_J can be identified with CFT operators (for the non-normalizable mode, in the Legendre-transformed CFT).
§ MODE SUM KERNELS IN ODD ADS_D + 1 : P-FORM
The odd AdS case is significantly more complicated. The solution of the bulk wave equation is
A_J(z, x) = ∫_q ≥ 0z^ν(a_J(q)J_ν(q z) + b_J(q)Y_ν(q z))e^i q. xd^dq/(2 π)^d
This has to be handled carefully, since the combination Y_ν contains a J_ν piece as well. Setting ν = n and using the series representation of Y_n, we have
Y_n(q z) = 2/πln(q z/2)J_n(q z) - (q z)^n/2^nπ∑_k = 0^∞α_k, n(q^2 z^2)^k - (q z)^-n/2^-nπ∑_k = 0^n - 1β_k, n(q^2 z^2)^k
where α_k, n = (-1)^k/4^kΓ(k + 1)Γ(k + n + 1)(ψ(k + 1) + ψ(k + n + 1)) and β_k, n = Γ(n - k)/Γ( k + 1)4^k. Taking these terms, along with the series form of J_n into account, we find the following expression for A_J(z, x)
A_J(z, x) = ∑_k = 0^∞z^2 n + 2 kϕ_k(x) + ∑_k = 0^∞ln(z) z^2 n + 2 kϕ̃_k(x) + ∑_k = 0^n - 1z^2 kj_k(x)
where
j_k(x) = ∫_q ≥ 0b_Jβ_k, n2^nq^2 k - ne^i q. xd^dq/(2π)^d
ϕ̃_k(x) = ∫_q ≥ 02/πb_J(-1)^kq^n + 2 k/2^n4^kΓ(k + 1)Γ(k + n + 1)e^i q. xd^dq/(2π)^d
ϕ_k(x) = ∫_q ≥ 0a_J(-1)^kq^n + 2 k/2^n4^kΓ(k + 1)Γ(k + n + 1)e^i q. xd^dq/(2π)^d
+ ∫_q ≥ 0b_J(2/πln(q/2)(-1)^kq^n + 2 k/2^n4^kΓ(k + 1)Γ(k + n + 1) - α_k, n/2^nq^n + 2 k)e^i q. xd^dq/(2π)^d
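Before inverting these relations, the series expansion of Y_n quoted above can be checked numerically. The following snippet (ours, purely illustrative) rebuilds Y_n from the logarithmic term and the α_{k,n} and β_{k,n} sums, and compares the result with the built-in Bessel function:

```python
import mpmath as mp

def Y_series(n, x, kmax=40):
    # log term + alpha sum + beta sum, exactly as in the expansion above
    t_log = (2/mp.pi) * mp.log(x/2) * mp.besselj(n, x)
    t_alpha = -(x**n / (2**n * mp.pi)) * mp.nsum(
        lambda k: (-1)**int(k) * (mp.digamma(k+1) + mp.digamma(k+n+1))
                  / (4**k * mp.factorial(k) * mp.factorial(k+n)) * x**(2*k),
        [0, kmax])
    t_beta = -(2**n / (mp.pi * x**n)) * sum(
        mp.factorial(n-k-1) / (mp.factorial(k) * 4**k) * x**(2*k)
        for k in range(n))
    return t_log + t_alpha + t_beta

x = mp.mpf("0.7")
print(Y_series(3, x))      # should agree with ...
print(mp.bessely(3, x))    # ... the built-in Y_3
```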
Returning to the expressions for j_k, ϕ̃_k, and ϕ_k, these should be inverted to obtain b_J and a_J. There is a choice to be made here regarding the boundary data. We choose the two independent boundary pieces to be j_0 and ϕ_0. This is the natural choice, since it corresponds to the z^2 k and z^2 k - 2 n fall-offs, analogous to the even AdS case.
Inverting the expressions for j_k(x) gives us
b_J(q) = 1/β_k, n∫ q^n - 2 k2^-nj_k(x')e^-i q. x'd^dx'
Inserting this in the expression for ϕ_k and using it to invert the relation to find a_J, we get
∫ϕ_k(x')e^-i q. x'd^dx' - b_J(2/πln(q/2)(-1)^kq^n + 2 k/2^n4^kΓ(k + 1)Γ(k + n + 1) - α_k, n/2^nq^n + 2 k) = a_J(-1)^kq^n + 2 k/2^n4^k k! Γ(k + n + 1)
From this, we can extract the expression for a_J(q)
a_J(q) = 2^n4^kΓ(k + 1)Γ(k + n + 1)/(-1)^kq^n + 2 k∫ϕ_k(x')e^-i q. x'd^dx' - 2/πln(q/2)1/β_k, n∫ q^n - 2 k2^-nj_k(x')e^-i q. x'd^dx'
+ α_k, n(-1)^kΓ(k + 1)Γ(k + n + 1)4^k1/β_k, n∫ q^n - 2 k2^-nj_k(x')e^-i q. x'd^dx'
These expressions are unwieldy, but we can simplify them by choosing k = 0, since any two pieces of boundary data are equivalent. We then have the following expressions
b_J(q) = q^n/2^nΓ(n)∫ j_0(x')e^-i q. x'd^dx'
a_J(q) = 2^nΓ(n + 1)/q^n∫ϕ_0(x')e^-i q. x'd^dx' - ( 2 ln(q/2)/π + γ - ψ(n + 1))q^n/2^nΓ(n)∫ j_0(x')e^-i q. x'd^dx'
These are much simpler expressions, which we can evaluate by inserting them back into (<ref>)
A_J(z, x) = z^n∫2^nΓ(n + 1)/q^nJ_n(q z)e^i q. (x - x')ϕ_0(x')d^dq d^dx'/(2π)^d + z^n∫q^n/2^nΓ(n)Y_n(q z)e^i q. (x - x')j_0(x')d^dq d^dx'/(2π)^d
- z^n∫( 2 ln(q/2)/π + γ - ψ(n + 1))q^n/2^nΓ(n)J_n(q z)e^i q. (x - x')j_0(x')d^dq d^dx'/(2π)^d
From this, we read off the two kernel integrals. We denote them by 𝐊_N for ϕ_0 and 𝐊_n N for j_0, analogous to the even AdS case. This gives us the following integrals
𝐊_N(z,x; x') = z^n2^nΓ(n + 1)∫_q ≥ 0q^-nJ_n(q z)e^i q.(x - x')d^dq/(2π)^d
𝐊_n N(z, x; x') = z^n/2^nΓ(n)∫_q ≥ 0q^nY_n(q z)e^i q. (x - x')d^dq/(2π)^d
- z^n/2^nΓ(n)∫_q ≥ 0( 2 ln(q/2)/π + γ - ψ(n + 1))q^nJ_n(q z)e^i q. (x - x')d^dq/(2π)^d
§.§ Evaluating the integrals
The main task now is to evaluate these integrals. The first integral (<ref>) is simple: it was evaluated for the even AdS case as well (<ref>), so the result (<ref>) can be used directly, with the identification n = d/2 - p. Thus, we have
𝐊_N(z, x; x') = z^2 nΓ(d/2)/2 π^d/2 + 1X^d _2F_1(d/2, 1; n + 1; - z^2/X^2)
which reduces to (<ref>) by putting in the expression for n in terms of Δ and d.
Evaluating the kernel 𝐊_n N is considerably more involved. We begin with the simpler piece
I_1 = ∫_q ≥ 0q^nY_n(q z)e^i q. (x - x')d^dq/(2π)^d
which is the first half of (<ref>). Using (<ref>), this integral reduces to
I_1 = 1/π (2 π)^d/2X^d/2 - 1∫_x = 0^∞x^n + d/2Y_n(x z)K_d/2 - 1(x X)dx
The final integral that we have to evaluate in order to determine I_1 is
I_2 = ∫_x = 0^∞x^n + d/2Y_n(x z)K_d/2 - 1(x X)dx
To evaluate it, we utilize the following relation
Y_n(x z) = -i^n/π((-1)^nK_n(-i x z) + K_n(i x z) )
This gives us the following type of integrals
∫_x = 0^∞x^n + d/2K_n(± i x z)K_d/2-1(x X)dx
To evaluate this, we turn to the following identity
∫_x = 0^∞x^-λK_μ(a x) K_ν(b x)dx = 2^-2 - λa^-ν + λ - 1b^ν/Γ(1 - λ)Γ(1 - λ + μ + ν/2)Γ(1 - λ - μ + ν/2)Γ(1 - λ + μ - ν/2)
×Γ(1 - λ - μ - ν/2) _2F_1(1 - λ + μ + ν/2, 1 - λ - μ + ν/2; 1 -λ; 1 - b^2/a^2)
with the constraints Re(a + b) > 0, Re(λ) < 1 - |Re(μ)| - |Re(ν) |. All of these conditions are satisfied for both the integrals that have to be evaluated. Therefore, we have
∫_x = 0^∞x^n + d/2K_n(± i x z)K_d/2-1(x X)dx = 2^-2 + n + d/2X^d/2 - 1(± i z)^-n - d/(n + d/2)Γ(d/2)Γ(n + 1) _2F_1(n + d/2, d/2; 1 + n + d/2; 1 + X^2/z^2)
Thus, the integral I_2 becomes
I_2 = -2^-1 + n + d/2i^-dX^d/2 - 1(z)^-n - d/π(n + d/2)Γ(d/2)Γ(n + 1) _2F_1(n + d/2, d/2; 1 + n + d/2; 1 + X^2/z^2)
From this, we can write I_1 as
I_1 = -2^-1 + ni^-dz^-n - d/π^d/2 + 2(n + d/2)Γ(d/2)Γ(n + 1) _2F_1(n + d/2, d/2; 1 + n + d/2; 1 + X^2/z^2)
Thus the first term of the kernel (<ref>) (which we denote by T_1) becomes
T_1 = -(-1)^d/2z^-2 n - dnΓ(d/2)/ 2 π^d/2 + 2(n + d/2) _2F_1(n + d/2, d/2; 1 + n + d/2; 1 + X^2/z^2)
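As a spot-check of the K_μ K_ν integral identity used above, the following snippet (ours, with arbitrarily chosen parameter values satisfying the constraints Re(a+b) > 0 and Re(λ) < 1 - |Re(μ)| - |Re(ν)|) compares direct numerical quadrature with the closed form:

```python
import mpmath as mp

lam, mu, nu = mp.mpf("-0.3"), mp.mpf("0.2"), mp.mpf("0.4")
a, b = mp.mpf(2), mp.mpf(1)

lhs = mp.quad(lambda x: x**(-lam) * mp.besselk(mu, a*x) * mp.besselk(nu, b*x),
              [0, 1, mp.inf])
pre = 2**(-2 - lam) * a**(-nu + lam - 1) * b**nu / mp.gamma(1 - lam)
gam = (mp.gamma((1 - lam + mu + nu)/2) * mp.gamma((1 - lam - mu + nu)/2)
       * mp.gamma((1 - lam + mu - nu)/2) * mp.gamma((1 - lam - mu - nu)/2))
rhs = pre * gam * mp.hyp2f1((1 - lam + mu + nu)/2, (1 - lam - mu + nu)/2,
                            1 - lam, 1 - b**2/a**2)
print(lhs, rhs)   # the two numbers should agree
```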
The next term of (<ref>) is less trivial. For it, we consider the integral
I_3 = ∫_q ≥ 0ln(q) q^nJ_n(q z)e^i q. (x - x')d^dq/(2π)^d
This integral, upon using (<ref>), reduces to the following effective integral
I_4 = ∫_x = 0^∞ln(x)x^n + d/2J_n(x z)K_d/2-1(x X)dx
One way is to use the following parametrization of the logarithm
ln(x) = lim_ϵ→ 0x^ϵ - 1/ϵ
This splits I_4 into two parts
I_4, 1 = ∫_x = 0^∞x^n + d/2 + ϵJ_n(x z)K_d/2-1(x X)dx
I_4, 2 = ∫_x = 0^∞x^n + d/2J_n(x z)K_d/2-1(x X)dx
For the first part, the integral becomes the following, using (<ref>)
I_4, 1 = z^nΓ(n + d/2 + ϵ/2)Γ(n + 1 + ϵ/2)/2^-n - d/2 - ϵ + 1 X^2 n + d/2 +ϵ + 1Γ(1 + n) _2F_1(n + d/2 + ϵ/2, n + 1 + ϵ/2; 1 + n; -z^2/X^2)
I_4, 2 = z^nΓ(n + d/2)Γ(n + 1)/2^-n - d/2 + 1 X^2 n + d/2 + 1Γ(1 + n) _2F_1(n + d/2, n + 1; 1 + n; -z^2/X^2)
Therefore, we can write the integral I_4 as follows
I_4 = lim_ϵ→ 01/ϵ[ z^nΓ(n + d/2 + ϵ/2)Γ(n + 1 + ϵ/2)/2^-n - d/2 - ϵ + 1 X^2 n + d/2 +ϵ + 1Γ(1 + n) _2F_1(n + d/2 + ϵ/2, n + 1 + ϵ/2; 1 + n; -z^2/X^2)
- z^nΓ(n + d/2)Γ(n + 1)/2^-n - d/2 + 1 X^2 n + d/2 + 1Γ(1 + n) _2F_1(n + d/2, n + 1; 1 + n; -z^2/X^2) ]
This evaluates to the following result
I_4 = 2^d/2+n-2 z^n X^-d/2-2 n-1Γ(d/2+n) (z^2/X^2+1)^-d/2-n(ψ(d/2+n)+ψ(n+1)-2 log (X)+log (4))
+ 2^d/2+n-2 z^n X^-d/2-2 n-1Γ(d/2+n) (G^1(n+1,d/2+n,n+1,-z^2/X^2)+G^2(n+1,d/2+n,n+1,-z^2/X^2))
where G^1(a, b, c, z) = ∂/∂ x _2F_1(x, b; c; z)|_x = a and G^2(a, b, c, z) = ∂/∂ x _2F_1(a, x; c; z)|_x = b.
Thus we obtain the second term of the non-normalizable integral as follows
T_2 = - z^-n/2^nΓ(n)∫_q ≥ 02 ln(q)/πq^nJ_n(q z)e^i q. (x - x')d^dq/(2π)^d
= -Γ(d/2+n)/2 π^d/2 + 2Γ(n)(z^2 + X^2)^-d/2-n(ψ(d/2+n)+ψ(n+1)-2 log (X)+log (4))
- X^-d - 2 n-1Γ(d/2+n)/2 π^d/2 + 2Γ(n)(G^1(n+1,d/2+n,n+1,-z^2/X^2)+G^2(n+1,d/2+n,n+1,-z^2/X^2))
The final remaining term is quite straightforward. The integral we need to evaluate is
∫_q ≥ 0 q^nJ_n(q z)e^i q. (x - x')d^dq/(2π)^d = 1/π (2π)^d/2X^d/2 - 1∫_x = 0^∞x^n + d/2J_n(x z)K_d/2 - 1(x X)dx
= z^nΓ(n + d/2)Γ(n + 1)/2^-n + 1π^d/2 + 1 X^2 n + dΓ(1 + n) _2F_1(n + d/2, n + 1; 1 + n; -z^2/X^2)
Thus we have the final term in the kernel
T_3 = (ψ(n + 1) + 2 ln 2/π - γ)Γ(n + d/2)/2 π^d/2 + 1 X^2 n + dΓ(n) _2F_1(n + d/2, n + 1; 1 + n; -z^2/X^2)
Using the fact that _2F_1(a,b;b;z) = (1-z)^-a, we have
T_3 = (ψ(n + 1) + 2 ln 2/π - γ)Γ(n + d/2)/2 π^d/2 + 1Γ(n)(X^2 + z^2 )^-n -d/2
The full kernel is, using (<ref>),(<ref>) and (<ref>),
𝐊_n N = z^2 n(T_1 + T_2 + T_3)
With this, we conclude this brief discussion of the normalizable and non-normalizable mode kernels for the p-form field in empty AdS_d + 1 in arbitrary odd dimensions.
§ MODE SUM KERNELS IN ODD ADS_D + 1 : GRAVITON
The odd AdS case for the graviton again has to be treated separately, since for odd d + 1 the quantity d/2 is an integer. The solution of the wave equation is therefore given by the following expression
Φ_μν = ∫_|q| ≥ 0a_μν(q)z^d/2J_d/2(q z)e^i q. xd^dq/(2π)^d + ∫_|q| ≥ 0b_μν(q)z^d/2Y_d/2(q z)e^i q. xd^dq/(2π)^d
We turn our attention to the odd AdS treatment of the p-form, which carries over to this case essentially unchanged: the solutions (<ref>) and (<ref>) are identical once one makes the identification ν = d/2.
For the sake of brevity, we write down only the salient relations here. The power series expansion takes the form (where we set 2n = d to remain consistent with the earlier notation)
Φ_μν(z, x) = ∑_k = 0^∞z^2 n + 2 kϕ_k(x) + ∑_k = 0^∞ln(z) z^2 n + 2 kϕ̃_k(x) + ∑_k = 0^n - 1z^2 kj_k(x)
where
j_k(x) = ∫_q ≥ 0b_μνβ_k, n2^nq^2 k - ne^i q. xd^dq/(2π)^d
ϕ̃_k(x) = ∫_q ≥ 02/πb_μν(-1)^kq^n + 2 k/2^n4^kΓ(k + 1)Γ(k + n + 1)e^i q. xd^dq/(2π)^d
ϕ_k(x) = ∫_q ≥ 0a_μν(-1)^kq^n + 2 k/2^n4^kΓ(k + 1)Γ(k + n + 1)e^i q. xd^dq/(2π)^d
+ ∫_q ≥ 0b_μν(2/πln(q/2)(-1)^kq^n + 2 k/2^n4^kΓ(k + 1)Γ(k + n + 1) - α_k, n/2^nq^n + 2 k)e^i q. xd^dq/(2π)^d
where α_k, n = (-1)^k/4^kΓ(k + 1)Γ(k + n + 1)(ψ(k + 1) + ψ(k + n + 1)) and β_k, n = Γ(n - k)/Γ( k + 1)4^k.
Note that the only major (non-technical) difference from the p-form case is the nature of the boundary data. So long as we are careful with the boundary data, we can adapt the expressions derived in the previous sections. The two kernels can then be read off as before
𝐊_N(z,x; x') = z^n2^nΓ(n + 1)∫_q ≥ 0q^-nJ_n(q z)e^i q.(x - x')d^dq/(2π)^d
𝐊_n N(z, x; x') = z^n/2^nΓ(n)∫_q ≥ 0q^nY_n(q z)e^i q. (x - x')d^dq/(2π)^d
- z^n/2^nΓ(n)∫_q ≥ 0( 2 ln(q/2)/π + γ - ψ(n + 1))q^nJ_n(q z)e^i q. (x - x')d^dq/(2π)^d
These integrals can be evaluated as before, giving the following expression
U_1 = -(-1)^d/2z^-2 n - dnΓ(d/2)/ 2 π^d/2 + 2(n + d/2) _2F_1(n + d/2, d/2; 1 + n + d/2; 1 + X^2/z^2)
= -(-1)^d/2z^-2 dΓ(d/2)/ 4 π^d/2 + 2 _2F_1(d, d/2; 1 + d; 1 + X^2/z^2),
where the second equality follows upon setting n = d/2.
The next term is
U_2 = -Γ(d/2+n)/2 π^d/2 + 2Γ(n)(z^2 + X^2)^-d/2-n(ψ(d/2+n)+ψ(n+1)-2 log (X)+log (4))
- X^-d - 2 n-1Γ(d/2+n)/2 π^d/2 + 2Γ(n)(G^1(n+1,d/2+n,n+1,-z^2/X^2)+G^2(n+1,d/2+n,n+1,-z^2/X^2))
= -Γ(d)/2 π^d/2 + 2Γ(d/2)(z^2 + X^2)^-d(ψ(d)+ψ(d/2+1)-2 log (X)+log (4))
- X^-2 d -1Γ(d)/2 π^d/2 + 2Γ(d/2)(G^1(d/2+1,d,d/2+1,-z^2/X^2)+G^2(d/2+1,d,d/2+1,-z^2/X^2)),
again with n = d/2 in the second equality.
and the final term is
U_3 = (ψ(n + 1) + 2 ln 2/π - γ)Γ(n + d/2)/2 π^d/2 + 1Γ(n)(X^2 + z^2 )^-n -d/2
= (ψ(d/2 + 1) + 2 ln 2/π - γ)Γ(d)/2 π^d/2 + 1Γ(d/2)(X^2 + z^2 )^-d,
with n = d/2 in the second line.
The full kernel for the non-normalizable mode is then given by
𝐊_nN (z, x; x') = z^d(U_1 + U_2 + U_3)
The kernel corresponding to the normalizable mode is the same as the even AdS case, now given by
𝐊_N = Γ(d/2)/2 π^d/2 + 1z^d/X^d _2F_1(d/2, 1; d/2 + 1; -z^2/X^2)
With this, we conclude this brief discussion on the normalizable and non-normalizable mode kernels for the graviton in empty AdS_d + 1 in arbitrary odd dimensions.
|
http://arxiv.org/abs/2307.01914v1
|
20230704204444
|
Millimeter-Wave Reflectionless Filters Using Advanced Thin-Film Fabrication
|
[
"Matthew Morgan",
"Seng Loo",
"Tod Boyd",
"Miho Hunter"
] |
astro-ph.IM
|
[
"astro-ph.IM",
"eess.IV"
] |
Millimeter-Wave Reflectionless Filters Using Advanced Thin-Film Fabrication
Matthew A. Morgan, Senior Member, IEEE,
Seng Loo,
Tod A. Boyd,
and Miho Hunter
Manuscript received 6/20/2023
M. Morgan and T. Boyd are with the Central Development Laboratory, National Radio Astronomy Observatory, Charlottesville,
VA, 22903 USA (e-mail: matt.morgan@nrao.edu). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
Seng Loo and Miho Hunter are with Anritsu Company, Morgan Hill, CA. 95037 USA.
August 1, 2023
We report on the development of millimeter-wave, lumped-element reflectionless filters using an advanced thin-film fabrication process. Based on previously demonstrated circuit topologies capable of achieving 50Ω impedance match at all frequencies, these circuits have been implemented at higher frequencies than ever before by leveraging a thin-film process with better than 2 μm feature size and integrated elements such as SiN Metal-Insulator-Metal (MIM) capacitors, bridges, and TaN Thin-Film Resistors (TFRs).
filters, thin film circuits, reflectionless filters, absorptive filters
§ INTRODUCTION
The concept of an absorptive filter dates back at least to the 1920s, nearly a century ago, with the work of Zobel <cit.> and Bode <cit.>, when such circuits were called constant-resistance networks and were based on image-impedance principles, bridged-tees, and cascaded first- and second-order lattice sections. The practical limitations on the performance and realizability of such designs, however, precluded them from becoming widely adopted. Interest in absorptive, or reflectionless, filters has nonetheless experienced a resurgence in recent years, owing initially to the discovery of lumped-element topologies that could theoretically achieve perfect impedance match at both ports and at all frequencies from DC to infinity, with no added passband loss <cit.>. At first limited to a single response type, that of a third-order Chebyshev type II filter, a broader theory of symmetric and self-dual circuit topologies was built upon this foundation until it was eventually found that any transfer function realizable using lumped elements could likewise be implemented in reflectionless form <cit.>.
Seeing the advantages of a filter that absorbs stop-band energy instead of reflecting it, many researchers began searching for ways to duplicate these results using other circuit elements, such as transmission lines <cit.>, coaxial resonators <cit.>, and Surface Acoustic Wave (SAW) resonators <cit.>, as well as other methodologies, most importantly coupling-routing diagrams <cit.>. These often resulted in somewhat larger filters for a given frequency with limited absorption bandwidth (theoretically as well as practically), and/or with stop-band impedance-matching only at a single port. For this work, we will concentrate on lumped-element designs, as these are generally capable of the broadest absorption bandwidth in the most compact form.
Originally fabricated at relatively low frequencies using discrete surface-mount elements, these topologies were eventually implemented in the microwave regime using an Integrated Passive Device (IPD) fabrication process on GaAs wafers <cit.>. Thus, having been reduced to practice in a form suitable for cost-effective mass manufacturing, these devices have been adopted by industry as the preferred solution in a number of commercial applications <cit.>. However, the difficulty of realizing good-quality lumped elements at short wavelengths has limited their cutoff frequencies primarily to the cm-wave range (though passband and absorption bandwidths often extend much higher).
With the advent of sophisticated thin-film fabrication that combines integrated circuit elements (MIM caps, bridges, and TFRs) with photolithography having even better resolution than is generally achievable with commercial III-V semiconductor processes, we are now prepared to implement lumped-element circuits of this kind for the first time in the millimeter-wave regime. This capability is demonstrated by the development of two prototypes: a fifth-order Chebyshev type II low-pass filter, and a seventh-order Chebyshev type II high-pass filter.
§ SCHEMATIC FILTER DESIGN
Circuit diagrams of the two prototype filters reported in this paper are shown in Fig. <ref>.
Fig. <ref>(a) is a fifth-order low-pass filter, while Fig. <ref>(b) is a seventh-order, high-pass filter. Both are Chebyshev Type II designs, having equal stop-band ripple, and are theoretically reflectionless at both ports and at all frequencies.
The prototype parameter values used for these designs are given in Table <ref>.
The ripple factor has been selected to achieve the maximum theoretical stop-band rejection for these topologies (without requiring transformers, which are difficult to implement at such high frequencies in planar circuit technology). That is, the ripple factor, ε, is given by
ε = √(e^{4 tanh^{-1}(e^{-β})} - 1)≈
0.2164, N=5
0.2187, N=7
where
β = 2N sinh^{-1}(√((1/2) sin(π/N) tan(π/N)))
and N is the order of the filter <cit.>. Consequently, the stop-band rejection here is theoretically limited to about 13.5 dB, as shown by their ideal, normalized frequency responses in Fig. <ref>.
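These numbers are easy to reproduce. The short script below (ours, not the authors' design code) evaluates ε from the two equations above and converts it to stop-band rejection using the standard Chebyshev type II relation 10 log10(1 + 1/ε²):

```python
import math

def chebyshev2_min_ripple(N):
    beta = 2*N*math.asinh(math.sqrt(0.5*math.sin(math.pi/N)*math.tan(math.pi/N)))
    eps = math.sqrt(math.exp(4*math.atanh(math.exp(-beta))) - 1)
    return eps, 10*math.log10(1 + 1/eps**2)   # ripple factor, rejection (dB)

for N in (5, 7):
    eps, rej = chebyshev2_min_ripple(N)
    print(N, round(eps, 4), round(rej, 1))    # 5: 0.2164/13.5, 7: 0.2187/13.4
```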
It has become common practice to cascade multiple stages of such designs to achieve the level of stop-band attenuation desired for a given application; however, single-stage designs are sufficient for the present proof of concept.
§ PHYSICAL ELEMENT DESIGN
Note that by selecting the minimum ripple factors, the first and last two elements in each row of Table <ref> are identical. This means that certain elements in the schematics of Fig. <ref> shall vanish in the final layout—e.g. the capacitor 2/(g_1-g_2) and the inductor 2/(g_5-g_4) in Fig. <ref>(a). Others will be split in two in favor of layout symmetry—for example, the series inductor having value 2/g_1 will be implemented with two series inductors in the physical filter, mirrored, having normalized value 1/g_1 each.
We selected a cutoff frequency of 60 GHz for both designs. This required capacitors ranging from 40–120 fF. These were implemented as Metal-Insulator-Metal (MIM) capacitors with a Silicon Nitride (SiN) dielectric. The inductors needed ranged from 99–199 pH. They were implemented as planar spiral coils having 1.5 turns each in the first metal layer. The fine lithography of the process made it possible to use trace widths and spacings of 2 μm each. The internal node of the coil was brought out using a dielectric bridge (the same SiN dielectric used for the capacitors). The extra parasitic capacitance at the bridge was accepted as a compromise between the simplicity of the fabrication process and the self-resonance of the inductor. A typical inductor layout for these filters is shown in Fig. <ref>,
as the inset on a graph showing the simulated reactance and quality factor as a function of frequency. It shows that a linear reactance curve is achieved from the lowest frequencies up to around W-band, eventually reaching a self-resonance at nearly 200 GHz. The peak Q, modelled at around 100 GHz, was about 12.
§ THIN FILM FABRICATION PROCESS
The thin-film circuits were fabricated on a quartz substrate, specifically Corning Fused Silica 7980. Resistors were fabricated using Tantalum Nitride (TaN) film with a target sheet resistance of 50 Ω/sq. The reflectionless filters are relatively insensitive to resistor value, so a tolerance of ±10% was accepted for this film. The first metalization comprised TiW/Au, plated up to a final thickness of 1.25 μm. Silicon Nitride (SiN) was then deposited to a target thickness of 1 μm, giving an expected capacitance density of 0.062 fF/μm^2. The second metal layer again used TiW/Au, plated up to a thickness of 2.5 μm.
The final metal stack was therefore TaN/TiW/Au (1.25 μm)/SiN (1 μm)/TiW/Au (2.5 μm). An MRC 943 Series Sputter and a TemesCal VES-2550 E-beam Evaporator were used for the metal deposition processes. Photolithography imaging was carried out using positive tone resists and a Canon FPA2000-il 5X Stepper. Silicon Nitride deposition was carried out using a Novellus Concept One PECVD tool. Finally, gold electroplating utilized a Gold Sulfite plating system from Tanaka.
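As a cross-check of the capacitance density quoted above (a sketch of ours), the parallel-plate estimate C/A = ε₀ε_r/t with the assumed ε_r = 7 and the 1 μm SiN thickness reproduces the 0.062 fF/μm² figure:

```python
eps0 = 8.854e-12     # vacuum permittivity, F/m
eps_r = 7.0          # assumed SiN relative permittivity (as in the text)
t = 1.0e-6           # dielectric thickness, 1 um
density = eps0 * eps_r / t * 1e3   # 1 F/m^2 = 1e3 fF/um^2
print(density)                     # -> ~0.062 fF/um^2
```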
The potential for capacitor shorts was considered a risk for this fabrication process. The initial attempt utilized a much thinner first metalization layer of 0.4 μm, in order to keep the surface as smooth as possible for the subsequent Silicon Nitride deposition. While the capacitor yield was excellent (no shorts were discovered), the ohmic losses of the metal traces were unacceptably high. Having gained confidence in the capacitor process, a second run was initiated with the thicker metalization stack described above.
After all other processing steps were completed, the wafer was thinned to a final thickness of 125 μm, and diced into individual die.
§ MEASUREMENTS
Both circuits were fabricated on a 100-mm diameter quartz wafer, containing over 14,000 chips, or 7,000 of each design. Microphotographs of the two circuits are shown in Fig. <ref>.
The chips were tested by wafer probe using a Keysight N5291A‐201 Vector Network Analyzer (VNA) capable of measuring two-port scattering parameters from 900 Hz–120 GHz in a single sweep. The results are plotted in Fig. <ref>.
The measured response curves for both filters, plotted in solid lines, were shifted upward in frequency by 5–10% compared to the initial simulations. This was attributed to the SiN capacitor dielectric, initially assumed to have a dielectric constant of ε_r=7. For the simulations plotted in Fig. <ref> (dashed lines), the dielectric constant was decreased to account for this error, but kept within the acceptable range reported in the literature <cit.>. This brought the measured and simulated results into good agreement across the whole frequency range. It also partly accounts for the increased reflection coefficients in the stop-band of the low-pass filter and the transition-band of the high-pass filter.
DC resistance measurements were additionally performed at sites across the entire wafer to verify trace conductivity, and to test for shorted or leaking capacitors, which as discussed previously were considered a technical risk for this fabrication. One low-pass filter in each of 82 reticles was probe-tested for DC isolation between one of the port signal pads and ground. This is nominally expected to be an open circuit. However, if either of the first two capacitors, labeled 1/g_2 in Fig. <ref>(a), is shorted or leaky, then a finite resistance will be measured (recall that the grounded capacitor 2/(g_1-g_2) vanishes since g_1=g_2).
The leakage test results are summarized in the wafer map of Fig. <ref> (“0L” means no measurable leakage was detected).
Out of 82 reticles, 13 showed signs of some leakage, all clustered around the outer edge of the wafer. We thus estimate the global capacitor yield as Y_C = √(69/82)≈ 92% (the square root is used since two capacitors must both be good for the isolation test to pass). Closer to the center, presumably the yield is much higher (none of the devices tested in the interior, whether at DC or RF, was found to be defective).
§ CONCLUSION
This paper has presented reflectionless low-pass and high-pass filters on quartz in the millimeter-wave band. Each has a cutoff frequency at approximately 60 GHz. Furthermore, the absorptive stop-band of the low-pass filter and pass-band of the high-pass filter extend up to 120 GHz, the limit of our measurement equipment. These represent the highest operating frequencies ever reported for reflectionless filters, and a landmark achievement for strictly lumped-element-based designs.
This performance was made possible by an advanced thin-film fabrication process capable of extraordinary lithographic resolution, down to 2 μm, combined with integrated circuit elements like MIM caps, bridges, and thin-film resistors. The process shows high yield, suitable for mass manufacturing and commercialization.
The advantages of these designs and of this fabrication technology are strikingly illustrated by the plot in Fig. <ref>,
which compares this work to numerous others in the references in terms of compactness (on the vertical axis) versus frequency (on the horizontal). Note that both scales are logarithmic, spanning five orders of magnitude in frequency and three orders of magnitude in physical size.
It is interesting to see how well the different approaches to reflectionless filters naturally segregate themselves into clusters on such a plot. Lumped-element designs using surface-mount (SMT) components appear in the upper left as relatively large circuits at low frequency. In contrast, filters on microwave laminates or soft board materials, almost universally implemented using distributed elements such as transmission lines <cit.> or substrate-integrated resonators <cit.>, increase the frequency without any real reduction in size, thus grouping together in the upper-right corner.
The smallest and most high-frequency designs have all, until now, been implemented using MMIC fabrication, usually lumped-element, on either a GaAs IPD or Silicon CMOS wafer. The two filters reported here, using a thin-film process on quartz, appear in the extreme bottom right corner of the plot, achieving the highest frequencies in form-factors that are amongst the smallest ever reported.
§ ACKNOWLEDGMENT
The authors with NRAO would like to thank their commercial partner, Mini-Circuits Inc., for their continued support of the development and application of reflectionless filter technology.
zobel
O. Zobel, “Distortion correction in electrical circuits with constant resistance recurrent networks,” Bell System Technical Journal, Vol. 7, 1928.
bode
H. Bode, Network Analysis and Feedback Amplifier Design. New York: Van Nostrand, 1945.
morgan_theoretical
M. Morgan and T. Boyd, “Theoretical and experimental study of a new class of reflectionless filter,” IEEE Trans. Microwave Theory Tech., vol. 59, no. 5, pp. 1214–1221, May 2011.
morgan8392495
M. Morgan, “Reflectionless filters,” U.S. Patent 8 392 495, March 5, 2013.
morgan_artech
M. Morgan, Reflectionless Filters. Norwood, MA: Artech House, 2017.
morgan_ladder
M. Morgan, W. Groves, and T. Boyd, “Reflectionless filter topologies supporting arbitrary low-pass ladder prototypes,” IEEE Transactions on Circuits and Systems I, vol. 66, no. 2, pp. 594–604, February 2019.
morgan10263592
M. Morgan, “Optimal response reflectionless filters,” U.S. Patent 10 263 592, April 16, 2019.
morgan10530321
M. Morgan, “Deep rejection reflectionless filters,” U.S. Patent No. 10 530 321, January 7, 2020.
guilabert2019
A. Guilabert, M. Morgan, and T. Boyd, “Reflectionless filters for generalized elliptic transmission functions,” IEEE Transactions on Circuits and Systems I, vol. 66, no. 12, pp. 4606–4618, December 2019.
khalaj-amirhosseini2016
M. Khalaj-Amirhosseini and A. Khalaj-Amirhosseini, “Reflectionless filters with arbitrary transfer functions,” Journal of Telecommunication, vol. 34, no. 2, pp. 1–3, Oct. 2016.
khalaj-amirhosseini2017
M. Khalaj-Amirhosseini and M. M. Taskhiri, “Twofold reflectionless filters of inverse-chebyshev response with arbitrary attenuation,” IEEE Trans. Microwave Theory Tech., vol. 65, no. 11, pp. 4616–4620, November 2017.
lee2020
J. Lee, B. Lee, S. Nam, and J. Lee, “Rigorous design method for symmetric reflectionless filters with arbitrary prescribed transmission response,” IEEE Trans. Microwave Theory Tech., vol. 68, no. 6, June 2020.
psychogiou2018jan
D. Psychogiou and R. Gomez-Garcia, “Tunable reflectionless microstrip bandpass filters,” IEEE Radio Wireless Symposium, January 2018.
gomez-garcia2018sep
R. Gomez-Garcia, J. M. Munoz-Ferreras, Wenjie Feng, and D. Psychogiou, “Balanced symmetrical quasi-reflectionless single- and dual-band bandpass filters,” IEEE Microwave and Wireless Comp. Lett., vol. 28, no. 9, pp. 798–800, September 2018.
yang2020mar
L. Yang, R. Gomez-Garcia, J. M. Munoz-Ferreras, R. Zhang, D. Peroulis, and L. Zhu, “Multilayered reflectionless wideband bandpass filters with shunt/in-series resistively terminated microstrip lines,” IEEE Trans. Microwave Theory Tech., vol. 68, no. 3, pp. 877–893, March 2020.
gomez-garcia2020apr
R. Gomez-Garcia, L. Yang, and J. M. Munoz-Ferreras, “Low-reflection signal-interference single- and multipassband filters with shunted lossy stubs,” IEEE Microwave and Wireless Comp. Lett., vol. 30, no. 4, pp. 355–358, April 2020.
wu2020aug
X. Wu, Y. Li, and X. Liu, “Quasi-reflectionless microstrip bandpass filters with improved passband flatness and out-of-band rejection,” IEEE Access, vol. 8, August 2020.
fan2021jan
M. Fan, K. Song, L. Yang, and R. Gomez-Garcia, “Balanced-circuit-based dual-band bandpass filter with symmetrical reflectionless behavior,” IEEE Radio Wireless Symposium, January 2021.
lee2021dec
J. Lee and J. Lee, “Transmission-line absorptive bandpass filters with wide passband: synthesis and design,” IEEE Trans. Microwave Theory Tech., vol. 69, no. 12, pp. 5371–5380, December 2021.
psychogiou2020aug
D. Psychogiou, and R. Gomez-Garcia, “Quasi-absorptive substrate-integrated bandpass filters using capacitively-loaded coaxial resonators,” IEEE MTT-S Intl. Microwave Symp., August 2020.
zhao2022
K. Zhao, R. Gomez-Garcia, and D. Psychogiou, “Tunable quasi-reflectionless bandpass filters using substrate integrated coaxial resonators,” IEEE Trans. Circuits and Systems II, vol. 69, no. 2, pp. 379–383, February 2022.
psychogiou2019
D. Psychogiou, and R. Gomez-Garcia, “Symmetrical quasi-reflectionless SAW-based bandpass filters with tunable bandwidth”, IEEE Microwave and Wireless Comp. Lett., vol. 29, no. 7, pp. 447–449, July 2019.
gomez-garcia2018apr
R. Gomez-Garcia, J. M. Munoz-Ferreras, and D. Psychogiou, “Symmetrical quasi-reflectionless BSFs,” IEEE Microwave and Wireless Comp. Lett., vol. 28, no. 4, pp. 302–304, April 2018.
gomez-garcia2018nov
R. Gomez-Garcia, J. M. Munoz-Ferreras, and D. Psychogiou, “Split-type input-reflectionless multiband filters,” IEEE Microwave and Wireless Comp. Lett., vol. 28, no. 11, pp. 981–983, November 2018.
gomez-garcia2019apr
R. Gomez-Garcia, J. M. Munoz-Ferreras, and D. Psychogiou, “Symmetrical quasi-absorptive RF bandpass filters,” IEEE Trans. Microwave Theory Tech., vol. 67, no. 4, pp. 1472–1482, April 2019.
gomez-garcia2019sep
R. Gomez-Garcia, J. M. Munoz-Ferreras, and D. Psychogiou, “High-order input-reflectionless bandpass/bandstop filters and multiplexers,” IEEE Trans. Microwave Theory Tech., vol. 67, no. 9, pp. 3683–3695, September 2019.
gomez-garcia2020dec
R. Gomez-Garcia, D. Psychogiou, J. M. Munoz-Ferreras, and L. Yang, “Avoiding RF isolators: reflectionless microwave bandpass filtering components for advanced RF front ends,” IEEE Microwave Magazine, vol. 21, no. 12, November 2020.
morgan2015
M. Morgan, and T. Boyd, “Reflectionless filter structures,” IEEE Trans. Microwave Theory Tech., vol. 63, no. 4, April 2015.
AN-75-007
Mini-Circuits Inc. (2015). Pairing Mixers With Reflectionless Filters to Improve System Performance. [Online]. Available: <https://www.minicircuits.com/app/AN75-007.pdf>
AN-75-008
Mini-Circuits Inc. (2015). Advantages of Cascading Reflectionless Filters. [Online]. Available: <https://www.minicircuits.com/app/AN75-008.pdf>
setty2018
R. Setty, B. Kaplan, M. Morgan, and T. Boyd, “Combining MMIC reflectionless filters to create UWB bandpass filters,” Microwave Journal, vol. 61, no. 3, March 2018.
shrotriya2019
R. Shrotriya and M. Morgan, “Filtering without reflections: flattening multiplier chain conversion efficiency & more,” Microwave Journal, vol. 62, no. 9, September 2019.
piccirillo
A. Piccirillo and A. L. Gobbi, “Physical-electrical properties of silicon nitride deposited by PECVD on III-V Semiconductors,” Journal of the Electrochemical Society, vol. 137, no. 12, pp. 3910–3917, December 1990.
simpson2021sep
D. Simpson and D. Psychogiou, “X-band quasi-reflectionless MMIC bandpass filters with minimum number of components,” IEEE Trans. Elec. Dev., vol. 68, no. 9, pp. 4329–4334, September 2021.
ge2021
Z. Ge, L. Chen, L. Yang, R. Gomez-Garcia, and X. Zhu, “On-chip millimeter-wave integrated absorptive bandstop filter in (Bi)-CMOS technology,” IEEE Electron Device Lett., vol. 42, no. 1, pp. 114–117, January 2021.
zhao2023
K. Zhao and D. Psychogiou, “X-band MMIC-based tunable quasi-absorptive bandstop filter,” IEEE Microwave and Wireless Tech, Lett., vol. 33, no. 4, pp. 391–394, April 2023.
Matthew A. Morgan
(M'99–SM'17) received his B.S. in electrical engineering from the University of Virginia in 1999, and his M.S. and Ph.D. from the California Institute of Technology in 2001 and 2003, respectively.
During the summers of 1996 through 1998, he worked for Lockheed Martin Federal Systems in Manassas, VA, as an Associate Programmer, where he wrote code for acoustic signal processing, mathematical modeling, data simulation, and system performance monitoring. In 1999, he became an affiliate of NASA’s Jet Propulsion Laboratory in Pasadena, CA. There, he conducted research in the development of Monolithic Millimeter-wave Integrated Circuits (MMICs) and MMIC-based receiver components for atmospheric radiometers, laboratory instrumentation, and the deep-space communication network. In 2003, he joined the Central Development Lab (CDL) of the National Radio Astronomy Observatory (NRAO) in Charlottesville, VA, where he now holds the position of Scientist/Research Engineer. He is currently the head of the CDL’s Integrated Receiver Development program, and is involved in the design and development of low-noise receivers, components, and novel concepts for radio astronomy instrumentation in the cm-wave, mm-wave, and submm-wave frequency ranges. He has authored over 60 papers and holds twenty patents in the areas of MMIC design, millimeter-wave system integration, and high-frequency packaging techniques. He is the author of Reflectionless Filters (Norwood, MA: Artech House, 2017).
Dr. Morgan is a member of the International Union of Radio Science (URSI), Commission J: Radio Astronomy. He received a Topic Editor's Special Mention in the IEEE THz Transactions Best Paper competition and the Harold A. Wheeler Applications Paper Award in 2015.
Seng Loo received his B.S. in Chemistry from the University of Windsor, in Ontario, Canada, and his Ph.D. from Stanford University in 1991.
From 1991 through 1993 he worked as a Postdoc at the California Institute of Technology, where he carried out basic research in yeast genetics. Since 1994, he has been working in various industrial companies, including Seagate, Hyundai Electronics and Watkins-Johnson Company.
Since 2006, he has been the Fab Manager, and then Senior Business Unit Manager for the Anritsu Microelectronics Fabrication Center. In this capacity, he is tasked with leading and growing the manufacturing capabilities of the Anritsu fab.
Tod A. Boyd was born in Steubenville, Ohio, in 1962. He received the A.S.E.E. degree from the Electronic Technology Institute, Cleveland, Ohio, in 1983. From 1983 to 1985, he was with Hostel Electronics in Steubenville, Ohio. In 1985, he joined Northrop Corporation's Electronic Countermeasures Division, in Buffalo Grove, Ill., where he specialized in supporting the B-1B Lancer (secret clearance). In 1990, he joined Interferometrics, Inc., Vienna, Va., where he constructed VLBA tape recorders for the international Radio Astronomy community.
Since 1996, he has been with the National Radio Astronomy Observatory’s Central Development Lab, Charlottesville, VA, where he initially assisted with the construction of cooled InP HFET amplifiers for the NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) mission. Presently as a Technical Specialist IV he provides technical support for the advanced receiver R&D initiatives. His responsibilities also include constructing low noise amplifiers for the Enhanced VLA and the Atacama Large Millimeter/sub-millimeter Array (ALMA) projects.
Miho Hunter joined Anritsu Company in 2009 as a Process Development Engineer, supporting the Microelectronics Fabrication Center. In 2019, she advanced to the role of Operations and Process Engineering Manager, assuming responsibility for the process engineering group tasked with designing and integrating thin film fabrication processes. Prior to joining Anritsu, Miho honed her expertise at Samsung Austin Semiconductor, specializing in the process architecture of memory devices. She holds a Master of Science degree from Stanford University.
|
http://arxiv.org/abs/2307.03300v1
|
20230706212510
|
Hydrodynamical simulations of galaxy formation with non-Gaussian initial conditions
|
[
"Clément Stahl",
"Yohan Dubois",
"Benoit Famaey",
"Oliver Hahn",
"Rodrigo Ibata",
"Katarina Kraljic",
"Thomas Montandon"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"astro-ph.GA"
] |
|
http://arxiv.org/abs/2307.01583v1
|
20230704092324
|
Learning Lie Group Symmetry Transformations with Neural Networks
|
[
"Alex Gabel",
"Victoria Klein",
"Riccardo Valperga",
"Jeroen S. W. Lamb",
"Kevin Webster",
"Rick Quax",
"Efstratios Gavves"
] |
cs.LG
|
[
"cs.LG",
"cs.CV"
] |
Learning Lie Group Symmetry Transformations with Neural Networks

Alex Gabel (VIS Lab, Institute of Informatics, University of Amsterdam)*, Victoria Klein (Department of Mathematics, Imperial College London)*, Riccardo Valperga (VIS Lab, Institute of Informatics, University of Amsterdam)*, Jeroen S. W. Lamb (Department of Mathematics, Imperial College London), Kevin Webster (Department of Mathematics, Imperial College London), Rick Quax (CSL, Institute of Informatics, University of Amsterdam), Efstratios Gavves (VIS Lab, Institute of Informatics, University of Amsterdam)

*Equal contribution. Correspondence: Victoria Klein (victoria.klein18@imperial.ac.uk), Alex Gabel (a.gabel@uva.nl), Riccardo Valperga (r.valperga@uva.nl).
The problem of detecting and quantifying the presence of symmetries in datasets is useful for model selection, generative modeling, and data analysis, amongst others. While existing methods for hard-coding transformations in neural networks require prior knowledge of the symmetries of the task at hand, this work focuses on discovering and characterizing unknown symmetries present in the dataset, namely, Lie group symmetry transformations beyond the traditional ones usually considered in the field (rotation, scaling, and translation). Specifically, we consider a scenario in which a dataset has been transformed by a one-parameter subgroup of transformations with different parameter values for each data point. Our goal is to characterize the transformation group and the distribution of the parameter values. The results showcase the effectiveness of the approach in both these settings.
§ INTRODUCTION
It has been shown that restricting the hypothesis space of functions that neural networks are able to approximate using known properties of data improves performance in a variety of tasks <cit.>. The field of Deep Learning has produced a prolific amount of work in this direction, providing practical parameterizations of function spaces with the desired properties that are also universal approximators of the target functions <cit.>. In physics and, more specifically, time-series forecasting of dynamical systems, symmetries are ubiquitous and laws of motion are often symmetric with respect to various transformations such as rotations and translations, while transformations that preserve solutions of equations of motion are in one way or another associated with conserved quantities <cit.>. In computer vision, successful neural network architectures are often invariant with respect to transformations that preserve the perceived object identity as well as all pattern information, such as translation, rotation and scaling. Many of these transformations are smooth and differentiable, and thus belong to the family of Lie groups, which is the class of symmetries we deal with in this work.
Although methods that hard-code transformations are capable of state-of-the-art performance in various tasks, they all require prior knowledge about symmetries in order to restrict the function space of a neural network. A broad class of a priori unknown transformations comes into play in the context of modelling dynamical systems and in applications to physics. On the other hand, in vision tasks, identity-preserving transformations are often known beforehand. Despite this, these transformations are expressed differently by different datasets. As a result, algorithms for not only discovering unknown symmetries but also quantifying the presence of specific transformations in a given dataset may play a crucial role in informing model selection for scientific discovery or computer vision, by identifying and describing physical systems through their symmetries and selecting models that are invariant or equivariant with respect to only those symmetries that are actually present in the dataset under consideration.
In this work, we address the problem of qualitatively and quantitatively detecting the presence of symmetries with respect to one-parameter subgroups within a given dataset (see Figure <ref>). In particular, let ϕ(t) be a one-parameter subgroup of transformations. We consider the scenario in which a dataset {x_i}_i=1^N has been acted on by ϕ(t), with a different value of the parameter t for every point x_i. Our goal is to characterise the group of transformations ϕ(t), as well as the distribution from which the parameters t have been sampled. We propose two models: a naive approach that successfully manages to identify the underlying one-parameter subgroup, and an autoencoder model that learns transformations of a one-parameter subgroup in the latent space and is capable of extracting the overall shape of the t-distributions. The cost of the latter is that the one-parameter subgroup in the latent space is not necessarily identical to that in pixel space. The work is structured as follows: Section <ref> introduces some basic tools from Lie group theory; Section <ref> outlines the method; Section <ref> provides an overview of the existing methods that are related to our own; and lastly, results are shown in Section <ref>.
§ BACKGROUND
The theoretical underpinnings of symmetries or invariance can be described using group theory <cit.>. In particular, we present the necessary theory of one-parameter subgroups <cit.> on which our method is based, following the logic of <cit.>.
§.§ One-parameter subgroups
We focus on learning invariances with respect to one-parameter subgroups of a Lie group G, which offer a natural way to describe continuous symmetries or invariances of functions on vector spaces.
A one-parameter subgroup of G is a differentiable homomorphism ϕ:ℝ→ G, that is, a differentiable map satisfying ϕ(t+s) = ϕ(t)ϕ(s) for all t,s∈ℝ.
Let the action of ϕ on the vector space X⊂ℝ^n be a transformation T:X×ℝ→ X that is continuous in x∈ X and t∈ℝ. Because of continuity, for sufficiently small t and some fixed x∈ X, the action is given by
T(x,t)≈ x+tA(x) where A(x):=∂ T(x,t)/∂ t|_t=0.
Note that this is equivalent to taking a first-order Taylor expansion in t around t=0.
§.§ Generators
In general, we can use A(x) in (<ref>) to construct what is known as the generator of a one-parameter subgroup ϕ of a Lie group G, that in turn will characterise an ordinary differential equation, the solution to which coincides with the action T on X.
Let C^∞(X) be the space of smooth functions from X to X. The generator of ϕ is defined as a linear differential operator L:C^∞(X)→ C^∞(X) such that
L = ∑_i=1^n(A(x))_i∂/∂ x_i
describing the vector field of the infinitesimal increment A(x)t in (<ref>), where ∂/∂ x_i are the unit vectors of X in the coordinate directions for i=1,…,n. It can be shown <cit.> that, for a fixed x∈ X, T(x,t) is the solution to the ordinary differential equation
dT(x,t)/dt=LT(x,t) where T(x,0)=x.
The solution to (<ref>) is the exponential T(x,t)=e^tLx where
e^tL:= ∑_k=0^∞(tL)^k/k!,
where L^k is the operator L applied k times iteratively.
For a one-parameter subgroup ϕ of a matrix Lie group G⊂ GL(n,ℝ) and a fixed x∈ X, it can be shown <cit.> that there exists a unique matrix A∈ℝ^n× n such that A(x)=Ax. This is a more restrictive approach as groups such as translations cannot be written as a matrix multiplication.
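As a minimal illustration (ours, not from the text), the SO(2) case makes the exponential map concrete: the rotation generator exponentiates to a rotation matrix, realizing T(x, t) = e^{tA}x.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])        # generator of SO(2): A(x) = Ax
t = np.pi / 2
R = expm(t * A)                    # rotation by 90 degrees
print(R)                           # ~ [[0, -1], [1, 0]]
print(R @ np.array([1.0, 0.0]))    # ~ [0, 1]
```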
§ METHOD
As in <cit.>, the semi-supervised symmetry detection setting that we consider consists of learning the generator L of a one-parameter subgroup ϕ from pairs of observations of the form {(x_i, x̅_i = T(x_i, t_i))}_i=1^N, where N is the number of observations and each t_i∈ℝ is drawn from some unknown distribution p(t). Not only do we attempt to learn the generator L, but also the unknown distribution p(t) of the parameters {t_i}_i=1^N.
§.§ Parametrisation of the generator
Deciding how to parametrise L has an effect on the structure of the model and ultimately on what one-parameter subgroups we are able to learn. For simplicity, consider one-parameter subgroups acting on X⊂ℝ^2, although this operator can be defined for higher-dimensional vector spaces. The generator L of ϕ is given as in Eq. (<ref>) and we parametrise A(x,y) as a linear operator in the basis {1,x,y} with a coefficient matrix A=α∈ℝ^2× 3, giving
L^α :=
(α_11 + α_12x + α_13y)∂/∂ x
+(α_21 + α_22x + α_23y)∂/∂ y.
In this particular basis, for different values of α, the generator L^α is able to express one-parameter subgroups of the affine group. For instance, the choice α_13 = -1, α_22 = 1 (all other entries zero) gives L = -y∂/∂ x + x∂/∂ y, the generator of rotations. This includes the “traditional" symmetries that are usually considered (translation, rotation, and isotropic scaling) and all other affine transformations[Alternatively, the constant terms can be thought of as the drift terms (i.e. translation) and the four others can be arranged into a diffusion matrix.]. This can be generalized to any functional form of the generator by augmenting the basis accordingly.
§.§ Discretisation and interpolation
The generator L^α is constructed as an operator that acts on a function f:ℝ^2→ℝ, given, in practice, by I∈ℝ^n× n such that I_ij=f(i, j) are evaluations of f on a regularly-sampled n× n grid M of points M_ij=(i, j)∈ℝ^2. We then vectorise I, obtaining a point in a vector space Ĩ∈ℝ^n^2 such that Ĩ_in+j:=I_ij, and construct the matrix operator L^α∈ℝ^n^2× n^2 as
L^α :=
(α_11 + α_12X_x + α_13X_y)∂/∂ X_x
+(α_21 + α_22X_x + α_23X_y)∂/∂ X_y,
acting on Ĩ, where X_x∈ℝ^n^2× n^2 and X_y∈ℝ^n^2× n^2 are diagonal coordinate operators whose diagonal entries are the x- and y-coordinates of the corresponding grid points, while ∂/∂ X_x and ∂/∂ X_y are also matrix operators in ℝ^n^2× n^2.
The exponential in (<ref>) and the action T coincides with the matrix exponential.
In order to define ∂/∂ X_x and ∂/∂ X_y as operators that transform by infinitesimal amounts at discrete locations, we require an interpolation function. The Shannon-Whittaker theorem <cit.> states that any square-integrable, piecewise continuous function that is band-limited in the frequency domain can be reconstructed from its discrete samples if they are sufficiently close and equally spaced. For the sake of interpolation, we also assume that the function is periodic.
Interpolation: 1D
In the case where M is a discrete set of n points in 1D, we have that I(i + n) = I(i) for all samples i=1,…,n. Shannon-Whittaker interpolation reconstructs the signal for all x∈ℝ as
I(x) = ∑_i=0^n-1I(i)Q(x-i), where
Q(x) = 1/n[ 1+2∑_p=1^n/2-1cos(2π p x/n)]
Differentiating Q with respect to x and evaluating it at every x_i∈ M gives an analytic expression for a vector field in ℝ^n, describing continuous changes in x at all n points <cit.>. This is precisely what ∂/∂ x or ∂/∂ y in (<ref>) are.
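The following sketch (ours, with an arbitrary grid size and test signal) makes the 1D construction concrete: differentiate Q analytically, assemble the resulting n × n derivative operator, and verify that its matrix exponential translates a band-limited periodic signal, since ∂/∂x generates translations:

```python
import numpy as np
from scipy.linalg import expm

n = 32
idx = np.arange(n)
p = np.arange(1, n // 2)

def dQ(x):
    # Q'(x) = -(4*pi/n^2) * sum_p p*sin(2*pi*p*x/n), from differentiating Q above
    return -(4*np.pi/n**2) * np.sin(2*np.pi*np.multiply.outer(x, p)/n) @ p

D = dQ(idx[:, None] - idx[None, :])      # D[a, b] = Q'(a - b)
I = np.exp(-0.5*((idx - 12.0)/3.0)**2)   # smooth, approximately band-limited bump
I_shift = expm(3.0 * D) @ I              # ~ samples of I(x + 3)
print(np.abs(I_shift - np.roll(I, -3)).max())   # small reconstruction error
```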
Interpolation: 2D
In the case where M is a grid of n× n points in 2D, we construct the n× n matrices of the partial derivatives of Q with respect to x and y, analogously to the 1D case, stacking them to construct the n^2× n^2 block diagonal matrices ∂/∂ X_x and ∂/∂ X_y. It is worth noting that alternative interpolation techniques can be used to obtain the operators and the method does not depend on any specific one.
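One possible construction of these operators (a sketch of ours, assuming the row-major vectorisation Ĩ_in+j = I_ij and reusing n, idx, and D from the 1D snippet above) uses Kronecker products:

```python
import numpy as np

I_n = np.eye(n)
Dx = np.kron(D, I_n)              # d/dx acts on the row index
Dy = np.kron(I_n, D)              # d/dy acts on the column index
Xx = np.kron(np.diag(idx), I_n)   # multiplication by the x coordinate
Xy = np.kron(I_n, np.diag(idx))   # multiplication by the y coordinate

# six basis operators multiplying the entries of alpha in L^alpha
basis = np.stack([Dx, Xx @ Dx, Xy @ Dx,    # {1, x, y} * d/dx
                  Dy, Xx @ Dy, Xy @ Dy])   # {1, x, y} * d/dy
```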
Two different architectures, the naive model and the latent model, are proposed to learn L^α and, in doing so, the action T.
§.§.§ Naive model
The coefficients α of L^α are approximated by fixed coefficients that are shared across the dataset, while the parameter t_i is approximated by t̂_i that depends on the input pair (x_i, x̅_i). We learn
* the coefficients α∈ℝ^2× 3 of the generator L^α and
* the parameters θ of an MLP f_θ that returns f_θ(x_i, x̅_i)=:t̂_i as a function of every input pair,
such that the solution to (<ref>) for L^α is approximated by
T̂(x_i, x̅_i):=e^f_θ(x_i, x̅_i)L^α x_i.
The model objective is then given by the reconstruction loss
ℒ_T(x_i, x̅_i)=||T̂_ϕ(x_i, x̅_i)-x̅_i||^2.
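A compact PyTorch sketch of this model (ours, not the authors' released implementation; layer sizes are arbitrary) could look as follows, with the six fixed basis operators passed in as a tensor. It also makes the cost of the batched matrix exponential explicit, which motivates the latent variant below.

```python
import torch
import torch.nn as nn

class NaiveModel(nn.Module):
    def __init__(self, basis_ops):                 # basis_ops: (6, n*n, n*n)
        super().__init__()
        self.register_buffer("B", basis_ops)
        self.alpha = nn.Parameter(torch.zeros(6))  # coefficients of L^alpha
        d = basis_ops.shape[-1]
        self.f = nn.Sequential(nn.Linear(2*d, 128), nn.ReLU(),
                               nn.Linear(128, 128), nn.ReLU(),
                               nn.Linear(128, 1))  # f_theta -> t_hat

    def forward(self, x, x_bar):                   # x, x_bar: (batch, n*n)
        L = torch.einsum("k,kij->ij", self.alpha, self.B)
        t = self.f(torch.cat([x, x_bar], dim=-1))            # (batch, 1)
        T = torch.matrix_exp(t[:, :, None] * L)              # (batch, d, d)
        x_hat = torch.einsum("bij,bj->bi", T, x)
        return ((x_hat - x_bar)**2).sum(-1).mean()           # loss L_T
```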
§.§.§ Latent model
While the model described above will prove to work sufficiently well for learning the coefficients α of L^α, the matrix exponential function in T̂ in (<ref>) can be costly to compute and difficult to optimise in high dimensions; consider that the cost of the matrix exponential in a single forward pass is roughly O(n^3) using the algorithm of <cit.>.
As a result, a different version of the model is proposed that incorporates an autoencoder for reducing dimension. The concept remains the same, but x_i is now mapped to some latent space Z⊂ℝ^n_Z for n_Z≪ n, such that the exponential is taken in a significantly lower dimension. This is done by an encoder h_ψ:X→ Z and a decoder d_ψ:Z→ X such that z_i=h_ψ(x_i) and x_i≈ d_ψ(z_i).
We learn
* the parameters ψ of an MLP autoencoder,
* the coefficients α̃∈ℝ^2× 3 of the generator L^α̃ for a one-parameter subgroup ϕ_Z acting on the latent space Z,
* the parameters θ of an MLP f_θ that returns f_θ(x_i, x̅_i)=:t̂_i as a function of every original input pair (x_i, x̅_i),
such that the solution to (<ref>) for L^α, the generator in the original space, is approximated by
T̂^Z(x_i, x̅_i)=d_ψ(e^f_θ(x_i, x̅_i)L^α̃h_ψ(x_i)).
It is important to note that enforcing good reconstruction of the autoencoder alone does not enforce the commutativity of the diagram in Figure <ref>. To make it commutative, we use an objective that is a weighted sum of multiple terms. A simple reconstruction term for the autoencoder on each input example
ℒ_R(x_i):=||d_ψ(h_ψ(x_i))-x_i||^2,
a transformation-reconstruction term in the original space
ℒ^X_T(x_i, x̅_i):=||T̂^Z_ϕ(x_i, x̅_i)-x̅_i||^2,
a transformation-reconstruction term in the latent space
ℒ^Z_T(x_i, x̅_i):=||e^f_θ(x_i, x̅_i)L^α̃h_ψ(x_i) - h_ψ(x̅_i)||^2,
and a regularisation term on the generator coefficients α̃. The overall loss of the latent model is
ℒ(x_i, x̅_i) = λ_R(ℒ_R(x_i)+ℒ_R(x̅_i))
+ λ_Xℒ^X_T(x_i, x̅_i)+λ_Zℒ^Z_T(x_i, x̅_i)
+λ_L||α̃||^2,
where λ_R,λ_X,λ_Z, λ_L∈ℝ are treated as
hyperparameters.
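A runnable sketch of this objective (ours; for brevity the latent generator is a free n_Z × n_Z matrix rather than the six-coefficient basis expansion of L^α̃, and the architecture sizes are arbitrary):

```python
import torch
import torch.nn as nn

nX, nZ = 784, 16
enc = nn.Sequential(nn.Linear(nX, 128), nn.ReLU(), nn.Linear(128, nZ))
dec = nn.Sequential(nn.Linear(nZ, 128), nn.ReLU(), nn.Linear(128, nX))
f = nn.Sequential(nn.Linear(2*nX, 128), nn.ReLU(), nn.Linear(128, 1))
L_lat = nn.Parameter(torch.zeros(nZ, nZ))   # latent generator (simplified)

def latent_loss(x, x_bar, lam=(1.0, 1.0, 1.0, 1e-3)):
    lam_R, lam_X, lam_Z, lam_L = lam
    z, z_bar = enc(x), enc(x_bar)
    t = f(torch.cat([x, x_bar], dim=-1))                     # (batch, 1)
    z_pred = torch.einsum("bij,bj->bi",
                          torch.matrix_exp(t[:, :, None] * L_lat), z)
    l_R = ((dec(z) - x)**2).sum(-1) + ((dec(z_bar) - x_bar)**2).sum(-1)
    l_X = ((dec(z_pred) - x_bar)**2).sum(-1)
    l_Z = ((z_pred - z_bar)**2).sum(-1)
    reg = L_lat.pow(2).sum()
    return (lam_R*l_R + lam_X*l_X + lam_Z*l_Z).mean() + lam_L*reg
```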
Recovering the group
It is important to note that the one-parameter subgroups corresponding to the generators L^α̃ and L^α are not necessarily the same; L^α is the generator of an action on X of the one-parameter subgroup ϕ, while L^α̃ is a different generator, corresponding to an action on Z of a different one-parameter subgroup ϕ_Z.
§.§ Uniqueness
For both the naive model in Section <ref> and the latent model in <ref>, the approximations t̂_i for the values of the parameters t_i require interpretation.
Both models parameterise T̂ or T̂^Z with the products t̂_iL^α or t̂_iL^α̃ respectively, where t̂_i = f_θ(x_i, x̅_i). While both the values of t̂_iL^α and t̂_iL^α̃ are unique for a given action on X and Z respectively, their decomposition is only unique up to a constant. Therefore, L^α or L^α̃ and t̂ approximate the generators and the parameter respectively up to a constant. Consequently, the one-parameter subgroup ϕ can only be deduced from the values of the individual coefficients in α relative to one another, rather than from their absolute values, and likewise for ϕ_Z and α̃. We therefore recover a scaled approximation of the distribution of t̂_i.
§.§ The most general setting
Suppose we are given a labelled dataset 𝒟 = {(x_i, c_i)}_i=1^N and a one-parameter subgroup ϕ. Then we call 𝒟 symmetric or invariant with respect to ϕ if the action of ϕ preserves the object identity of the data points, where by object identity we mean any property of the data that we might be interested in. For example, in the case of MNIST handwritten digits, rigid transformations preserve their labels [With the exception of the digit '9', which becomes a '6' if rotated 180 degrees.] and can therefore be considered symmetries of the dataset. Now suppose that every x_i in 𝒟 is acted on with a one-parameter subgroup ϕ_t to get T𝒟 = {(T(x_i, t_i), c_i)}_i=1^N. The most general, fully unsupervised symmetry detection setting consists of learning ϕ and characterizing the distribution of the parameter t from T𝒟 alone. The idea is that, under the assumption that points with the same label are sufficiently similar for the subgroup transformation to account for the important differences[Keeping MNIST hand-written digits as our paradigmatic example, digits with the same label differ by small transformations that account for handwriting style differences.], we can use labels to group data points and compare those data points using methods such as the one presented in this paper. We leave the fully unsupervised symmetry detection setting for future work, although we emphasize that the proposed method can, in principle, be used in such a setting without substantial changes to the architecture.
§ EXPERIMENTS
§.§ Experiment setting
In practice, we experiment with a dataset of MNIST digits transformed with either 2D rotations or translations in one direction. To test the method's ability to learn distributions of these transformations, for each one-parameter subgroup ϕ∈{SO(2),T(2)} we construct a dataset {x_i,T(x_i,t_i)}_i=1^N by sampling the parameters t_i∈ℝ from various multimodal distributions.
As in <cit.>, the dataset is composed of signals I: M ⟶ℝ regularly sampled from a discrete grid of n^2 points (x,y)∈ℝ^2 for n=28. The signals I are vectorised into points in ℝ^784 as described in Section <ref>. The implementation of the naive model is available at https://github.com/victoria-klein/learning-lie-group-symmetries.git.
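A possible way to generate such pairs is sketched below (ours; the mode locations, widths, and the use of scipy.ndimage are illustrative assumptions, not the released implementation).

```python
import numpy as np
from scipy.ndimage import rotate, shift

def make_pairs(images, subgroup="SO(2)", n_modes=2, seed=0):
    """Build (x_i, T(x_i, t_i)) pairs with t_i drawn from a multimodal mixture."""
    rng = np.random.default_rng(seed)
    modes = rng.choice(n_modes, size=len(images))
    centers = np.linspace(-60.0, 60.0, n_modes)         # e.g. degrees or pixels
    t = rng.normal(loc=centers[modes], scale=5.0)       # one mixture mode per example
    pairs = []
    for img, t_i in zip(images, t):
        if subgroup == "SO(2)":
            out = rotate(img, angle=t_i, reshape=False, mode="nearest")
        else:  # T(2): translation in one direction
            out = shift(img, (0.0, t_i / 10.0), mode="nearest")
        pairs.append((img.ravel(), out.ravel()))        # vectorise to R^784
    return pairs, t
```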
§.§ Naive model experiments
The naive model architecture outlined in <ref> consists of a fully-connected, 3-layer MLP for f_θ that was trained jointly with the coefficients α_ij using Adam <cit.> with a learning rate of 0.001. Given the disproportionate number of trainable parameters in f_θ compared to the 6 coefficients in α, updating α_ij roughly 10 times for every update of θ in f_θ was found to be beneficial during training; one possible implementation of this schedule is sketched below.
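One way to realise this asymmetric schedule is with two optimisers, reusing the NaiveModel names from the sketch above (the 10:1 ratio is the rough value reported; treat the loop as an illustration).

```python
import torch

opt_alpha = torch.optim.Adam([model.alpha], lr=1e-3)
opt_theta = torch.optim.Adam(model.f_theta.parameters(), lr=1e-3)

for step, (x, x_bar) in enumerate(loader):  # loader yields vectorised pairs
    loss = loss_T(model, x, x_bar)
    opt_alpha.zero_grad()
    opt_theta.zero_grad()
    loss.backward()
    opt_alpha.step()          # alpha is updated at every step
    if step % 10 == 0:
        opt_theta.step()      # theta is updated roughly once per 10 alpha-steps
```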
Coefficients Figure <ref> shows the evolution of α_ij during training. It can be seen that after a few hundred steps, the coefficients α_ij that do not correspond to the infinitesimal generator of the symmetry expressed by the dataset drop to zero, while those that do settle to values compatible with those of the ground truth generator L.
§.§ Latent model experiments
The latent model outlined in <ref> consists of a fully-connected, 3-layer MLP f_θ, as in (<ref>), to approximate t̂, and two fully-connected, 3-layer MLPs with decreasing/increasing hidden dimensions for the encoder h_ψ and the decoder d_ψ. We set the latent dimension to n_Z=25. Similar to the naive model experiment above, f_θ was trained jointly with the coefficients α_ij using Adam <cit.> with learning rate 0.001.
Parameters
After every epoch (roughly 500 steps), the outputs t̂=f_θ(x_i, x̅_i) were collected in a histogram to estimate p(t̂). Figure <ref> shows how the distribution of t̂ changes during training and how multimodal distributions are clearly recovered, showing the same number of modes as the ground truth distribution from which the transformations were sampled.
§ RELATED WORK
Symmetries in Neural Networks
Numerous studies have tackled the challenges associated with designing neural network layers and/or models that are equivariant with respect to specific transformations <cit.>. These transformations include continuous symmetries such as scaling <cit.>, rotation on spheres <cit.>, local gauge transformations <cit.> and general E(2) transformations on the Euclidean plane <cit.>, as well as discrete transformations like permutations of sets <cit.> and reversing symmetries <cit.>. Another line of research focuses on establishing theoretical principles and practical techniques for constructing general group-equivariant neural networks. Research in these areas shows improved performance on tasks related to symmetries, but nonetheless requires prior knowledge about the symmetries themselves.
Symmetry Detection
Symmetry detection aims to discover symmetries from observations, a learning task that is of great importance in and of itself. Detecting symmetries in data not only lends itself to more efficient and effective machine learning models but also to discovering fundamental laws that govern data, a long-standing area of interest in the physical sciences. Learned symmetries can then be incorporated after training in equivariant models or used for data augmentation for downstream tasks. In physics and dynamical systems, the task of understanding and discovering symmetries is a crucial one; in classical mechanics and more generally Hamiltonian dynamics, continuous symmetries of the Hamiltonian are of great significance since they are associated, through Noether's theorem <cit.>, with conservation laws such as conservation of angular momentum or conservation of charge.
The first works on learning symmetries of one-parameter subgroups from observations were <cit.> and <cit.>, which outline MAP-inference methods for learning infinitesimally small transformations. <cit.> propose a transformation-specific smoothing operation of the transformation space to overcome the issue of a highly non-convex reconstruction objective that includes an exponential map. These methods are close to ours in that we also make use of the exponential map to obtain group elements from their Lie algebra. Despite this, <cit.> do not consider the task of characterizing the distribution of the parameter of the subgroup, nor do they consider the whole of pixel-space, using small patches instead. <cit.> focus on disentangling and learning the distributions of multiple compact “toroidal” one-parameter subgroups in the data.
Neural Symmetry Detection A completely different approach to symmetry discovery is that of <cit.>, whose model uses a group invariant function known as the bispectrum to learn group-equivariant and group-invariant maps from observations. <cit.> consider a task similar to ours, attempting to learn groups with respect to which the data is invariant; however, the objective places constraints directly on the network parameters as well as the distribution of transformation parameters with which the data is augmented. Alternatively, <cit.> require knowledge of the specific transformation parameter for each input pair (differing by that transformation), unlike our model, where no knowledge of the one-parameter group is used in order to find the distribution of the transformation parameter.
Latent Transformations Learning transformations of a one-parameter subgroup in latent space (whether that subgroup be identical to the one in pixel space or not) has been accomplished by <cit.> and <cit.>. Nevertheless, these works either presuppose local structure in the data by using CNNs instead of fully-connected networks or focus on disentangling interpretable features instead of directly learning generators that can be used as an inductive bias for a new model.
In contrast to the works mentioned above, we propose a promising framework in which we can simultaneously
* perform symmetry detection in pixel-space, without assuming any inductive biases are present in the data a priori,
* parametrize the generator such that non-compact groups (e.g. translation) can be naturally incorporated,
* and learn both the generator and the parameter distributions.
§ DISCUSSION
In this work we proposed a framework for learning one-parameter subgroups of Lie group symmetries from observations. Our method uses a neural network to predict the parameter of every transformation that has been applied to the data points, and learns the coefficients of a linear combination of pre-specified generators. We show that our method can learn the correct generators for a variety of transformations as well as characterize the distribution of the parameter that has been used for transforming the dataset.
While the goal of learning both the coefficients of the generator and the distribution of the transformation parameter has not been accomplished by a single model in this work, modifying our existing framework to do so is a priority for future work. In addition, the proposed method lends itself well to being composed to form multiple layers, which can then be applied to datasets that express multiple symmetries. By doing so, ideally, each layer would learn one individual symmetry. We leave this study, and the more general, fully unsupervised setting described in <ref>, for future work.
§ ACKNOWLEDGEMENTS
This publication is based on work partially supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1) and the Dorris Chen Award granted by the Department of Mathematics, Imperial College London.
|
http://arxiv.org/abs/2307.01173v1
|
20230703173241
|
Banach function spaces done right
|
[
"Emiel Lorist",
"Zoe Nieraeth"
] |
math.FA
|
[
"math.FA",
"math.CA",
"46E30"
] |
In this survey, we discuss the definition of a (quasi-)Banach function space. We advertise the original definition by Zaanen and Luxemburg, which does not have various issues introduced by other, subsequent definitions. Moreover, we prove versions of well-known basic properties of Banach function spaces in the setting of quasi-Banach function spaces.
MSC (2020): 46E30
§ INTRODUCTION
Banach spaces of measurable functions, like (weighted) Lebesgue spaces, Orlicz spaces, Lorentz spaces, Morrey spaces and tent spaces play a central role in many areas of mathematical analysis. These spaces all fall within the broader class of Banach function spaces, which have the property that the pointwise order between functions is in some sense compatible with the norm.
Various definitions of Banach function spaces exist in the literature. A popular choice can be phrased as follows: A Banach function space X is a subspace of L^0(Ω),
the space of measurable functions f:Ω→ℂ for a σ-finite measure space (Ω,μ), equipped with a norm · _X such that it satisfies the following properties:
* Ideal property: If f∈ X and g∈ L^0(Ω) with |g|≤|f| a.e., then g∈ X with g_X≤f_X;
* Fatou property: If 0≤ f_n ↑ f for (f_n)_n≥ 1 in X and sup_n≥ 1f_n_X<∞, then f ∈ X and f_X=sup_n≥ 1f_n_X;
the latter of which implies that X is complete. Moreover, to ensure that X contains a sufficient number of functions, it is assumed that, for any measurable set E ⊆Ω of finite measure, one has
_E ∈ X,
∫_E fμ<∞ for all f∈ X.
The definition of a Banach function space with these properties is often attributed to the book by Bennett and Sharpley <cit.>.
The ideal property is the most fundamental property of a Banach function space, making sure that the natural partial order on L^0(Ω) is compatible with the norm on X.
The Fatou property can be omitted in the definition of a Banach function space. In this case, one has to ensure the completeness of X separately, either by assuming it explicitly or through a notion called the Riesz–Fischer property. This is the approach which we take in this survey, see Subsection <ref>.
Originally, the Fatou property was introduced as part of the definition in the PhD thesis of Luxemburg <cit.>, but it was later removed in the series of papers by Luxemburg and Zaanen <cit.> and the subsequent book by Zaanen <cit.>. Unfortunately, it was then reintroduced in <cit.>. To give examples where this is problematic, proper closed subspaces of Banach function spaces such as, e.g., c_0⊊ℓ^∞ do not satisfy the Fatou property (see Proposition <ref>). Nonetheless, for example in applications in harmonic analysis, this situation is somewhat pathological. Indeed, any space of functions with the ideal property, but without the Fatou property, can be continuously embedded in a space that does have the Fatou property (see Proposition <ref>). Moreover, the Fatou property ensures that the integral pairing with functions in the Köthe dual X',
i.e. the space
X':={g∈ L^0(Ω):fg∈ L^1(Ω) for all f∈ X},
with
g_X':=sup_f_X=1fg_L^1(Ω),
recovers the norm of X.
recovers the norm of X. Since this allows for the use of duality arguments typical in many areas of mathematical analysis, the Fatou property is therefore highly desirable.
The main problems with the above definition of a Banach function space from <cit.>, when working in areas such as harmonic analysis, are properties
(<ref>) and (<ref>). For example, the weighted Lebesgue space L^p(^d,w) for a Muckenhoupt weight w ∈ A_p may not be a Banach function space over ^d with the Lebesgue measure in the sense of the definition stated above, see <cit.>. To circumvent this, in the work <cit.> the authors included these spaces by considering (<ref>) and (<ref>) with respect to the measure w dx rather than with respect to the Lebesgue measure. This, however, is inadequate, as it still does not include many important spaces, such as Morrey spaces. Indeed, it was shown in <cit.> that these spaces do not satisfy (<ref>), even in the unweighted setting. In recent literature, this issue has led the authors of <cit.> to develop certain theory for Morrey spaces in Chapter 7 and afterwards prove analogous results in Banach function spaces (with assumptions (<ref>) and (<ref>)) in Chapter 8. The results in Chapter 7 could have been regarded as a special case of the results in Chapter 8 if a definition of Banach function spaces that includes Morrey spaces had been chosen.
A second issue with (<ref>) and (<ref>) arises when one wants to treat quasi-Banach function spaces, i.e. replacing the norm on X by a quasi-norm. In this setting, the condition (<ref>) is typically far too restrictive, as can already be seen when considering L^p(^d) for 0<p<1. However, omitting only (<ref>) leads to the asymmetric situation in which the Köthe dual X'
does not necessarily contain the required indicator functions (see <cit.> for an illustration of this phenomenon). Moreover, omitting both (<ref>) and (<ref>) instead also leads to pathological situations, as · _X' may only be a semi-norm in this case, see Subsection <ref>.
Recognizing these problems with the definition of a (quasi)-Banach function space including (<ref>) and (<ref>), the authors of <cit.> proposed a solution to these issues by introducing so-called ball quasi-Banach function spaces, in which the arbitrary measurable sets E in (<ref>) and (<ref>) are replaced by metric balls. This definition has since been adopted by various authors, see, e.g., <cit.>. However, morally speaking, the definition of a (quasi)-Banach function space should be a measure theoretic one, i.e. not referencing any metric structure of Ω. This is, for example, of paramount importance when working on the intersection between harmonic analysis and probability theory, as the natural object to work with in that setting is a probability space without any metric structure.
Furthermore, there is no need to define a new notion (like a ball quasi-Banach function space) in order to solve the issue with the assumptions in (<ref>) and (<ref>). Indeed, the solution is readily available in the literature, dating all the way back to the works of Zaanen and Luxemburg <cit.>: one should replace (<ref>) and (<ref>) by the assumption that X is saturated:
* Saturation property: For every measurable E⊆Ω of positive measure, there exists a measurable F⊆ E of positive measure with _F∈ X.
Defining quasi-Banach function spaces using the ideal and saturation properties yields a purely measure-theoretic definition, which includes all aforementioned specific function spaces as examples. Moreover, the Köthe dual of such a space automatically satisfies the ideal, Fatou, and, if X is a Banach function space, the saturation properties.
It should be noted that the saturation property has various equivalent formulations. It is, for example, equivalent to either of the following assumptions (see Proposition <ref>):
* There exists a u ∈ X with u>0 a.e.;
* There is an increasing sequence of sets F_n⊆Ω with _F_n∈ X and ⋃_n=1^∞ F_n=Ω;
The function u in assumption <ref> is called a weak order unit. Generally, its utility is in that the ideal property of X implies that u_E∈ X for all measurable sets E⊆Ω. Thus, arguments that require (<ref>) can still be done by simply multiplying each function in the space by u^-1. We detail this procedure in Section <ref>. As a matter of fact, it was an observation by Kalton that there is a weight 0<w∈ L^1(Ω) so that, with respect to the measure w dμ,
the condition (<ref>) is also satisfied by this space (see Proposition <ref>).
The assumption in <ref> is actually the assumption used in <cit.>. Notably, almost 70 years later, the authors of the recent book <cit.> seem to have independently rediscovered the exact formulation of the assumption <ref>, calling the resulting class of spaces generalized Banach function spaces. However, it would historically be more accurate to refer to this class simply as Banach function spaces, whereas the class of spaces with properties (<ref>) and (<ref>) should be called restricted Banach function spaces.
The goal of this survey is two-fold.
* First of all, we would like to advertise the definition of a (quasi)-Banach function space using the saturation property instead of (<ref>) and (<ref>) and the Fatou property as optional assumption.
* Secondly, we will provide versions of well-known basic properties of Banach function spaces in the setting of quasi-Banach function spaces.
Our claim to originality in this survey is rather humble. Most of our discussion for Banach function spaces can, for example, also be found in <cit.>. However, we are not aware of a comprehensive reference work for the quasi-Banach function space case (see <cit.> for some partial results), and hope that this survey may serve as a solid introduction for anyone working with quasi-Banach function spaces. In particular, multilinear harmonic analysis has in recent years become a very active research area, in which the quasi-range naturally makes its appearance.
§ QUASI-BANACH FUNCTION SPACES
In this section we will introduce quasi-Banach function spaces and discuss their defining properties in detail.
Let (Ω,μ) be a measure space, which will always be assumed to be σ-finite. Let L^0(Ω) denote the space of measurable functions on (Ω,μ). Let X ⊆ L^0(Ω) be a complete, quasi-normed vector space. Denote the quasi-norm by ·_X and the optimal constant K≥ 1 such that
f+g_X≤ K(f_X+g_X), f,g ∈ X,
by K_X. The space X is called a quasi-Banach function space over (Ω,μ) if it satisfies the following properties:
* Ideal property: If f∈ X and g∈ L^0(Ω) with |g|≤|f|, then g∈ X with g_X≤f_X.
* Saturation property: For every measurable E⊆Ω of positive measure, there exists a measurable F⊆ E of positive measure with _F∈ X.
If · _X is a norm, i.e., if K_X=1, then X is called a Banach function space over (Ω,μ).
Since the ideal property is inherently tied to the choice of quasi-norm ·_X on the space X, we sometimes emphasize this by writing (X,·_X) rather than X.
* Instead of introducing a quasi-Banach function space as a complete quasi-normed space with the ideal and saturation properties, one can equivalently start by defining a function quasi-norm ρ: L^0(Ω)_+ → [0,∞] satisfying corresponding versions of these properties and afterwards setting
X:={f ∈ L^0(Ω): ρ(f)<∞}, f_X:= ρ(f).
The equivalence of these approaches can be seen
by setting
ρ(f) := f_X for f ∈ X, and ρ(f) := ∞ for f ∉ X.
* As we will show in Section <ref>, a quasi-normed vector space X⊆ L^0(Ω) is complete if and only if it has the Riesz–Fischer property:
* Riesz–Fischer property: If (f_n)_n≥ 1 in X and ∑_n=1^∞ K_X^nf_n_X<∞, then ∑_n=1^∞ f_n∈ X with ∑_n=1^∞ f_n_X≤ K_X∑_n=1^∞ K_X^nf_n_X.
In many examples, X actually satisfies the stronger Fatou property:
* Fatou property: If 0≤ f_n ↑ f for (f_n)_n≥ 1 in X and sup_n≥ 1f_n_X<∞, then f ∈ X and f_X=sup_n≥ 1f_n_X.
One readily checks that the Fatou property implies the Riesz–Fischer property and thus completeness. Indeed, by the quasi-triangle inequality and induction on N≥ 1 we have
∑_n=1^N f_n_X≤∑_n=1^N K_X^nf_n_X.
The Riesz–Fischer property then follows by using the Fatou property on the partial sums.
* In some parts of the literature, the underlying measure space (Ω,μ) is assumed to be complete. We do not assume completeness as this assumption is superfluous in the following sense. Suppose that (Ω,μ), with the σ-algebra Σ, is not complete. Denoting its completion (which is again σ-finite) by Σ^∗ with measure μ^∗, there is a natural one-to-one correspondence between the measurable functions with respect to Σ and with respect to Σ^∗. Indeed, for each f^∗ that is measurable with respect to Σ^∗ there exists an f that is measurable with respect to Σ such that f^∗=f μ-a.e. Thus, any quasi-Banach function space over (Ω,μ) may as well be considered over (Ω,μ^∗).
By the Aoki–Rolewicz theorem <cit.>
f := inf{(∑_k=1^n f_k_X^p)^1/p: f_1,…,f_n ∈ X such that ∑_k=1^n f_k = f}
is an equivalent p-norm on X for p ∈ (0,1] with 2^1/p = 2K_X, i.e. · is a quasi-norm on X such that
f+g^p ≤f^p + g^p, f,g∈ X
4^-1/pf_X ≤f≤f_X, f ∈ X,
see, e.g., <cit.>. It is a straightforward check to see that (X, · ) is again a quasi-Banach function space.
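Explicitly, solving 2^{1/p} = 2K_X for the Aoki–Rolewicz exponent gives the following (a routine computation added here for convenience):

```latex
\frac{1}{p} = \log_2(2K_X) = 1 + \log_2 K_X
\quad\Longrightarrow\quad
p = \frac{1}{1 + \log_2 K_X},
% so p = 1 when K_X = 1 (the normed case), and p decreases as K_X grows.
```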
§.§ Completeness
Let us discuss the defining properties of a quasi-Banach function space X in some detail. To start, we note that the assumed completeness can be reformulated as the Riesz-Fischer property (see Remark <ref><ref> for the definition). Indeed, if a quasi-Banach space X is complete, then it satisfies the Riesz–Fischer property. To see this, note that by the quasi-triangle inequality and induction on N≥ 1 we have
∑_n=1^N f_n_X≤∑_n=1^N K_X^nf_n_X.
A standard argument then shows that the partial sums F_N:=∑_n=1^N f_n are a Cauchy sequence in X, proving that F:=∑_n=1^∞ f_n∈ X. The assertion then follows from noting that
F_X≤ K_XF-F_N_X+K_X∑_n=1^∞ K_X^nf_n_X
and letting N→∞. The converse statement is also true. For a proof, we refer the reader to <cit.>.
Let X be a quasi-normed space. Then X is complete if and only if X satisfies the Riesz–Fischer property.
For X ⊆ L^0(Ω) the Riesz–Fischer property ensures that convergence in the norm of X implies local convergence in measure, i.e., the embedding X↪ L^0(Ω) is continuous. As L^0(Ω) equipped with the topology of local convergence in measure is a Hausdorff space, this ensures uniqueness of limits. More precisely, convergence in the quasi-norm of X implies
pointwise a.e. convergence for a subsequence, so that the pointwise a.e. limit and the limit in the quasi-norm of X coincide whenever both exist.
Let X be a quasi-Banach function space over (Ω,μ), let (f_n)_n≥ 1 be a sequence in X and let f ∈ X.
* If (f_n)_n≥ 1 is Cauchy in X, then (f_n)_n≥ 1 is locally Cauchy in measure.
* If (f_n)_n≥ 1 converges to f in X, then (f_n)_n≥ 1 converges locally in measure to f.
In particular, if f_n → f in X and f_n → g pointwise a.e., then f=g a.e.
We will only prove <ref>; the proof of <ref> is similar. Fix a measurable E ⊆Ω with μ(E)<∞, let ε>0 and define for j,k ≥ 1
A_j,k = x ∈ E: f_j(x)-f_k(x)≥ε.
We need to show that lim_j,k →∞μA_j,k=0.
Since (f_n)_n≥ 1 is Cauchy in X, we know by the ideal property that
lim_j,k →∞_A_j,k_X ≤lim_j,k →∞ε^-1f_j-f_k_X = 0.
Suppose that μ(A_j,k)↛0 for j,k →∞. Then we can find a δ>0 and a sequence of measurable sets (B_n)_n≥ 1 in {A_j,k} such that μ(B_n) ≥δ for all n≥ 1 and _B_n_X → 0 for n→∞. By considering a subsequence if necessary, we may furthermore assume that _B_n_X ≤ 2^-nK_X^-n-1 for all n≥ 1.
By the Riesz–Fischer property, it follows for m≥ 1 that _⋃_n=m^∞ B_n∈ X with
_⋃_n=m^∞ B_n_X ≤ K_X∑_n=m^∞ K_X^n-m+1_B_n_X ≤∑_n=m^∞ 2^-n = 2^-m+1.
Define B = ⋂_m=1^∞⋃_n=m^∞ B_n.
Then we have, by the ideal property, that for all m≥ 1
_B_X ≤_⋃_n=m^∞ B_n_X≤ 2^-m+1
and thus _B_X = 0. This means that _B =0 a.e. and consequently μB=0. But μB_n≥δ for all n≥ 1 and therefore μ⋃_n=m^∞ B_n≥δ for all m≥ 1. Since
μ⋃_n=1^∞ B_n≤μE<∞,
we conclude that μB≥δ, a contradiction. Thus, we must have lim_j,k →∞μA_j,k=0.
We can now give a short alternative proof that a quasi-normed space X⊆ L^0(Ω) with the ideal and Riesz–Fischer property is complete. Indeed, let (f_n)_n≥ 1 be a Cauchy sequence in X. Then (f_n)_n≥ 1 is locally Cauchy in measure by Proposition <ref> and we can thus find a subsequence (f_n_k)_k≥ 1 and an f ∈ L^0(Ω) such that f_n_k→ f pointwise a.e. By Lemma <ref>, we know that f ∈ X and
lim_m →∞f-f_m_X ≤lim_m →∞lim inf_k →∞ f_n_k-f_m_X=0.
This means that f_m → f in X for m →∞, so X is complete.
The local Cauchy (or convergence) in measure in Proposition <ref> cannot be replaced by global Cauchy (or convergence) in measure. Indeed, take Ω = ℝ equipped with the Lebesgue measure dx and define w(x) := e^{-x}. Let X = L^1_w(ℝ), i.e. the space of all f ∈ L^0(ℝ) such that
f_L^1_w(ℝ) := ∫_ℝ |f| w dx <∞.
The sequence of functions (_[n,n+1])_n≥ 1 converges to zero in X, but the set
{x ∈ℝ: |_[j,j+1](x)-_[k,k+1](x)|≥ 1}
has Lebesgue measure 2 for j≠ k, i.e. (_[n,n+1])_n≥1 is not globally Cauchy in measure.
§.§ The ideal property
The first property of a quasi-Banach function space is called the ideal property, since it ensures that X is a so-called order ideal in the vector lattice L^0(Ω). Combined with the completeness of X, this implies that X
is a quasi-Banach lattice. Thus, any result for (quasi)-Banach lattices also holds for (quasi)-Banach function spaces. We refer the reader to e.g. <cit.> for a thorough study of Banach lattices.
Furthermore, let us note that it follows from the ideal property that for any f ∈ L^0(Ω) we have f∈ X if and only if |f|∈ X with |f|_X=f_X.
§.§ The saturation property
The second property of a quasi-Banach function space X, the saturation property, has already been discussed in the introduction. It is imposed to avoid trivialities. Indeed,
it ensures that there are no measurable sets E ⊆Ω on which all elements of X vanish a.e. This assumption is not restrictive, as any quasi-normed vector space of measurable functions on Ω can be made saturated by removing the part of Ω on which all functions in X vanish a.e. (cf. <cit.>).
To illustrate this, consider the space X⊆ L^0() defined as the subspace of L^1() consisting of the integrable functions supported in [0,1]. When equipped with the norm
f_X:=∫_0^1|f| dx,
the space (X, ·_X) is complete and satisfies the ideal property, but not the saturation property. We also note that any g∈ L^0() supported outside of [0,1] satisfies ∫_|fg| dx=0 for all f∈ X. These problems can easily be rectified by considering this as a space over Ω=[0,1] rather than over Ω=, in which case it is saturated, and we simply have X=L^1([0,1]).
There are various equivalent formulations of the saturation property. In particular, the existence of a weak order unit is often useful in applications, see also Subsection <ref>.
Let (Ω,μ) be a σ-finite measure space and let X ⊆ L^0(Ω) be a quasi-Banach space satisfying the ideal property.
Then the following are equivalent:
* X satisfies the saturation property;
* There is an increasing sequence of sets F_n⊆Ω with _F_n∈ X and ⋃_n=1^∞ F_n=Ω;
* X has a weak order unit, i.e., there is a u∈ X with u>0 a.e.;
* If g ∈ L^0(Ω) with ∫_Ω|fg| dμ= 0 for all f ∈ X, then g = 0 a.e.
We start by proving <ref>⇒<ref>. Assume that X has the saturation property. Since Ω is σ-finite, it follows from <cit.> that there exists an increasing sequence of measurable sets F_n⊆Ω with _F_n∈ X and ⋃_n=1^∞ F_n=Ω. Note that while this result is stated for normed spaces, the proof also holds without change in quasi-normed spaces.
For <ref>⇒<ref>, we define
u:=∑_n=1^∞(2K_X)^-n·_F_n/(1+_F_n_X).
By the Riesz–Fischer property, we have u∈ X. So <ref> follows from the fact that u>0 on Ω.
For <ref>⇒<ref>, let g ∈ L^0(Ω) such that fg_L^1(Ω)=0 for all f∈ X. In particular, we have u g_L^1(Ω)=0. This means that u g=0 a.e. and hence, since u>0 a.e., we must have g=0 a.e., as desired.
It remains to show <ref>⇒<ref>. Assume that X does not have the saturation property. Then there is a set E⊆Ω of positive measure such that _F∉ X for all F⊆ E of positive measure. For f∈ X, consider the sets F_n:={x∈ E:|f(x)|>1/n} for n≥ 1. Since
_F_n≤ n|f|∈ X,
it follows from the ideal property of X that _F_n∈ X for all n≥ 1. But this means that μ(F_n)=0 and hence,
μ{x∈ E:|f(x)|>0}=μ⋃_n=1^∞ F_n≤∑_n=1^∞μ(F_n)=0.
As this means that every function f∈ X vanishes a.e. on E, we have
∫_Ω |f| _E dμ = 0 for all f ∈ X.
Since _E≠0, this proves the result by contraposition.
Ball quasi-Banach function spaces, as introduced in <cit.>, satisfy the saturation property. Indeed, the sets F_n=B(x,n) satisfy property <ref> in Proposition <ref>, where B(x,n) denotes the ball around a point x∈Ω with radius n. In particular, every ball quasi-Banach function space is a quasi-Banach function space. Since the measure space over which a quasi-Banach function space is defined does not necessarily need to have a metric structure, the notion of a quasi-Banach function space is more general than that of a ball quasi-Banach function space.
§ PROPERTIES OF BANACH FUNCTION SPACES
Having discussed the definition of quasi-Banach function spaces at length in the previous section, we will discuss some basic properties of quasi-Banach function spaces in this section. We start by introducing the notion of Köthe duality, which is the notion of duality within the category of Banach function spaces.
Next, we will discuss important lattice properties that a Banach function space has through analogues of the classical convergence theorems in integration theory. We will start by discussing the Fatou property, which is the replacement of the monotone convergence theorem and Fatou's lemma. Then we discuss the notion of order-continuity, which serves as a replacement of the dominated convergence theorem.
Finally, we will discuss how the theory naturally includes weighted spaces (such as weighted Lebesgue spaces) in its definition through the saturation property. We show two ways of considering weights; by adding them to the underlying measure space, or by considering them as a multiplier.
§.§ Duality
Duality arguments play an important role in mathematical analysis. For example, when working in L^p(^d), duality often allows one to translate results for 1≤ p≤ 2 to 2≤ p≤∞ and vice versa. Unfortunately, the Banach dual X^* of a quasi-Banach function space X is not necessarily isomorphic to a space of functions. For example, the Banach dual of L^∞(^d) is a space of measures.
Motivated by this phenomenon, we define the Köthe dual or associate space X' of a quasi-Banach function space X ⊆ L^0(Ω) as the space
X':={g∈ L^0(Ω):fg∈ L^1(Ω) for all f∈ X}.
For g ∈ X' we define
g_X':=sup_f_X=1fg_L^1(Ω),
which is a norm on X'. Indeed, as shown in Proposition <ref>, the saturation property ensures (and is equivalent to the statement) that g_X' = 0 if and only if g=0 a.e. Moreover, g_X'<∞. Indeed, suppose g_X'=∞. Then, for each n≥ 1, there is an f_n∈ X with f_n_X=1 for which
f_n g_L^1(Ω)>K_X^n n^3.
By the Riesz–Fischer property, we have
F:=∑_n=1^∞|f_n|/K_X^n n^2∈ X.
However, since also
Fg_L^1(Ω)≥f_n g_L^1(Ω)/K_X^n n^2>n
for all n≥ 1, we deduce that Fg∉ L^1(Ω). By contraposition, we conclude that g_X'<∞ for all g ∈ X'.
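As a standard illustration of the Köthe dual (added here; it is the classical consequence of Hölder's inequality, not a result specific to this survey):

```latex
% Example: for X = L^p(\Omega) with 1 < p < \infty and 1/p + 1/p' = 1,
% Hölder's inequality gives
\|g\|_{X'} = \sup_{\|f\|_{L^p(\Omega)}=1} \|fg\|_{L^1(\Omega)}
          \le \|g\|_{L^{p'}(\Omega)},
% while testing against f = |g|^{p'-1}/\|g\|_{L^{p'}(\Omega)}^{p'-1}
% (which satisfies \|f\|_{L^p(\Omega)} = 1) yields the reverse inequality, so
(L^p(\Omega))' = L^{p'}(\Omega) \quad\text{with equal norms;}
% the endpoint cases p = 1 (dual L^\infty) and p = \infty (dual L^1) follow similarly.
```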
The ideal property of L^1(Ω) implies that X' also satisfies the ideal property. Moreover, it satisfies the Riesz–Fischer property and, hence, is complete. Indeed, if (g_n)_n≥ 1 is a sequence in X' for which ∑_n=1^∞g_n_X'<∞, then, by the monotone convergence theorem, for every f∈ X we have ∑_n=1^∞ g_nf∈ L^1(Ω) with
∑_n=1^∞ g_n f_L^1(Ω)≤∑_n=1^∞g_nf_L^1(Ω)≤f_X∑_n=1^∞g_n_X'.
Thus, ∑_n=1^∞ g_n∈ X' with ∑_n=1^∞ g_n _X'≤∑_n=1^∞g_n_X', proving that X' has the Riesz–Fischer property. As a matter of fact, an analogous argument shows that X' satisfies the stronger Fatou property (see Remark <ref><ref> for the definition).
The map f ↦∫_Ω fg μ defines a bounded linear functional on X for every g ∈ X'. Thus, we can naturally identify X' with a closed subspace of X^*. Indeed, we have the following result:
Let X be a quasi-Banach function space over (Ω,μ). The embedding ι X'↪ X^∗ given by
ι(g)(f):=∫_Ωfg dμ, f ∈ X,
satisfies
ι(g)_X^∗=g_X' for all g ∈ X'.
For g ∈ X' we have
ι(g)_X^∗=sup_f_X=1|∫_Ωfg dμ|≤sup_f_X=1∫_Ω|fg| dμ=g_X',
so it remains to show g_X'≤ι(g)_X^∗.
Fix f∈ X and define f̃:=|fg|g^-1 where g is non-zero and f̃:=0 elsewhere. Then f̃g=|fg| and |f̃|≤|f|, so that by the ideal property of X we have f̃∈ X with
∫_Ω|fg| dμ=|ι(g)(f̃)|≤ι(g)_X^∗f̃_X≤ι(g)_X^∗f_X.
Taking a supremum over all f∈ X with f_X=1 proves the result.
Next, we wish to determine when X' is a Banach function space. As X' has the ideal property and is complete, it suffices to check that it has the saturation property to conclude that it is a Banach function space. This, however, turns out to not always be the case. Indeed, for X=L^p(Ω) with 0<p<1 over a nonatomic measure space we have L^p(Ω)'={0}, which is not saturated. Our goal is to characterize for which quasi-Banach function spaces X the associate space X' is a Banach function space.
When X is a Banach function space, X^* is non-trivial by the Hahn–Banach theorem. In fact, it similarly turns out that in this case X' is automatically saturated, and, hence, is a Banach function space:
Let X be a Banach function space over (Ω,μ). Then X' is also a Banach function space over (Ω,μ).
By the above discussion, we need only prove that X' satisfies the saturation property. This follows from <cit.>.
This result allows us to prove the following characterization of when X' is a Banach function space for a quasi-Banach function space X:
Let X be a quasi-Banach function space over (Ω,μ). Then the following are equivalent:
* X' is a Banach function space over (Ω,μ);
* There is a Banach function space E over (Ω,μ) such that X↪ E.
To prove <ref>⇒<ref>, assume X' is a Banach function space, so X'' is well-defined. Since X↪ X'' with
f_X''=sup_g_X'=1fg_L^1(Ω)≤sup_g_X'=1f_Xg_X'=f_X
and X has the saturation property, X'' has the saturation property as well. Thus, X'' is a Banach function space. Therefore <ref> is satisfied with E=X''.
For the converse, let E be as in <ref> and let C>0 such that f_E≤ Cf_X for all f∈ X. Then, by Proposition <ref>, the space E' has the saturation property. Since
g_X'=sup_f_X≤ 1fg_L^1(Ω)≤sup_f_E≤ Cfg_L^1(Ω)=Cg_E'
for all g∈ E', we have that E'⊆ X'. This means that X' has the saturation property as well, proving <ref>.
§.§ The Fatou property
The Fatou property is essentially an X-version of the monotone convergence theorem and Fatou's lemma from integration theory, and reduces back to these classical theorems in the case that X = L^1(Ω).
Let X be a quasi-Banach function space over (Ω,μ). We say that X satisfies the Fatou property if it satisfies the following condition: if 0≤ f_n ↑ f for (f_n)_n≥ 1 in X and sup_n≥ 1f_n_X<∞, then f ∈ X and f_X=sup_n≥ 1f_n_X.
The Fatou property is equivalent with an X-version of Fatou's lemma, which explains the nomenclature.
Let X be a quasi-Banach function space over (Ω,μ). Then X has the Fatou property if and only if for any sequence of positive-valued functions (f_n)_n≥ 1 in X with lim inf_n →∞f_n_X <∞, one has lim inf_n →∞ f_n ∈ X and
\bigl\|\liminf_{n\to\infty} f_n\bigr\|_X \le \liminf_{n\to\infty} \|f_n\|_X.
We only need to prove the forward implication, for which we
define g_n = inf_k≥ n f_k for n ≥ 1. Then, by the ideal property, we have g_n ∈ X and for all m≥ n we have
g_n_X ≤f_m_X.
In particular,
g_n_X ≤inf_m ≥ n f_m_X.
Since 0≤ g_n ↑lim inf_m→∞ f_m, it follows from the Fatou property that lim inf_m →∞ f_m ∈ X and
\bigl\|\liminf_{m\to\infty} f_m\bigr\|_X = \lim_{n\to\infty} \|g_n\|_X = \sup_{n\ge 1} \|g_n\|_X \le \sup_{n\ge 1}\inf_{m\ge n} \|f_m\|_X = \liminf_{n\to\infty} \|f_n\|_X.
This finishes the proof.
When X is a quasi-Banach function space, it follows from the monotone convergence theorem that X' satisfies the Fatou property. This proves one direction of the so-called Lorentz–Luxemburg theorem:
Let X be a Banach function space over (Ω,μ). Then X satisfies the Fatou property if and only if we have X''=X with equal norms.
For a full proof of this result we refer the reader to <cit.>. In particular this result implies that for a Banach function space X, its Köthe dual X' is norming, i.e., we have
f_X=sup_g_X'=1fg_L^1(Ω).
We point out that this result means that the Fatou property is equivalent to reflexivity in terms of Köthe duality.
Even if a quasi-Banach function space does not have the Fatou property, it always embeds into one that does. This explains why one often assumes the Fatou property as part of the definition of a quasi-Banach function space.
Let X be a quasi-Banach function space over (Ω,μ). Then there is a quasi-Banach function space Y over (Ω,μ) that satisfies the Fatou property for which X↪ Y with f_Y≤f_X for all f∈ X.
According to Zaanen (see <cit.>), the following construction was originally introduced by G. G. Lorentz in an unpublished work.
We let Y⊆ L^0(Ω) denote the space of f∈ L^0(Ω) for which there exists a sequence (f_n)_n≥ 1 in X for which 0≤ f_n↑ |f| a.e. and sup_n≥ 1f_n_X<∞. We equip this space with the quasi-norm
f_Y:= inf{sup_n≥ 1 f_n_X : (f_n)_n≥1 in X, 0≤ f_n ↑|f| a.e.}.
Since for f∈ X we can set f_n:=|f| for n≥ 1, we have X⊆ Y with f_Y≤f_X.
In the case that ·_X is a norm, the proof that ·_Y is also a norm satisfying the Fatou property can be found in <cit.>. These proofs remain valid, mutatis mutandis, when ·_X is a quasi-norm.
When X' is saturated, one can note that X'' is a Banach function space with the Fatou property that X embeds into. As a matter of fact, it is shown in <cit.> that when X is a Banach function space, then the Y constructed in the above proof is equal to X''. Remarkably, the above construction remains valid even when X' is not saturated (in which case ·_X'' would only be a seminorm).
To see that Y satisfies the Fatou property, let (h_k)_k≥ 1 in Y with 0≤ h_k↑ h∈ L^0(Ω) and
C:=sup_k≥ 1h_k_Y<∞.
For each k≥ 1 we pick a sequence (f_nk)_n≥ 1 in X for which 0≤ f_nk↑ h_k and
sup_n≥ 1f_nk_X≤h_k_Y+1/k.
As (Ω,μ) is σ-finite, we can pick an increasing sequence of sets (Ω_n)_n≥ 1 of finite measure that increases to Ω. Fix k≥ 1. By Egorov's theorem, there is a set E_k⊆Ω_k with μ(E_k)≤ 2^-k so that there is an n_k≥ 1 such that for every x∈Ω_k\ E_k we have h_k(x)-f_n_k k(x)<1/k. Setting g_k:=f_n_k k, we note that this implies that the set
E:=⋂_m=1^∞⋃_k=m^∞ E_k
satisfies μ(E)=0 and for every x∈Ω\ E there is a k_x≥ 1 such that for all k≥ k_x we have x∈Ω\ E_k. Increasing k_x if necessary, we may assume that x∈Ω_k\ E_k and, hence, h_k(x)-g_k(x)<1/k. Thus, for every x∈Ω\ E we have g_k(x)→ h(x) as k→∞. We conclude that the increasing sequence f_k:=inf_m≥ kg_m satisfies
lim_k→∞f_k=lim inf_k→∞g_k=h
a.e. As f_k∈ X by the ideal property of X, we conclude that h∈ Y and, by (<ref>),
h_Y≤lim_k→∞f_k_X≤lim sup_k→∞f_n_k k_X≤lim sup_k→∞h_k_Y+1/k≤ C.
This proves the assertion.
Sometimes one only has a weaker version of the Fatou property:
* Weak Fatou property: There is a C>0 such that if 0≤ f_n ↑ f for (f_n)_n≥ 1 in X and sup_n≥ 1f_n_X<∞, then f ∈ X and f_X≤ Csup_n≥ 1f_n_X.
For a quasi-Banach function space with the weak Fatou property, one actually has Y=X isomorphically in Proposition <ref>. A typical example of when one runs into the weak Fatou property is when one passes to an equivalent quasi-norm on a space with the Fatou property. For example, this happens when one equips a quasi-Banach function space with the Aoki–Rolewicz p-norm as defined in (<ref>). If one then wants to retain the Fatou property, one can apply the construction of Proposition <ref> to this p-norm and check that the result is again a p-norm, this time with the Fatou property.
Finally, we show that quasi-Banach function spaces with the Fatou property are, in some sense, maximal. In particular, we show that if a quasi-Banach function space X isometrically embeds as a proper subspace into a quasi-Banach function space Y, then X cannot have the Fatou property. For example, this means that c_0 does not have the Fatou property, as it is a proper closed subspace of the Banach function space ℓ^∞.
Suppose X and Y are quasi-Banach function spaces over (Ω,μ), X ⊆ Y and f_X=f_Y for all f ∈ X. If X has the Fatou property, then X=Y.
Let f ∈ Y, let 0<u ∈ X be a weak order unit, and define g_n := min(|f|,nu) for n ≥ 1. Then, by the ideal property of X, we have g_n ∈ X for all n ≥ 1. Moreover, g_n ↑|f| a.e. and, by the ideal property of Y,
sup_n≥ 1g_n_X=sup_n≥ 1g_n_Y≤f_Y.
Hence, |f|∈ X by the Fatou property of X, so that f ∈ X by the ideal property of X.
§.§ Order continuity
Having dealt with X-valued versions of the monotone convergence theorem and Fatou's lemma through the Fatou property, we now wish to discuss the third main convergence theorem of integration theory: the dominated convergence theorem. To make sense of this theorem in a quasi-Banach function space setting, we introduce the notion of order convergence. We say that a sequence (f_n)_n≥ 1 in X order converges to f ∈ X if there is a sequence (g_n)_n≥ 1 in X such that g_n ↓ 0 and |f-f_n|≤ g_n for all n≥ 1. Using this terminology, the dominated convergence theorem is equivalent to the statement that order convergence implies norm convergence for X=L^1(Ω), which can be seen by taking g_n = sup_k≥ n|f-f_k| for n ≥ 1.
Not all quasi-Banach function spaces have the property that order convergence implies norm convergence. Indeed, if X = L^∞(Ω), order convergence corresponds to pointwise a.e. convergence for a bounded sequence of functions, whereas norm convergence corresponds to uniform a.e. convergence. This motivates the following definition.
A quasi-Banach function space X over (Ω,μ) is called order-continuous if for sequences (f_n)_n≥ 1 in X with f_n↓ 0 pointwise a.e. we have f_n_X↓ 0.
We note that in an order-continuous quasi-Banach function space, order convergence implies norm convergence, which explains the nomenclature. Rephrasing, a quasi-Banach function space X is order-continuous if and only if an X-version of the dominated convergence theorem holds, i.e. for any sequence (f_n)_n≥ 1 in X such that f_n → f pointwise a.e. and |f_n|≤ g ∈ X for all n ≥ 1, it follows that
lim_n →∞f_n - f_X = 0.
As already noted, L^∞(Ω) is not order-continuous. In particular, the sequence space ℓ^∞ is not order-continuous. This space is actually the prototypical space that is not order-continuous in the following sense: any quasi-Banach function space that is not order-continuous contains a (lattice) isomorphic copy of ℓ^∞. For Banach lattices, this can be found in <cit.> (see also <cit.>), which can be adapted to the quasi-Banach function space setting using the Aoki–Rolewicz theorem.
Various authors use a different, but equivalent notion instead of order-continuity. A quasi-Banach function space X is said to have absolutely continuous quasi-norm if, for all f ∈ X and for all decreasing sequences of measurable sets (E_n)_n≥ 1 with _E_n↓ 0 a.e., we have f_E_n_X↓ 0.
Let X be a quasi-Banach function space over (Ω,μ). Then X is order-continuous if and only if X has an absolutely continuous quasi-norm.
It is clear that order-continuity implies that X has an absolutely continuous quasi-norm by taking f_n = |f|_E_n. For the converse, let (f_n)_n≥ 1 be a sequence in X with f_n ↓ 0 pointwise a.e. Take ε>0, let u∈ X be a weak order unit, and
define
E_n:={x∈Ω:f_n(x)u(x)^-1>(2K_Xu_X)^-1ε}.
Since f_n u^-1↓ 0 pointwise a.e., we know that E_n decreases to a set of measure zero. By the absolute continuity of the quasi-norm of X, we can find an N≥ 1 such that
f_1_E_N_X<ε/2K_X.
By the ideal property, this implies for all n ≥ N
f_n_X ≤ K_Xf_n_Ω\ E_N_X+K_Xf_n_E_N_X
≤ K_X · (2K_Xu_X)^-1ε·u _Ω\ E_N_X+K_Xf_1_E_N_X
<ε/2+ε/2=ε.
The assertion follows.
We say that the measure space (Ω,μ) is separable if there is a countable collection 𝒜 of measurable sets such that for every measurable set E⊆Ω with μ(E)<∞ and every ε>0 one can find an A ∈ 𝒜 with μ(AΔ E)<ε. Note that, in particular, the Lebesgue measure on ^d is separable.
For separable (Ω,μ), the order-continuity of a quasi-Banach function space X over (Ω,μ) implies the separability of X.
Let X be a quasi-Banach function space over a separable measure space (Ω,μ). If X is order-continuous, then X is separable.
Let 𝒜 be a countable collection of measurable sets
such that for every measurable set E⊆Ω with μ(E)<∞ and every ε>0 there is an A ∈ 𝒜 with μ(AΔ E)<ε. By Proposition <ref>, there is a weak order unit u ∈ X, i.e. a function u∈ X such that u>0 a.e. We claim that the countable set of functions
{∑_k=1^n a_k· u _A_k: a_k ∈ ℚ+iℚ, A_k ∈ 𝒜}⊆ X
is dense in X. Indeed, for any f ∈ X, we can find a sequence of simple functions (f_n)_n≥ 1 such that |f_n|≤ |f|u^-1 for all n≥ 1 and f_n → fu^-1 pointwise a.e. Therefore, by the order-continuity of X, we have that f_nu → f in X. Hence, by the density of ℚ+iℚ in ℂ and the quasi-triangle inequality, it suffices to show that for all measurable E⊆Ω there is a sequence (A_n)_n≥ 1 in 𝒜 such that lim_n→∞u_E-u_A_n_X=0. Moreover, since (Ω,μ) is σ-finite, it suffices to consider μ(E)<∞.
Fix a measurable E ⊆Ω with μ(E)<∞ and let (A_n)_n≥ 1 be a sequence of measurable sets in 𝒜 such that μ(A_nΔ E) → 0 as n→∞, i.e. _A_n→_E (locally) in measure. Then there is a subsequence such that u_A_n_k→ u_E pointwise a.e. By the order-continuity of X, we conclude that lim_k→∞u_A_n_k-u_E_X = 0, finishing the proof.
The converse of Proposition <ref> also holds: if X is a separable quasi-Banach function space over (Ω,μ), then X is order-continuous and (Ω,μ) is separable. The order-continuity of X follows from the fact that X contains an isomorphic copy of ℓ^∞ if it is not order-continuous; for the separability of (Ω,μ), one can adapt the proof of <cit.>.
By Proposition <ref>, X' can be identified with a closed subspace in X^∗. In the next proposition we will characterize when X'= X^*.
Let X be a quasi-Banach function space over (Ω,μ).
* If X is order-continuous, then X' = X^*.
* If X is a Banach function space and X' = X^*, then X is order-continuous.
For <ref> assume that X is order-continuous and let u ∈ X be a weak order unit, i.e. u>0 a.e. Take x^* ∈ X^* and for all measurable E ⊆Ω define λ(E) = x^*(_E u). Then λ is a complex measure, since for disjoint, measurable E_1,E_2,…⊆Ω we have
λ(⋃_n=1^∞ E_n) = lim_N→∞(∑_n=1^N λ(E_n) + x^*(∑_n = N+1^∞_E_nu)) = ∑_n=1^∞λ(E_n),
where the last step follows from ∑_n = N+1^∞_E_nu ↓ 0 pointwise a.e. as N →∞ and the order-continuity of X.
Moreover, note that λ is absolutely continuous with respect to u dμ, so by the Radon–Nikodym theorem there is a g ∈ L^0(Ω) such that x^*(_E u) = λ(E) = ∫_E gu dμ for all measurable E ⊆Ω.
Now let f ∈ X be arbitrary and let (f_n)_n≥ 1 be a sequence of simple functions such that f_n≤fu^-1 for all n≥ 1 and f_n → fu^-1 pointwise a.e. By the order-continuity of X, we have f_n u → f in X and thus, by the dominated convergence theorem,
x^*(f) = lim_n →∞ x^*(f_n u) = lim_n →∞∫_Ω f_n g uμ = ∫_Ω fg μ.
We conclude that g ∈ X' and ι(g) = x^*, which shows that X^* =X'.
For <ref> assume that X is a Banach function space and X' = X^*. Let (f_n)_n≥ 1 be a sequence in X such that f_n ↓ 0 pointwise a.e. For any g ∈ X' we have, by the dominated convergence theorem, that
lim_n →∞∫_Ω f_n gμ = 0.
Since X' = X^*, we deduce that f_n → 0 weakly. By the Hahn–Banach separation theorem, the convex hull of {f_n:n≥ 1} has the same closure in the weak and the norm topology, so 0 lies in its norm closure. Therefore, for any ε>0, there are a_1,…,a_n ≥ 0 with ∑_k=1^n a_k=1 such that
∑_k=1^n a_k f_k_X<ε.
Since (f_n)_n≥ 1 is decreasing, this implies that f_j_X<ε for all j≥ n, finishing the proof.
We note that, if X is a quasi-Banach function space, it can happen that X^* = {0}. In this case the assumption X'=X^* is trivial, which explains the need for the assumption that X is a Banach function space in Proposition <ref><ref>. For an example of a quasi-Banach function space X with trivial dual which is not order-continuous, we refer the reader to <cit.>.
We end this subsection with a corollary on the connection between order-continuity and reflexivity.
Let X be a Banach function space over (Ω,μ). Then X is reflexive if and only if X and X' are order-continuous.
If X and X' are order-continuous, we immediately obtain
X^**=X'^* = X'' = X
by Proposition <ref><ref> and Proposition <ref>, so X is reflexive. For the converse assume that X is reflexive and suppose that X' is a proper closed subspace of X^*. By the Hahn–Banach theorem we can find a nonzero f ∈ X = X^** such that
∫_Ωfg dμ = 0
for all g ∈ X', which implies that f_X=0 by Proposition <ref>, a contradiction. Thus X'=X^*, which by Proposition <ref><ref> implies that X is order-continuous. Applying Proposition <ref> once more, we have
X'^*=X^** =X =X''
and therefore X' is also order-continuous by Proposition <ref><ref>.
§.§ Weighted Banach function spaces
In this final subsection, we want to make clear that the saturation property naturally allows one to consider weighted spaces without having to change any of the defining properties of a quasi-Banach function space to weighted versions, as was done in <cit.>.
Moreover, we will provide a general strategy which can be used to transfer results in the literature for quasi-Banach function spaces (and their Köthe duals) assumed to contain all indicator functions of sets of finite measure to results for quasi-Banach function spaces satisfying the saturation property.
To do so, we discuss two ways of introducing a weight to a quasi-Banach function space: as a multiplier and as a change of measure. For the multiplier viewpoint, we take a weight 0<w∈ L^0(Ω), a quasi-Banach function space X over (Ω,μ), and define a new space X(w) as the space of those f∈ L^0(Ω) for which fw∈ X, equipped with the quasi-norm
f_X(w):=fw_X.
This is again a quasi-Banach function space over (Ω,μ):
Let X be a quasi-Banach function space over (Ω,μ) and let 0<w∈ L^0(Ω). Then X(w) is a quasi-Banach function space over (Ω,μ) with K_X(w)=K_X and
X(w)'=X'(w^-1).
Moreover, if X has the Fatou property or is order-continuous, then the same holds for X(w).
We observe that the map f↦ fw^-1 is an order preserving isometric isomorphism between X and X(w). Hence, the ideal, Riesz–Fischer and Fatou properties, as well as order-continuity are respectively possessed by X(w) if and only if they are by X. Similarly, for the saturation property, note that if 0<u∈ X is a weak order unit, then its image under this map uw^-1∈ X(w) is also a weak order unit. This concludes the proof of the first result.
For the equality X(w)'=X'(w^-1), we note that
g_X(w)' =sup_f_X(w)=1∫_Ω|f|w|g|w^-1 dμ
=sup_h_X=1∫_Ω|h||g|w^-1 dμ
=gw^-1_X'=g_X'(w^-1).
This proves the result.
Instead of adding a weight as a multiplier, we can also take a weight 0<w∈ L^0(Ω) and consider it as a change of measure through
w(E):=∫_Ew dμ,
which we will also denote as w dμ. The measure space (Ω,w dμ) is again a σ-finite measure space: since L^1(Ω)(w) is saturated by Proposition <ref>, it follows from Proposition <ref> that there exists a sequence of measurable sets F_n⊆Ω increasing to Ω with _F_n∈ L^1(Ω)(w), i.e., w(F_n)<∞, for all n≥ 1.
In a similar vein, a quasi-Banach function space X over (Ω,μ) is also a quasi-Banach function space over (Ω,w dμ). Note that Köthe duality depends on the underlying measure space, so when there is such a change of measure, we will write X^† to denote the Köthe dual with respect to this new measure.
Let X be a quasi-Banach function space over (Ω,μ) and let 0<w∈ L^0(Ω). Then X is also a quasi-Banach function space over (Ω,w dμ). Moreover, we have
X^†=X'(w).
If X has the Fatou property or is order-continuous with respect to (Ω,μ), then the same holds with respect to (Ω,w dμ).
Since the ideal property, Riesz–Fischer property, Fatou property, order-continuity, and the existence of a weak order unit all only depend on the null sets of the underlying measure space, the fact that (Ω,μ) and (Ω,w dμ) have the same null sets proves that X has any of these respective properties with respect to (Ω,w dμ) if and only if it has them with respect to (Ω,μ). As for the duality result, we have
g_X^†=sup_f_X=1∫_Ω|fg|w dμ
=gw_X'=g_X'(w),
as desired.
When it comes to the Lebesgue spaces, the idea of changing measure takes the following form: for p∈[1,∞), the space L^p(Ω,w dμ) can be seen as either a space over (Ω,w dμ), in which case the Köthe dual is given by L^p'(Ω,w dμ), or as a space over (Ω,μ), in which case the Köthe dual is given by L^p'(Ω,w^1-p' dμ). Both of these approaches appear in the literature, and it seems to be a matter of taste and context which one is preferred.
For the multiplier approach, for p∈(0,∞) we have
fw_L^p(Ω)=(∫_Ω|f|^pw^p dμ)^1/p,
which shows that L^p(Ω)(w)=L^p(Ω,w^p dμ). When p∈[1,∞], Proposition <ref> yields
L^p(Ω)(w)'=L^p'(Ω)(w^-1),
which is equal to L^p'(Ω,w^-p' dμ) when p>1.
In Lebesgue spaces when p<∞, both the multiplier and the change of measure approach yield the same theory up to a change of weight w↦ w^p. When p=∞, the multiplier approach is preferable. As a matter of fact, while the change of measure approach is classically used more frequently, we would argue that the multiplier approach not only results in a theory that includes the case p=∞ in a satisfying manner, but also leads to a more symmetric theory altogether. Therefore, in our view, it is more intuitive and easier to work with.
The situation quickly becomes more complicated when we venture beyond the classical (weighted) Lebesgue spaces. For example, when dealing with weak type spaces L^p,∞(Ω) for p∈(0,∞), both approaches yield different kinds of weighted spaces. Indeed, while both the spaces
L^p,∞(Ω)(w), L^p,∞(Ω,w^p dμ)
contain the space L^p(Ω)(w)=L^p(Ω,w^p dμ) by Chebyshev's inequality, they are generally not equal.
For example, set Ω=^d equipped with the Lebesgue measure and define w(x):=|x|^-d/p. Then _^d∈ L^p,∞(^d)(w), but _^d∉ L^p,∞(Ω,w^p dx).
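A short verification of this example (our computation, with v_d the Lebesgue measure of the unit ball):

```latex
\bigl\|\mathbf{1}_{\mathbb{R}^d}\bigr\|_{L^{p,\infty}(\mathbb{R}^d)(w)}
  = \sup_{\lambda>0} \lambda\,\bigl|\{x : |x|^{-d/p} > \lambda\}\bigr|^{1/p}
  = \sup_{\lambda>0} \lambda\,\bigl(v_d\,\lambda^{-p}\bigr)^{1/p}
  = v_d^{1/p} < \infty,
% whereas w^p\,dx = |x|^{-d}\,dx assigns infinite measure to \mathbb{R}^d
% (the integral of |x|^{-d} diverges both at the origin and at infinity),
% so the distribution function of \mathbf{1}_{\mathbb{R}^d} is infinite for
% every 0 < \lambda < 1 and \mathbf{1}_{\mathbb{R}^d} \notin L^{p,\infty}(\mathbb{R}^d, w^p\,dx).
```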
As we saw in Theorem <ref>, a change of measure will lead to a weight as a multiplier in the associate space. Hence, to fully understand the properties of a space with respect to weights, usually both of these approaches need to be understood.
Since many results in the literature are proven for quasi-Banach function spaces X over (Ω,μ) with the property that for all measurable E⊆Ω with μ(E)<∞ we have _E∈ X and ∫_E|f| dμ<∞ for all f∈ X (i.e., _E∈ X'), one might wonder if there is a general method of replacing these arguments with arguments that only require the saturation property. This is indeed the case, as we will outline next.
By Proposition <ref>, the saturation property means that there is a function u∈ X that satisfies u>0 a.e. If we now take any measurable set E⊆Ω, then the ideal property of X implies that also _E u∈ X. Thus, the weighted space X(u) contains the indicator function of all measurable subsets of Ω; not just the ones with finite measure. Moreover the map f ↦ u^-1 f is a positive isometry from X to X(u).
As noted in <cit.>, it was an observation by Kalton that if the space $X(u)$ is considered a Banach function space over $(\Omega, w\,\mathrm{d}\mu)$ for a particular weight $w$, then for all measurable sets $E\subseteq\Omega$ we also have $\mathbf{1}_E\in X(u)^\dagger$, i.e.,
\[
\int_E|f|\,w\,\mathrm{d}\mu<\infty
\]
for all $f\in X(u)$. The following result is a quasi-Banach function space version of this.
Let $X$ be a quasi-Banach function space over $(\Omega,\mu)$ and let $0<u\in X$ be a weak order unit. Then $\mathbf{1}_E\in X(u)$ for all measurable $E\subseteq\Omega$.
If $X'$ is a Banach function space, then there is a weight $0<w\in L^1(\Omega)$ for which the space $X(u)$, considered as a quasi-Banach function space over $(\Omega, w\,\mathrm{d}\mu)$, additionally satisfies $\mathbf{1}_E\in X(u)^\dagger$ for all measurable $E\subseteq\Omega$, i.e.,
\[
\int_E|f|\,w\,\mathrm{d}\mu<\infty
\]
for all $f\in X(u)$.
For the first assertion, note that $\mathbf{1}_E\in X(u)$ if and only if $\mathbf{1}_E u\in X$. Since $\mathbf{1}_E u\leq u$, this follows from the ideal property.
For the second assertion, let $0<v\in X'$ be a weak order unit and set $w:=uv\in L^1(\Omega)$. Then it follows from Proposition <ref> and Proposition <ref> that
\[
X(u)^\dagger = X(u)'(w) = X'(u^{-1}w) = X'(v),
\]
so using the same argument as before with $X$ replaced by $X'$ and $u$ replaced by $v$ proves the result.
This result can essentially be used as a “patch”
to extend results appearing in the literature that assume that the space contains the indicator functions of sets of finite measure, to the more general class of spaces with the saturation property. We do wish to point out that, in our opinion, this solution is inelegant. One might as well do the proof correctly in the first place by arguing through a weak order unit directly. As a matter of fact, it is our hope that this survey will function as a “patch” for the future literature.
§.§ Acknowledgements
We are grateful to Ben de Pagter for reading our manuscript to make sure we did, in fact, not do Banach function spaces wrong.
Moreover, we would like to thank Jordy van Velthoven for bringing the PhD thesis <cit.> to our attention.
Finally, we would like to thank Mark Veraar for encouraging us to write this manuscript as well as for suggesting the title.
|
http://arxiv.org/abs/2307.00740v1
|
20230703040432
|
Role of strange quarks in the $D$-term and cosmological constant term of the proton
|
[
"Ho-Yeon Won",
"Hyun-Chul Kim",
"June-Young Kim"
] |
hep-ph
|
[
"hep-ph",
"hep-ex",
"hep-lat"
] |
INHA-NTG-04/2023
E-mail: hoywon@inha.edu
Department of Physics, Inha University, Incheon 22212, South Korea
E-mail: hchkim@inha.ac.kr
Department of Physics, Inha University, Incheon 22212, South Korea
School of Physics, Korea Institute for Advanced Study (KIAS), Seoul 02455, South Korea
E-mail: jykim@jlab.org
Theory Center, Jefferson Lab, Newport News, VA 23606, USA
We investigate the mechanics of the proton by examining the
flavor-decomposed proton cosmological constants and generalized vector
form factors. The interplay of up, down, and strange quarks within the
proton is explored, shedding light on its internal structure. The
contributions of strange quarks play a crucial role in the D-term
and cosmological constants. We find that the flavor blindness of the
isovector D-term form factor is only valid in flavor SU(3) symmetry.
Role of strange quarks in the D-term and cosmological constant term of the proton
Ho-Yeon Won, Hyun-Chul Kim, June-Young Kim
August 1, 2023
===================================================================================
Introduction –
The proton, a fundamental building block of matter, possesses a set of
fundamental observables, including its electric charge, magnetic
dipole moment, mass, spin, and the D-term. The D-term, akin to
these observables, plays a crucial role in unraveling the mechanical
properties of the proton and shedding light on how it achieves
stability through the intricate interplay of quarks and
gluons. Understanding the distribution of mass, spin, pressure, and
shear force within the proton is facilitated by its gravitational form
factors (GFFs) <cit.> in a manner similar to how
the electromagnetic form
factors reveal charge and magnetic distributions. Specifically, the
pressure and shear-force distributions are intimately linked to the
D-term form factor (see a recent review and references
therein <cit.>).
Direct measurement of the proton GFFs necessitates the interaction of
gravitons with protons, which is experimentally impractical due to the
exceedingly weak gravitational coupling strength of the
proton. However, a promising avenue emerges through the generalized
parton distributions (GPDs), which offer indirect access to the
mechanical properties of the proton. The GFFs can be regarded as the
second Mellin moments of the vector GPDs <cit.> (see also reviews <cit.>), providing insight into the proton's mechanical
structure <cit.>. Deeply virtual Compton scattering
(DVCS) serves as an effective means to access the GPDs, enabling the
extraction of valuable information about the
GFFs <cit.>.
Recent advancements by Burkert et al. <cit.> have
witnessed the experimental extraction of the quark component of the
proton D-term form factor, marking a significant breakthrough. By
leveraging experimental data on the beam-spin asymmetry and
unpolarized cross-section for DVCS on the proton, they successfully
obtained valuable insights into the D-term. However, their analysis
assumed the large N_c limit, thereby considering the up-quark
contribution (d_1^u) to be approximately equal to the down-quark
contribution (d_1^d) in the leading order. This assumption leads to
an almost null result for the leading isovector D-term
d_1^u-d <cit.>. Hence, it becomes imperative to
critically examine the validity of this assumption.
Furthermore, Burkert et al. <cit.> also assumed
flavor SU(2) symmetry, neglecting the contribution of strange
quarks. However, as we shall establish in this study, the inclusion of
strange quarks becomes indispensable for accurately describing the
D-term. While the strange quark's contribution to the nucleon mass
and spin may be marginal, it assumes a pivotal role in characterizing
the D-term, indicating that the nucleon's stability can only be
comprehensively understood by considering the degrees of freedom
associated with up, down, and strange quarks. Note that the gluon
GPDs are only accessible at higher orders in α_s, so they are
expected to be smaller than the quark GPDs.
In this Letter, our objective is to elucidate the significance of
strange quarks in unraveling the mechanical structure of the
proton within the framework of the chiral quark-soliton model
(χQSM) <cit.>. The χQSM provides a suitable relativistic quantum-field
theoretic framework for our analysis. It is noteworthy that previous
studies employing the χQSM <cit.> have yielded
results in excellent agreement with experimental data on the pressure
and shear-force distributions of the proton, as reported by Burkert et
al. <cit.>. In a recent publication by Won et
al. <cit.>, the proton D-term and proton cosmological constant (PCC) were further
explored by decomposing them into up and down-quark flavor components
through the computation of generalized isovector vector form
factors. The magnitude of the down-quark component was found to
be larger than that of the up-quark component, leading to a nonzero
value of d_1^u-d (see also Ref. <cit.>). However,
in order to comprehensively understand the proton's mechanical
structure, it becomes imperative to consider flavor SU(3) symmetry and
incorporate the generalized triplet and octet vector form factors
alongside the GFFs. This extension enables us to decompose the GFFs
into their up, down, and strange quark components, thereby gaining a
deeper understanding of the internal mechanics of the proton.
Notably, a recent lattice calculation has provided insights into the
flavor decomposition of the proton's spin and
momentum <cit.>. However, it is important to
emphasize that the flavor-decomposed PCCs were not
considered in this analysis. In our current work, we aim to fill this
gap by investigating the role of flavor-decomposed PCCs in examining
the mechanical structure of the proton. By incorporating these crucial
factors, we can refine our understanding of the proton's intricate
mechanics and shed further light on its fundamental properties.
General vector form factors –
The GFFs can be related to the matrix element of the
flavored (q) symmetric energy-momentum tensor (EMT) current defined as
\[
\hat{T}^{\mu\nu}_q = \bar{q}\,\frac{i}{4}\,\overleftrightarrow{D}^{\{\mu}\gamma^{\nu\}} q,
\]
with the covariant derivative $\overleftrightarrow{D}^\mu = \overleftrightarrow{\partial}^\mu - 2igA^\mu$, $\overleftrightarrow{\partial}^\mu = \overrightarrow{\partial}^\mu - \overleftarrow{\partial}^\mu$, and $a^{\{\mu}b^{\nu\}} = a^\mu b^\nu + a^\nu b^\mu$. The matrix element of this current can be parametrized in terms of the four GFFs $A_q$, $J_q$, $D_q$, and $\bar{c}_q$:
\[
\langle p'|\hat{T}^{\mu\nu}_q(0)|p\rangle = \bar{u}(p')\Bigl[
A_q(t)\,\frac{P^\mu P^\nu}{M_N}
+ J_q(t)\,\frac{i\bigl(P^\mu\sigma^{\nu\rho}+P^\nu\sigma^{\mu\rho}\bigr)\Delta_\rho}{2M_N}
+ D_q(t)\,\frac{\Delta^\mu\Delta^\nu - g^{\mu\nu}\Delta^2}{4M_N}
+ \bar{c}_q(t)\,M_N\,g^{\mu\nu}\Bigr]u(p),
\]
where $P=(p'+p)/2$, $\Delta=p'-p$, and $\Delta^2 = -\boldsymbol{\Delta}^2 = t$.
A_q, J_q, and D_q are related to the second
moments of the vector GPDs defined in Ref. <cit.>
\[
A_q(t) = A_{20,q}(t), \qquad
2J_q(t) = A_{20,q}(t) + B_{20,q}(t), \qquad
D_q(t) = 4C_{20,q}(t),
\]
where the subscript $q$ stands for the quark flavor. In the forward limit
($t\to 0$) <cit.>, Eq. (<ref>) reduces to
\[
\langle p|\hat{T}^{\mu\nu}_q|p\rangle = \bar{u}(p)\Bigl[
A_q(0)\,\frac{p^\mu p^\nu}{M_N} + \bar{c}_q(0)\,M_N\,g^{\mu\nu}\Bigr]u(p).
\]
In the χQSM, the gluon degrees of freedom were integrated out
through the instanton vacuum, and their effects are absorbed in the
dynamical quark mass M, which was originally
momentum-dependent <cit.>.
Thus, the quark part of the EMT current is conserved within the
framework of the χQSM
\[
\sum_q \partial_\mu \hat{T}^{\mu\nu}_q = 0.
\]
This implies that the PCC form factor $\bar{c}=\sum_q\bar{c}_q$ vanishes over the whole range of the momentum transfer. At $t=0$, the mass, spin, and cosmological constant of the proton are normalized respectively as $A=1$, $J=1/2$, and $\bar{c}=0$. Note that,
however, there are no constraints on the generalized triplet and octet
vector form factors.
The chiral quark-soliton model: pion mean-field approach –
The χQSM has been
successful in describing not only the well-known baryonic
observables <cit.>
but also the first data on the D-term form
factor <cit.> and other
GFFs <cit.>.
Since the formalism is already well known <cit.>, we only briefly
explain the essential features of the model. The detailed expressions
can be found in a forthcoming work <cit.>.
We start from the low-energy QCD effective partition function in
Euclidean space <cit.>
\[
\mathcal{Z}_{\mathrm{eff}} = \int \mathcal{D}\pi^a \exp\bigl[-S_{\mathrm{eff}}(\pi^a)\bigr],
\]
where $\pi^a$ denotes the SU(3) pseudo-Nambu-Goldstone (pNG) boson fields with the superscript $a=1,\dots,8$, and $S_{\mathrm{eff}}$ represents the effective chiral action expressed as
\[
S_{\mathrm{eff}} = -N_c\,\mathrm{Tr}\log\bigl[i\slashed{\partial} + iMU^{\gamma_5} + i\hat{m}\bigr].
\]
$N_c$ denotes the number of colors. The chiral field $U^{\gamma_5}$ is defined by $U^{\gamma_5} := e^{i\gamma_5\pi^a\lambda^a} = P_L U + P_R U^\dagger$, where $P_{L(R)} := (1\mp\gamma_5)/2$ and $U := e^{i\pi^a\lambda^a}$. $M$ stands for the dynamical quark mass, which arises
from the spontaneous breakdown of chiral
symmetry <cit.>. Though M is
originally momentum-dependent and its value at the zero virtuality is
determined by the saddle-point equation from the instanton
vacuum <cit.>, we will use it as the
only free parameter in the current work. However, M=420 MeV is known
to be the best value for describing various baryonic
observables <cit.>.
$\hat{m}$ is the current-quark mass matrix $\mathrm{diag}(m_u, m_d, m_s)$. We consider flavor SU(3) symmetry ($m_u = m_d = m_s$) in the current work.
The first three components of the pNG fields can be coupled to the three-dimensional coordinates, which is called the hedgehog ansatz: $\pi^a = P(r)\,n^a$ with the unit basis vectors $n^a = x^a/r$. This is the minimal generalization that allows one to incorporate the pion fields. $P(r)$ is called the profile function for the classical soliton and can be determined by solving the classical equation of motion self-consistently. In flavor SU(3) symmetry, we employ Witten's embedding to preserve the hedgehog symmetry:
\[
U = e^{i\pi^a\lambda^a} =
\begin{pmatrix}
e^{i\boldsymbol{n}\cdot\boldsymbol{\tau}\,P(r)} & 0 \\
0 & 1
\end{pmatrix}.
\]
Note that the zero-mode quantization with this embedding correctly
yields the spectrum of the lowest-lying SU(3) baryons such as the
baryon octet and decuplet. The zero-mode quantization can be performed
by the functional integration over rotational and translational zero
modes of the U field. Including the external tensor
source field, we can evaluate the matrix element of the EMT current.
The zero-mode quantization naturally furnishes the rotational 1/N_c
corrections. While they do not contribute to the GFFs except for J_q(t),
they provide substantial effects on the generalized triplet and octet
vector form factors.
We present the final expressions for the GFFs with flavor $q$; analogous expressions hold for the triplet and octet components ($\chi = 3, 8$):
\begin{align*}
A_q(t) + \bar{c}_q(t) - \frac{t}{4M_N^2}\bigl(D_q(t) - 2J_q(t)\bigr)
 &= \frac{4\pi}{M_N}\int \mathrm{d}r\, r^2\, j_0(kr)\,\varepsilon_q(r),\\
\bar{c}_q(t) - \frac{t}{6M_N^2}\,D_q(t)
 &= -\frac{4\pi}{M_N}\int \mathrm{d}r\, r^2\, j_0(kr)\, p_q(r),\\
D_q(t)
 &= 16\pi M_N\int \mathrm{d}r\, r^2\, \frac{j_2(kr)}{t}\, s_q(r),\\
J_q(t)
 &= 12\pi\int \mathrm{d}r\, r^2\, \frac{j_1(kr)}{kr}\,\rho_q^J(r),
\end{align*}
with $k=\sqrt{-t}$ and $J_3'=J_3=1/2$. For detailed expressions for the
distributions, we refer to the forthcoming work <cit.>.
Results and discussion –
Figure <ref> illustrates the results obtained for the GFFs. The
dashed curves represent the contributions from valence quarks, while
the short-dashed curves depict the contributions from sea
quarks. Remarkably, the sea-quark contributions are found to dominate
over the valence-quark contributions, as demonstrated in the first
panel of Fig. <ref>. This observation
aligns with the classical nucleon mass values, given by
$M_N = N_c E_{\mathrm{val}} + E_{\mathrm{sea}} = 611.1~\mathrm{MeV} + 645.3~\mathrm{MeV} = 1256~\mathrm{MeV}$. It is
noteworthy that sea quarks contribute approximately 51.4 % to the
proton's mass.
Regarding the average momentum fraction, a recent lattice calculation
yielded a value of ⟨ x⟩_p = 0.497(12)(5)|_conn
+ 0.307(121)(95)|_disc + 0.267(12)(10)|_gluon =
1.07(12)(10) <cit.>, which can be identified as
the mass form factor A(t) in the forward limit. By assuming that the
sea-quark contributions implicitly incorporate the effects of
integrated-out gluon degrees of freedom within the instanton vacuum,
our current findings are in good agreement with the lattice data.
In contrast to the proton mass, the sea quarks contribute
approximately 24 % to the proton spin, as depicted in the second
panel of Fig. <ref>.
In the third panel of Fig. <ref>, it is evident that the
sea-quark contribution surpasses the valence-quark contribution. This
observation aligns with the nature of the proton D-term form
factor, which exhibits a quadrupole structure due to the rank-2 tensor
characteristics of the EMT, as described in
Eq. (<ref>). Remarkably, similar behavior can be observed in the
electric quadrupole (E2) form factors of spin-3/2 baryons, as
discussed in Refs. <cit.>.
The PCC form factor c̅(t) is
expected to vanish, as a consequence of EMT current conservation, as
shown in Eq. (<ref>). Interestingly, the valence-quark
contribution is exactly canceled by the sea-quark contribution. These
results, depicted in Fig. <ref>, underscore the importance of
employing relativistic quantum-field theoretic approaches to
comprehend the mechanical structure of the proton. Such methodologies
are crucial for unraveling the intricate dynamics within the proton
and gaining deeper insights into its mechanical properties.
Once we have evaluated the GFFs, we can decompose them as follows:
\[
F_u = \frac{F^0}{3} + \frac{F^3}{2} + \frac{F^8}{2\sqrt{3}}, \qquad
F_d = \frac{F^0}{3} - \frac{F^3}{2} + \frac{F^8}{2\sqrt{3}}, \qquad
F_s = \frac{F^0}{3} - \frac{F^8}{\sqrt{3}},
\]
where $F^0$, $F^3$, and $F^8$ denote the generic GFFs and the generalized triplet and octet form factors, respectively.
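As a quick consistency check of this decomposition (a one-line verification added here, not part of the original derivation), the triplet and octet pieces cancel upon summing over flavors, so the singlet form factor is recovered:
\[
F_u + F_d + F_s
 = F^0 + \Bigl(\tfrac{1}{2}-\tfrac{1}{2}\Bigr)F^3
 + \Bigl(\tfrac{1}{2\sqrt{3}}+\tfrac{1}{2\sqrt{3}}-\tfrac{1}{\sqrt{3}}\Bigr)F^8
 = F^0.
\]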
The flavor decomposition of the proton mass form factor A(t) is
presented in the upper panel of Fig. <ref>. As emphasized in the
Introduction, considering the PCCs is crucial for a proper
understanding of the proton mass decomposition <cit.>. The contribution of strange quarks to the mass form
factor is found to be negligible. Again, the up-quark contribution dominates
over the contributions from down and strange quarks. This dominance of
up quarks is also reflected in the proton spin, with up quarks
accounting for the majority inside a proton, as depicted in the
second panel of Fig. <ref>. In contrast, the spin of the neutron
is primarily attributed to down quarks, as evidenced in Table <ref>.
The flavor decomposition results for the D-term form factor are
displayed in the third panel of Fig. <ref>. Notably, the up- and
down-quark contributions exhibit remarkable similarity, while the
strange-quark contribution comprises approximately 25 % of their
combined effect. This finding exhibits profound physical implications.
The result for the flavor-decomposed D-term shown in the third panel of Fig. <ref> apparently yields an almost negligible value of $D^{u-d}$. However, one should keep in mind that in flavor SU(2) $D^{u-d}$ is non-negligible <cit.>. This implies that a certain amount of the $d$-quark contribution is taken over by the strange quark in flavor SU(3). Thus, the flavor blindness $D^{u-d}\sim 0$ assumed in Ref. <cit.> is only valid in the flavor SU(3) symmetric case.
In the final panel of Fig. <ref>, the flavor decomposition of the
PCC form factor is depicted. Interestingly, the up and strange-quark
contributions dominate the PCC form factor, while the down-quark
contribution is relatively small. The PCC form factor $\bar{c}^0$ vanishes due to the conservation of the EMT current, whereas $\bar{c}^3$ and $\bar{c}^8$ can have finite values. Notably, both $\bar{c}^3$ and $\bar{c}^8$ exhibit negative numerical values, resulting in a small magnitude for $\bar{c}_d$ according to Eq. (<ref>). On the other hand, the up and strange components are given respectively by $(\bar{c}^3 + \bar{c}^8/\sqrt{3})/2$ and $-\bar{c}^8/\sqrt{3}$. Although the sign of $\bar{c}_s$ is opposite to that of $\bar{c}_u$, its magnitude is comparable to that of the up-quark component. This observation has significant implications: while the PCC form factor itself vanishes, its flavor-decomposed components remain finite, and the strange quark plays a crucial role. As
stated in Eq. (<ref>), the PCC is linked to the
pressure distribution, indicating that the strange quark should be
considered in understanding the internal mechanical structure of the
proton. Furthermore, it suggests that when the D-term form factor is
extracted in future experimental data, one should carefully consider the
contribution from strange quarks.
When the GFFs are understood as the second Mellin moments of the
vector GPDs as expressed in Eq. (<ref>), the PCC does not
appear from the leading-twist GPDs. The form factor A_q in the
forward limit is identified as the momentum fraction of the proton,
denoted as ⟨ x⟩_q. However, if we specifically consider
the temporal component of Eq. (<ref>), we derive the general
decomposition of the proton mass in the rest frame
as <cit.>
\[
M_p = \sum_q \bigl(A_q(0) + \bar{c}_q(0)\bigr)\,M_p,
\]
which leads to $\sum_q\bigl(A_q(0)+\bar{c}_q(0)\bigr)=1$.
Given the conservation of the EMT current, the total PCC is expected to vanish, implying that $\sum_q A_q(0) = 1$. However, this does not necessarily mean that each flavor component $\bar{c}_q$ is zero. Consequently, the decomposed momentum fraction, denoted as $\langle x\rangle_q$, may not be equivalent to the decomposed proton mass expressed as $M_p^q = \bigl(A_q(0)+\bar{c}_q(0)\bigr)M_p$. Hence, a compelling comparison arises between $M_p^q$ and $\langle x\rangle_q$:
\begin{align*}
M_p^u/M_p &= 59.7\,\% \,<\, \langle x\rangle_u = 64.4\,\%,\\
M_p^d/M_p &= 34.6\,\% \,>\, \langle x\rangle_d = 33.4\,\%,\\
M_p^s/M_p &= 5.7\,\% \,>\, \langle x\rangle_s = 2.2\,\%.
\end{align*}
These results indicate a remarkable feature of c̅_q in
describing the proton mass. One can generalize the above findings as
follows: if $\bar{c}_q(0)$ is positive (negative), then the flavor-decomposed proton mass fraction $M_p^q/M_p$ is larger (smaller) than the nucleon momentum fraction carried by the quarks, $\langle x\rangle_q$:
\[
\bar{c}_q(0)>0 \;\Longrightarrow\; M_p^q/M_p > \langle x\rangle_q, \qquad
\bar{c}_q(0)<0 \;\Longrightarrow\; M_p^q/M_p < \langle x\rangle_q.
\]
If $\bar{c}_q(0)$ is zero, we obtain the trivial relation $M_p^q/M_p = \langle x\rangle_q$.
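For orientation, reading the quoted percentages together with $M_p^q/M_p = A_q(0) + \bar{c}_q(0)$ and $\langle x\rangle_q = A_q(0)$ gives the signs and rough sizes of the flavor components (a simple arithmetic extraction from the numbers above, not an independent calculation):
\[
\bar{c}_u(0) = 0.597 - 0.644 = -0.047, \quad
\bar{c}_d(0) = 0.346 - 0.334 = +0.012, \quad
\bar{c}_s(0) = 0.057 - 0.022 = +0.035,
\]
which indeed sum to zero, as required by the conservation of the EMT current.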
Summary and conclusions –
We have conducted an investigation into the flavor decomposition of
the gravitational form factors of the proton. Here are the key
findings of our study:
* The dominant effects on the mass, D-term, and cosmological
constant of the proton stem from the sea quarks rather than the
valence quarks. This emphasizes the necessity of employing a
relativistically quantum-field theoretic approach to accurately
describe these quantities.
* The contributions of strange quarks play a particularly
significant role in the D-term and cosmological constants.
Therefore, when extracting these contributions from
experimental data, it is essential to take into account the
influence of strange quarks.
* While the mass form factor A(t) is associated with the average
momentum fraction, the decomposition of the nucleon mass reveals a
noteworthy contribution from the cosmological constants. This
indicates a profound connection between the mass distribution of the
proton and its mechanical structure.
* Burkert et al. <cit.> assumed the flavor blindness of the isovector D-term ($D^{u-d}\sim 0$). We showed in the current work that this is only valid in the flavor SU(3) symmetric case, since the strange quark takes over a certain amount of the down-quark contribution.
In conclusion, our investigation emphasizes the importance of
considering sea quarks, especially strange quarks, in understanding
the gravitational form factors of the proton. It highlights the role
of the cosmological constant in the nucleon mass decomposition and its
implications for the mechanical properties of the proton.
Acknowledgments –
The work was supported by the Basic Science Research Program through
the National Research Foundation of Korea funded by the Korean
government (Ministry of Education, Science and Technology, MEST),
Grant-No. 2021R1A2C2093368 and 2018R1A5A1025563 (HYW and HChK).
This work was also supported by the U.S. Department of Energy, Office
of Science, Office of Nuclear Physics under contract
DE-AC05-06OR23177 (JYK) and partially supported by the U.S. Department
of Energy, Office of Science, Office of Nuclear Physics under the
umbrella of the Quark-Gluon Tomography (QGT) Topical Collaboration
with Award DE-SC0023646 (JYK).
|
http://arxiv.org/abs/2307.02131v2
|
20230705091409
|
Beyond Known Reality: Exploiting Counterfactual Explanations for Medical Research
|
[
"Toygar Tanyel",
"Serkan Ayvaz",
"Bilgin Keserci"
] |
cs.AI
|
[
"cs.AI",
"J.3"
] |
Beyond Known Reality: Exploiting Counterfactual Explanations for Medical Research
Toygar Tanyel, Serkan Ayvaz, Bilgin Keserci
=====================================================================================
This study employs counterfactual explanations to explore "what if?" scenarios in medical research, with the aim of expanding our understanding beyond existing boundaries. Specifically, we focus on utilizing MRI features for diagnosing pediatric posterior fossa brain tumors as a case study. The field of artificial intelligence and explainability has witnessed a growing number of studies and increasing scholarly interest. However, the lack of human-friendly interpretations in explaining the outcomes of machine learning algorithms has significantly hindered the acceptance of these methods by clinicians in their clinical practice. To address this, our approach incorporates counterfactual explanations, providing a novel way to examine alternative decision-making scenarios. These explanations offer personalized and context-specific insights, enabling the validation of predictions and clarification of variations under diverse circumstances. Importantly, our approach maintains both statistical and clinical fidelity, allowing for the examination of distinct tumor features through alternative realities. Additionally, we explore the potential use of counterfactuals for data augmentation and evaluate their feasibility as an alternative approach in medical research. The results demonstrate the promising potential of counterfactual explanations to enhance trust and acceptance of AI-driven methods in clinical settings.
§ INTRODUCTION
As we incorporate automated decision-making systems into the real world, explainability and accountability questions become increasingly important <cit.>. In some fields, such as medicine and healthcare, ignoring or failing to address such a challenge can seriously limit the adoption of computer-based systems that rely on machine learning (ML) and computational intelligence methods for data analysis in real-world applications <cit.>. Previous research in Explainable Artificial Intelligence (XAI) has primarily focused on developing techniques to interpret decisions made by black box ML models. For instance, widely used approaches such as local interpretable model-agnostic explanations (LIME) <cit.> and shapley additive explanations (SHAP) <cit.> offer attribution-based explanations to interpret ML models. These methods can assist computer scientists and ML experts in understanding the reasoning behind the predictions made by AI models. However, end-users, including clinicians and patients, may be more interested in understanding the practical implications of the ML model's predictions in relation to themselves, rather than solely focusing on how the models arrived at their predictions. For example, patients' primary concern lies not only in obtaining information about their illness but also in seeking guidance on how to regain their health. Understanding the decision-making process of either the doctor or the ML model is of lesser importance to them.
Counterfactual explanations <cit.> are a form of model-agnostic interpretation technique that identifies the minimal changes needed in input features to yield a different output, aligned with a specific desired outcome. This approach holds promise in enhancing the interpretability and accountability of AI models by offering deeper insights into their decision-making processes.
Our proposed approach aims to provide enhanced transparency regarding the relationship between MRI features, moving beyond generating actionable outcomes solely for individual patients. Through counterfactual explanations, previously unseen decisions within the decision space can be brought to light. Numerous questions can be explored, such as how to determine the modifications required to transform a patient's diagnosis from one tumor subtype to another. Initially, posing such a question may seem nonsensical and illogical since an individual's actual tumor type cannot be magically altered. However, considering the challenge of distinguishing these two tumor types in clinical settings, asking such a question can effectively demonstrate which features are more informative in differentiating tumor types.
Counterfactual explanations enable us to identify the characteristics that distinguish two patient types with the smallest changes in features. Consequently, a deeper understanding of the interactions between MRI features and tumors can be gained; unveiling previously undisclosed outcomes that may be concealed in existing ML studies.
Furthermore, we have identified a potential contribution to clinical practice whereby a new patient with only MRI data available can have their tumor type estimated using a counterfactual approach, prior to receiving histopathological results. Since there is no prior label available for the patient, they are given an "unknown" label and the counterfactual approach is used for each tumor type, allowing estimation of the tumor type with the lowest distance and smallest change in features. While this approach shares similarities with ML, the crucial distinction lies in retaining information about the reasoning behind the estimated tumor type and its corresponding feature changes. This, in turn, can enhance our understanding and the use of AI models in clinical practice.
Last but not least, in situations where the acquisition of data is limited or not possible, various data augmentation methods have been developed to enhance the performance of ML and related applications <cit.>. However, these methods also give rise to additional issues while fulfilling their intended purpose, such as introducing biased shifts in data distribution. To address this issue, we employed counterfactuals generated from different spaces in order to balance the data by maximizing its diversity, and subsequently reported the results for different scenarios.
§.§ Brief Introduction to Posterior Fossa Pediatric Brain Tumors
Brain tumors represent the predominant form of cancer in children, constituting more than 25% of all cases. Specifically, the posterior fossa (PF) region comprises approximately 60-70% of these tumors and encompasses subtypes such as medulloblastoma (MB), ependymoma (EP), pilocytic astrocytoma (PA), and brainstem glioma (BG).
Clinical information obtained from radiological interpretations and histopathological analysis of tumors plays a crucial role in diagnosing, prognosing, and treating PF tumors in pediatric patients. Histopathological evaluation is essential for the initial diagnosis, providing valuable insights into patient prognosis and guiding clinical and therapeutic decisions. It serves as the established standard for differentiating between various PF tumor types. However, performing biopsies of different PF brain tumors carries significant risks of morbidity and mortality, in addition to being costly. Recent advancements in characterizing tumor subtypes using cross-sectional diagnostic imaging have shown promise in predicting differential survival rates and treatment responses. This progress holds significant potential for future treatment stratification in PF tumors. Hence, the advancement of a novel non-invasive diagnostic tool holds utmost importance in precisely classifying tumor type and grade, as well as aiding in treatment planning.
Magnetic resonance imaging (MRI) has emerged as the leading non-invasive imaging modality. It offers inherent advantages such as excellent soft-tissue contrast while avoiding the potential hazards of ionizing radiation. Conventional MRI protocols, including T1-weighted (T1W), T2-weighted (T2W), and fluid-attenuated inversion recovery (FLAIR) sequences, have shown promising results in distinguishing various PF tumor types in pediatric patients <cit.>. Furthermore, diffusion-weighted imaging (DWI) combined with apparent diffusion coefficient (ADC) mapping enables the assessment of physiological characteristics. It facilitates the differentiation of low- and high-grade tumors, as well as their distinct subtypes <cit.>.
To summarize, our contributions include:
* introducing a new perspective on the use of the counterfactual explanations approach in the posterior fossa literature
* demonstrating how the counterfactual approach, which enables us to provide patient-specific local explanations, can inform us in differentiating tumor types
* providing a systematic scoring and comparative analysis of radiologists and machine explanations, with the aim of offering a feasibility that corresponds to real-world scenarios
§ MATERIAL & METHODS
§.§ Ethics Statement and Patient Characteristics
This prospective study (Ref: 352/NÐ2-CÐT dated 13 March 2020) was carried out in both Radiology and Neurosurgery departments, and was approved by the Institutional Review Board in accordance with the 1964 Helsinki declaration. Written informed consent was obtained from authorized guardians of patients prior to the MRI procedure. Our study comprised a cohort of 112 pediatric patients diagnosed with PF tumors, including 42 with MB, 25 with PA, 34 with BG, and 11 with EP. All BG patients were confirmed based on full agreement between neuroradiologists and neurosurgeons, whereas the remaining MB, PA, and EP patients underwent either surgery or biopsy for histopathological confirmation.
§.§ Data Acquisition and Assessment of MRI Features
For all patients, MRI exams including T1W, T2W, FLAIR, DWI (b values: 0 and 1000) with ADC, and contrast-enhanced T1W (CE-T1) sequences with macrocyclic gadolinium-based contrast enhancement (0.1 ml/kg Gadovist, Bayer, Germany, or 0.2 ml/kg Dotarem, Guerbet, France) were collected in the supine position using a 1.5 Tesla MRI scanner (Multiva, Philips, Best, the Netherlands).
The Medical Imaging Interaction Toolkit (German Cancer Research Center, Division of Medical Image Computing, Heidelberg, Germany) was utilized for measuring the region of interest (ROI) of PF tumors and normal-appearing parenchyma, and the following MRI features were subsequently assessed: signal intensities (SIs) of T2, T1, FLAIR, T1CE, DWI, and ADC. Ratios between the PF tumor and parenchyma were calculated by dividing the SI of the tumor by the SI of the normal-appearing parenchyma based on T2, T1, FLAIR, T1CE, DWI, and ADC. Additionally, ADC values were quantified for both the PF tumor and parenchyma on the ADC map using the MR Diffusion tool available in Philips Intellispace Portal, version 11 (Philips, Best, The Netherlands). It is worth noting that, prior to analysis, bias field correction was applied to every image to correct for nonuniform grayscale intensities in the MRI caused by field inhomogeneities.
§.§ Standardization
Prior to conducting ML trainings, the dataset was subjected to a standardization process, using Python programming (version 3.9.13) with the Scikit-Learn library (version 1.0.2) module. This technique involved transforming the data to have a mean of zero and a standard deviation of one. To standardize all numerical attributes, the Scikit-Learn StandardScaler function was employed, which subtracted the mean and scaled the values to unit variance, ensuring the data was in a standardized format. To determine the standard score of a sample x_i , the following formula is used:
\[
z = \frac{x_i - \mu}{\sigma},
\]
where $\mu$ represents the mean of the training samples and $\sigma$ represents their standard deviation.
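A minimal sketch of this step is given below; the column names and values are hypothetical placeholders following the paper's feature-naming pattern:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix; the real data holds the MRI SI and ratio features.
X = pd.DataFrame({
    "T2_Tumor": [1286.0, 1778.0, 1709.0, 1107.0],
    "ADC_Tumor": [1.009, 1.879, 1.590, 0.540],
})

scaler = StandardScaler()            # subtracts the mean, scales to unit variance
X_std = scaler.fit_transform(X)      # z = (x_i - mu) / sigma, column-wise
print(X_std.mean(axis=0), X_std.std(axis=0))  # ~0 and ~1 for each feature
```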
§.§ Distance Calculation
When using counterfactuals as classifiers, the significant scale difference between the actual values of the MRI features in Tables <ref> and <ref> makes it illogical to calculate distances. To address this issue, we tackled the problem by disregarding the unchanged values indicated by `-' and rescaling all available values to a standard scale before reintroducing them. Subsequently, we computed the distance using the Euclidean distance metric on the generated counterfactual values of the current factual (i.e., new patient). By aiming to minimize the distance, we seek to determine the tumor type in this manner, as it corresponds to the least dissimilarity (Table <ref>).
The Euclidean distance formula can be represented as following:
\[
\mathrm{Distance} = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}.
\]
In the formula, $x_i$ and $y_i$ represent the values of the corresponding features in the current row and the baseline row, respectively. The summation computes the sum of the squared differences over all $n$ features (columns) of the dataset, and the square root is then applied to obtain the Euclidean distance.
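A sketch of the resulting nearest-counterfactual rule follows; it assumes that the unchanged `-' entries have already been filled back in with the factual values, and the choice of MinMaxScaler as the common rescaling is ours:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def nearest_tumor_type(factual, counterfactuals_by_type):
    """factual: 1-D array of the new patient's MRI features.
    counterfactuals_by_type: dict mapping tumor type -> (n_cf, n_features)
    array, with unchanged entries already replaced by the factual values."""
    types = list(counterfactuals_by_type)
    # Rescale factual and all counterfactuals together onto one common scale.
    stacked = np.vstack([factual] + [counterfactuals_by_type[t] for t in types])
    scaled = MinMaxScaler().fit_transform(stacked)
    x, row, dists = scaled[0], 1, {}
    for t in types:
        n = len(counterfactuals_by_type[t])
        # Smallest Euclidean distance from the patient to this type's CFs.
        dists[t] = np.linalg.norm(scaled[row:row + n] - x, axis=1).min()
        row += n
    return min(dists, key=dists.get), dists
```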
§.§ Statistical Analysis
The statistical analysis was conducted using the t-test from the scipy library (version 1.10.1). A two-tailed p-value of <0.05 was considered statistically significant.
The analysis was performed as follows: First, the analysis involved assessing whether the counterfactuals generated by changing the tumor type from 𝒳 to 𝒴 underwent a statistically significant change (dependent t-test). Second, it involved analyzing whether the counterfactuals generated by changing the tumor type from 𝒳 to 𝒴 exhibited significant similarity to the original (factual) patients with tumor type 𝒴 that we previously had (Welch's t-test).
Five counterfactuals were generated for each patient transition from tumor type 𝒳 to 𝒴. When applying dependent and independent t-tests, the generated counterfactuals were tested in different ways. In the case of measuring how different the newly generated data is from the original data, we created more data than our original sample size. Therefore, we cannot satisfy the requirement for equal dimensions in the dependent analysis during the testing phase. To address this, we designed a different analysis approach: for each counterfactual, the data of the corresponding factual patient was considered as the initial data for testing. Subsequently, a significance test was performed on the old-new values of the five feature variables that changed the most for each counterfactual generated from this factual data.
In the case of independent analysis, all generated counterfactuals were independently tested by including all patients present in the real data for each of the three features that underwent the most significant changes. The corresponding values for these three features were tested independently.
In summary, the fundamental difference between the two tests can be considered as being patient-based and feature-based in nature.
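In code, the two tests could look roughly as follows (scipy's paired test for the patient-based comparison and Welch's unequal-variance test for the feature-based one; the arrays are synthetic stand-ins):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Patient-based: factual values vs. their own counterfactual values
# on the most-changed features (paired, dependent samples).
factual_vals = rng.normal(1.0, 0.2, size=40)
counterfactual_vals = factual_vals + rng.normal(0.05, 0.1, size=40)
t_dep, p_dep = stats.ttest_rel(factual_vals, counterfactual_vals)

# Feature-based: generated counterfactuals vs. all real patients of the
# target tumor type (independent samples, unequal variances -> Welch).
cf_feature = rng.normal(1.1, 0.3, size=125)
real_feature = rng.normal(1.0, 0.25, size=25)
t_ind, p_ind = stats.ttest_ind(cf_feature, real_feature, equal_var=False)

print(f"dependent: p={p_dep:.4f}  Welch: p={p_ind:.4f}")  # p<0.05 -> significant
```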
§.§ Distribution Plotting
To generate individual kernel density estimation (KDE) plots for each feature, we utilized the kdeplot function from the Seaborn package (version 0.11.2). By specifying a hue parameter (e.g., Tumor Type), we were able to incorporate a meaningful association using this method. Consequently, we transformed the default marginal plot into a layered KDE plot. This approach tackles the challenge of reconstructing the density function f using an independent and identically distributed (iid) sample x_1, x_2,..., x_n from the respective probability distribution.
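A minimal version of the plotting call, with a hypothetical long-format table, is:

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Hypothetical long-format table: one MRI feature plus the tumor label.
df = pd.DataFrame({
    "ADC_Tumor": [1.0, 1.9, 0.5, 1.6, 0.9, 1.8],
    "Tumor Type": ["MB", "PA", "MB", "BG", "EP", "PA"],
})

# Layered KDE: one density estimate per tumor type on a shared axis.
sns.kdeplot(data=df, x="ADC_Tumor", hue="Tumor Type", fill=True, common_norm=False)
plt.show()
```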
§.§ Machine Learning
To decrease overfitting and convergence issue of counterfactuals, especially for EP, we took less patients to implement the task: 25 patients from MB, PA and BG and 11 patients from EP. For testing, to ensure the reliability of our ML models, particularly with a small dataset, we conducted five runs using stratified random sampling based on tumor type with 55% train and 45% test patients.
Using nine ML models, including support vector machine (SVM), adaboost (ADA), logistic regression (LR), random forest classifier (RF), decision tree classifier (DT), gradient boosting classifier (GB), catboost classifier (CB), extreme gradient boosting classifier (XGB) and voting classifier (VOTING), we evaluated the models on the raw data with the outcomes prior to our counterfactual interpretations. CB and XGB were obtained from CatBoost version 1.1.1 and XGBoost version 1.5.1 libraries, respectively, while the other models were obtained from the Scikit-Learn library.
We assessed the performance of the models using precision, recall, and F1 score, which were calculated based on the counts of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). In order to ensure an accurate interpretation of the ML results, we opted not to balance the labels. Instead, we employed macro precision, macro recall, and macro F1 score metrics, which take into account the contributions of all labels equally. This approach enabled us to observe the genuine impact of the varying label frequencies, EP in this case.
The validation metrics used in ML are as follows:
\[
\text{Macro Precision} = \frac{1}{n}\sum_{i=1}^{n}\frac{TP_i}{TP_i + FP_i}, \qquad
\text{Macro Recall} = \frac{1}{n}\sum_{i=1}^{n}\frac{TP_i}{TP_i + FN_i}, \qquad
\text{Macro F1 Score} = \frac{1}{n}\sum_{i=1}^{n}\frac{2\,TP_i}{2\,TP_i + FP_i + FN_i},
\]
where n represents the total number of classes or categories.
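A condensed version of this evaluation protocol (a sketch under the stated 55/45 stratified split, with the nine-model zoo abbreviated to two classifiers) might read:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

def evaluate(X, y, model, n_runs=5):
    scores = []
    for seed in range(n_runs):
        # Stratified random sampling by tumor type, 55% train / 45% test.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.45, stratify=y, random_state=seed)
        y_hat = model.fit(X_tr, y_tr).predict(X_te)
        # Macro averaging weighs all four tumor types equally, so the
        # under-represented EP class is not drowned out.
        p, r, f1, _ = precision_recall_fscore_support(
            y_te, y_hat, average="macro", zero_division=0)
        scores.append((p, r, f1))
    return np.mean(scores, axis=0)  # mean macro precision, recall, F1

models = {"LR": LogisticRegression(max_iter=1000), "RF": RandomForestClassifier()}
```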
§ COUNTERFACTUAL EXPLANATIONS
Given the challenges associated with local approximations, it is worthwhile to explore prior research in the "explanation sciences" to identify potential alternative strategies for generating reliable and practical post-hoc interpretations that benefit the stakeholders affected by algorithmic decisions <cit.>. To create explanations that are understandable and useful for both experts and non-experts, it is logical to investigate theoretical and empirical studies that shed light on how humans provide and receive explanations <cit.>. Over the past few decades, the fields of philosophy of science and epistemology have shown increasing interest in theories related to counterfactual causality and contrastive explanations <cit.>.
In philosophy, counterfactuals serve not only to assess the relationship between a mental state and reality, but also to determine whether a mental state can be considered as knowledge. The problem of identifying knowledge with justified true belief is complicated by various counterexamples, such as Gettier cases (1963) <cit.>. However, some scholars proposed additional conditions to address these counterexamples. This literature highlighted two significant counterfactual conditions:
Sensitivity: If ρ were false, 𝒮 would not believe that ρ.
Safety: If 𝒮 were to believe that ρ, ρ would not be false.
[https://plato.stanford.edu/entries/counterfactuals/]Both of these conditions express the notion that 𝒮's beliefs must be formed in a manner that is sensitive to the truthfulness of ρ. The counterfactual semantics has been influenced by this idea in various ways, including the establishment of their non-equivalence, clarification, and resolution of potential counterexamples.
This concept has sparked a fresh wave of counterfactual analyses that employ new methodologies. Hitchcock <cit.> and Woodward <cit.>, for instance, constructed counterfactual analyses of causation using Bayesian networks (also known as "causal models") and structural equations. The basic idea of the analysis can be summarized as follows: "𝒳 can be considered a cause of 𝒴 only if there exists a path from 𝒳 to 𝒴, and changing the value of 𝒳 alone results in a change in the value of 𝒴".
Ginsberg (1986) <cit.> initiated his discussion by outlining the potential significance of counterfactuals in artificial intelligence and summarizing the philosophical insights that have been drawn regarding them. Following this, Ginsberg provided a structured explanation of counterfactual implication and analyzed the challenges involved in executing it. Over time, numerous developments in the fields of artificial intelligence and cognitive science, including the Bayesian epistemology approach, have gone beyond what was previously envisioned by Ginsberg regarding the potential application of artificial intelligence and counterfactuals <cit.>. Furthermore, Verma et al. <cit.> conducted a comprehensive review of the counterfactual literature, analyzing its utilization in over 350 research papers.
In recent times, there has been a growing interest in the concept of counterfactual explanations, which aim to provide alternative perturbations capable of changing the predictions made by a model. In simple terms, when given an input feature x and the corresponding output produced by an ML model f, a counterfactual explanation involves modifying the input to generate a different output y using the same algorithm. To further explain this concept, Wachter et al. <cit.> introduce the following formulation in their proposal:
\[
c = \operatorname*{arg\,min}_{c}\; \ell\bigl(f(c), y\bigr) + |x - c|.
\]
The initial component ℓ of the formulation encourages the counterfactual c to deviate from the original prediction, aiming for a different outcome. Meanwhile, the second component ensures that the counterfactual remains in proximity to the original instance, thereby emphasizing the importance of maintaining similarity between the two.
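As a toy illustration of this objective (our own sketch, not the authors' code), one can search for c over a fitted logistic-regression classifier, with a probability-based stand-in for ℓ and a λ-weighted L1 distance term:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
f = LogisticRegression().fit(X, y)

x0 = X[0]                        # factual instance
target = 1 - f.predict([x0])[0]  # desired (flipped) class
lam = 0.5                        # trades off validity against proximity;
                                 # Wachter et al. tune it iteratively

def objective(c):
    # Encourage high predicted probability of the target class (first term)
    # while staying close to the factual instance (second term).
    p_target = f.predict_proba([c])[0, target]
    return (1.0 - p_target) + lam * np.abs(x0 - c).sum()

res = minimize(objective, x0, method="Nelder-Mead")  # black-box search
cf = res.x
print("counterfactual class:", f.predict([cf])[0], "target:", target)
```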
§.§ Generating Counterfactual Explanations
The argument of causality in counterfactuals applies to various situations that involve making decisions about an individual's future, like determining admission to a university <cit.>, distributing government aid <cit.>, evaluating job applicants <cit.>, and identifying those at risk of future illnesses <cit.>. In similar events to these, if the received response is negative, it is not sufficient to only learn that the response is negative; it is quite important to understand how the results can be improved or modified in the future without making significant and unrealistic changes to the data.
Furthermore, we propose that counterfactual explanations can effectively utilize factual knowledge obtained from MRI features. These features serve as differentiators for distinct tumors, enabling the differentiation between two tumors with minimal adjustments. In essence, this approach aims to identify the most distinguishing characteristics when comparing tumors. This methodology becomes especially valuable when conventional diagnostic methods struggle to differentiate between tumor types. By integrating interpretations and explanations into ML models, we have the potential to identify key features that contribute to accurate tumor classification and, ultimately, improve patient outcomes.
The concept of data manifold proximity, as depicted in Fig. <ref>, is an important constraint that needs to be carefully addressed. It is important to have confidence in the credibility of a counterfactual explanation, which entails generating a set of features that bear resemblance to prior observations encountered by the classifier. If a counterfactual produces unrealistic features that diverge from the training data or disrupt the observed associations between features, it would be considered impractical and outside the norm established by the training data points <cit.>. Hence, it is essential to ensure that generated counterfactuals are realistic, closely aligning with the training data and preserving the observed feature associations. To address this issue, several methods, including constraint-based approaches, have been consciously integrated into algorithms. For instance, changing the "age" and "gender" parameters would be highly unreasonable; therefore, we specify this in the model and prevent counterfactual from deviating from reality. In this research, we impose restrictions on parenchyma features, which serve as a reference point for tissue characteristics. We integrated parenchyma features during the training phase, but they were constrained from being modified during the counterfactual generation process.
The Diverse Counterfactual Explanations (DiCE[https://github.com/interpretml/DiCE]) <cit.> library provides us with easy-to-use and modifiable code to accomplish this task. DiCE utilizes cutting-edge research to generate counterfactual explanations for any ML model. The main concept is to frame the process of finding these explanations as an optimization problem, similar to how adversarial examples are found (e.g., DeepFool <cit.>). However, the crucial distinction is that for explanations, we require perturbations that not only alter the output of the ML model but are also varied and realistic to implement.
The counterfactual generation engine of DiCE incorporates diversity and feasibility constraints, which involve several factors: diversity through determinantal point processes, proximity, sparsity, and user constraints. Subsequently, an optimization formula was proposed as follows:
\[
C(x) = \operatorname*{arg\,min}_{c_1,\dots,c_k}\; \frac{1}{k}\sum_{i=1}^{k}\ell\bigl(f(c_i), y\bigr)
 + \frac{\lambda_1}{k}\sum_{i=1}^{k}\mathrm{dist}(c_i, x)
 - \lambda_2\,\mathrm{dpp\_diversity}(c_1,\dots,c_k),
\]
where the function ℓ was chosen as hinge loss,
\[
\ell = \max\bigl(0,\; 1 - z\cdot\mathrm{logit}(f(c))\bigr),
\]
where z is assigned a value of -1 when y equals 0, and a value of 1 when y equals 1. The term logit(f(c)) refers to the unscaled output generated by the ML model. For instance, it represents the final logits that are input into a softmax layer to facilitate predictions within a neural network.
The diversity term $\mathrm{dpp\_diversity}$ is represented as
\[
\mathrm{dpp\_diversity} = \det(K), \qquad K_{i,j} = \frac{1}{1+\mathrm{dist}(c_i, c_j)},
\]
where $\mathrm{dist}(c_i, c_j)$ indicates a measurement of distance between the two counterfactual examples. Further details of the formula and its evaluation can be found in <cit.>.
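A rough sketch of how DiCE can be driven in this setting follows; the dataframe, file, and feature names are hypothetical, and `total_CFs`/`features_to_vary` mirror the paper's setup of five counterfactuals per patient with the parenchyma reference features frozen:

```python
import dice_ml
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical table of standardized MRI features plus the tumor label.
df = pd.read_csv("pf_tumors.csv")
features = [c for c in df.columns if c != "TumorType"]
clf = LogisticRegression(max_iter=1000).fit(df[features], df["TumorType"])

data = dice_ml.Data(dataframe=df, continuous_features=features,
                    outcome_name="TumorType")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# Freeze the parenchyma reference features; only tumor features may change.
mutable = [c for c in features if "Parenchyma" not in c]
cfs = explainer.generate_counterfactuals(
    df[features].iloc[[0]],   # the "unknown" patient
    total_CFs=5,              # five what-if scenarios per target type
    desired_class=2,          # e.g., the label index of EP
    features_to_vary=mutable)
cfs.visualize_as_dataframe(show_only_changes=True)
```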
§ RESULTS
§.§ What if the counterfactual explanations graciously provide us with additional insights into classification?
DiCE provides multi-class training capability, allowing us to develop a framework that facilitates joint training and response acquisition for all four tumor types. Fig. <ref> illustrates the visualization of idea for binary classification. This framework aims to leverage counterfactual explanations, acting as a classifier, to determine the tumor type that best aligns with the numerical MRI data of a newly arrived patient. Furthermore, it strives to uncover the factors and distinctive characteristics that differentiate this tumor type from others, even when only numeric MRI data is available for the patient.
By utilizing all four tumor types, we essentially construct a decision space of reality with our existing patients. As the new patient is guided through this space, attempting to transform into each disease sequentially, the degree of self-modification required for each specific tumor condition will vary. As the required changes decrease, it can be inferred that the patient is closer to that particular tumor type since they necessitate fewer modifications. Similarly, understanding the level of dissimilarity and the contributing features to this dissimilarity has been explored as a critical approach in determining the tumor type.
Our approach of using counterfactual explanations as classifiers avoids the need to set aside separate test sets. As a result, the performance of the ML models significantly surpasses the baseline scores, with only a few patients excluded from the decision space to simulate the scenario of newly arriving patients. In this case, all training samples become test patients as we explore the decision space. To achieve this, DiCE provides valuable information regarding misclassified samples, enabling us to exclude the associated counterfactuals from the statistical analysis through post-processing.
The LR model outperformed the other models in overall performance. Furthermore, when we experimented with alternative models for generating counterfactuals, we observed that a larger number of patients failed to converge to the target disease compared to LR. This made it challenging to automate the counterfactual analysis over all patients with DiCE. LR resolved this issue, since it converged for the transformation of every provided patient. Therefore, we decided to continue using LR for generating counterfactuals.
Figure <ref> and Table <ref> illustrate the concept of using counterfactuals as a classifier. This approach can be explained as follows: We have a newly arrived patient who has undergone only an MRI scan, and it is not possible to determine the type of tumor based solely on the MRI images. Our approach aims to generate alternative realities or what if? scenarios (e.g., "what if we had MB? how much would it change?" or "what if we had EP?") for patient x by utilizing the factual MRI data we possess and leveraging information from the previously obtained decision space. By applying a what-if scenario to each tumor, we can clearly identify which tumor type the available data is closer to or which MRI features need to be adjusted to achieve closer proximity. The overall feature distance, suitably scaled, provides an indication of how different the tumor type of the new patient could be compared to others.
Table <ref> provides detailed information about an unknown patient whose ground-truth classification is EP. In case of the MB counterfactual sample, changes are observed in FLAIR_Tumor and T1CE_Tumor features, resulting in distances of -663 and 417.5, respectively. As for the EP group, the only noticeable change is observed in the T2_Tumor feature, with a distance of 137.2. On the other hand, significant changes can be observed for the PA group in the T2_Tumor (1286 to 2290.2), ADC_Tumor (1.009 to 2), and T1CE_Tumor (892 to 1492.5) features. Similarly, for the BG group, differences can be observed in DWI_Tumor (1175 to 544.23), ADC_Tumor (1.009 to 2), and T1CE_Ratio (1.595 to 0.781). Based on these findings, it can be inferred that fewer changes are required in our factual data to align with the characteristics of EP. Consequently, we can conclude the presence of EP patients and further investigate the discrepancies in features among other tumor types. Furthermore, Table <ref> is included to present additional potential clinical cases. The last patients belonging to each tumor type were chosen and removed from the decision space to assume their current status as unknown cases.
The MB counterfactual sample exhibits differences only in the FLAIR_Ratio feature, with a change from 1.141 to 0.742. In the case of EP, increases are observed in the FLAIR_Tumor feature (from 1107 to 2493) and the ADC_Ratio feature (from 0.87 to 2.316). Similarly, for PA, changes are observed in the T2_Ratio feature (from 1.638 to 2.61) and the ADC_Ratio feature (from 0.87 to 2.892). The algorithm selects changes in the ADC_Tumor feature (from 0.54 to 2.05) and ADC_Ratio (from 0.87 to 2.917) for the BG group.
In the case of PA, the factual data reveals differences in the T2_Ratio, FLAIR_Ratio, ADC_Tumor, and T1_Ratio features. Specifically, the changes in the MB counterfactual sample are from 2.297 to 0.97, 1.143 to 0.608, 1.879 to 0.4, and 0.57 to 0.535, respectively. For EP, decreases are observed in the T2_Tumor (1778 to 913.6), T2_Ratio (2.297 to 0.968), DWI_Tumor (809 to 402.32), DWI_Ratio (0.805 to 0.476), and ADC_Tumor (1.879 to 0.4) features. Conversely, for PA, no significant changes are observed except for the DWI_Ratio feature, which changes from 0.805 to 1.025. Regarding the BG group, changes occur in the T2_Ratio feature (from 2.297 to 1.072) and the T1CE_Ratio feature (from 1.125 to 0.768).
In the case of the last instance, BG, significant changes are required in almost all features to transform it into an MB patient compared to its factual data. For EP, changes are observed in the T2_Tumor feature (from 1709 to 860.3), T2_Ratio feature (from 1.724 to 0.908), FLAIR_Tumor feature (from 1150 to 2019), and ADC_Tumor feature (from 1.59 to 0.34). In case of PA, changes occur in the ADC_Tumor feature (from 1.59 to 1.56) and T1CE_Tumor feature (from 326 to 1234). Conversely, in the BG counterfactual, minimal changes are observed in the features, with only the FLAIR_Ratio feature undergoing a change from 1.189 to 0.752.
In Tables <ref>, <ref> and <ref>, T represents Tumor, and R represents Ratio (Tumor/Parenchyma). The variable x denotes the actual MRI feature values of a new patient with an unknown label. Although we have access to the ground truth in this testing, we assume that we do not. The variable x_cf represents a hypothetical scenario in which we transform the patient into all possible tumor types to obtain counterfactual outputs. These outputs help us identify which features are similar and which need to be altered to correspond to each tumor type. The symbol (-) indicates no modification of the feature. For example, in Table <ref>, the patient is identified as an EP patient based on having the lowest overall feature distance to the EP type. Thus, we predict that this new patient most likely has EP.
In Table <ref>, we present the same samples as depicted in Tables <ref> and <ref>, but with standardized features to enable meaningful distance measurements. This explicit representation of tumor classification allows for better comparison. For an actual patient diagnosed as MB, the distance from the counterfactual explanation generated using the MBs is 2.5, and it is close to an EP counterfactual explanation. In the case of a new patient diagnosed as EP, it is 2.985 units away from one of the closest MB counterfactual explanations and 0.35 units away from the generated EP counterfactual. If the patient is diagnosed as PA, it is 3.157 units away from the second closest BG counterfactual explanation and 1.252 units away from the PA counterfactual explanation. Lastly, for a patient diagnosed as BG, the distance from the generated BG counterfactual is 1.853 units, and is 2.574 units away from one of the closest PA counterfactual explanations. The data represented by (-) actually have the same values as the original data and are included for simplification and clearer representation. The distance metric used is generally independent of the results and only alters the magnitude of distances between them.
§.§ Revealing Key MRI Features through Counterfactual Explanations
As discussed in Section <ref>, counterfactual explanations can provide insights into feature importance. These explanations allow us to understand the reasoning behind ML model decisions and offer valuable options for restriction. In clinical settings, visible changes in features through counterfactual explanations can be more relevant and meaningful for real-world evaluations and applications.
Considering that we generated five counterfactuals for each patient, we obtained 125 explanations each for MB, PA, and BG, and 55 for EP. Table <ref> illustrates our reporting method for the counterfactual analysis results of one case scenario (MB to EP): it shows the patient count, the total number of counterfactual explanations generated for them, and how often each feature was changed in these counterfactuals, which identifies the top 3 influential features. For instance, "FLAIR_Tumor 71 changes" signifies that 71 out of 125 counterfactuals modified this feature in the transition from MB to EP. FLAIR_Tumor therefore creates such a distinction between these two tumors that the model considers altering it significantly influential in shifting the decision from one side of the decision space to the other. The larger the number of "changes," the more pronounced the outcome: even across random selections, the optimization settles on that particular feature, indicating that it significantly impacts the decision.
Table <ref> presents the findings from each tumor pair to identify feature differences between different tumor types. The observed changes in features align with expected outcomes from clinical studies. MB and EP tumors are distinguished by FLAIR and ADC features. MB and PA typically exhibit differences in T2 and ADC. MB and BG, on the other hand, show variations primarily in ADC, T2, and T1CE. In the case of EP and PA, T2 exhibits the most significant changes, while variations in ADC and T1CE are also observed. The most distinguishing features between EP and BG are T1CE_Ratio and ADC_Tumor. As for PA and BG, the T2_Ratio feature has been identified as a crucial factor in creating differentiation. Additionally, significant variations in T1CE features are frequently observed, further contributing to the dissimilarity between these tumor types.
The results presented in Table <ref>, along with the visualization in Fig. <ref>, provide the following insights into the PA example. In the MB-to-PA and PA-to-MB scenarios, similar patterns of change generally dominate in both directions; the distinctiveness of the distributions of MB and PA becomes evident when examining the top 5 features that exhibit the most variation, as shown in Fig. <ref> and discussed in our previous study <cit.>. Conversely, features whose distributions are nearly identical between BG and PA offer no discernibility, which is supported by their absence among the important features for the BG-to-PA or PA-to-BG transitions in Table <ref>. These findings indicate that the algorithm operates as expected during counterfactual generation: it alters the most distinct features to achieve maximum impact with minimal modification. Notably, T1CE features and T2_Ratio are identified as the most distinctive features between BG and PA, as shown in Fig. 5 of <cit.>.

This understanding supports the notion that the top 3 features with the most distinct distributions in the KDE plots are indeed the most discriminative features, corroborating our previous study <cit.>, which employed a different explanatory approach, and yielding numerous additional findings that can be interpreted in conjunction with that work. Moreover, the nature of the variation in counterfactual explanations inherently identifies the features with the greatest impact: the features that change most when generating counterfactual explanations for patients are precisely those that contribute the most to the dissimilarity between the two tumor types.
§.§ Statistical Analysis of Generated Counterfactuals
During the construction of the counterfactual tumor y from the original tumor x, we conducted a paired (dependent-samples) test to assess the statistical difference between x and y, as explained in Section <ref>. Apart from the PA to MB transition (e.g., p=0.04763, p=0.0307), no significant differences were observed for the other tumor transitions. This result can be attributed both to the fundamental optimization principle of minimizing changes during counterfactual generation and to the distribution distances shown in Fig. <ref>. Specifically, Fig. <ref>a demonstrates a distinct separation in the distributions for the PA to MB transition, requiring a significantly larger change for the transformation.

Table <ref> presents the statistical similarity obtained when each tumor is transformed to represent the "what if?" scenario of the other tumors. In other words, when we transform tumor x into tumor x'=y, we know that x' is still dependent on x. We therefore measure how similar x' is to the original distribution of y on the feature where x undergoes the most significant change. A high p-value indicates that we cannot reject the null hypothesis of equal distributions, implying that the counterfactual explanations we generate sufficiently resemble the original distribution of that particular feature.
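A minimal sketch of this similarity check, assuming a two-sample Kolmogorov-Smirnov test (the specific test is not fixed by this section, so the choice here is illustrative):

from scipy.stats import ks_2samp

def cf_similarity_pvalue(cf_values, target_values):
    """Compare counterfactual values of the most-changed feature against the
    original distribution of the target tumor type; a high p-value means the
    two samples cannot be distinguished."""
    _, p_value = ks_2samp(cf_values, target_values)
    return p_value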
As expected, when attempting self-transformation on each tumor type, the obtained p-values were notably high. Evaluating at a significance level of 0.05, several features closely aligned with the actual feature distribution of the patients, making them indistinguishable from the ground truth. The following features exhibited this characteristic: FLAIR_Ratio and FLAIR_Tumor in the case of transforming EP to MB, ADC_Ratio when transforming PA to MB, ADC_Ratio during the transformation from MB to EP, T2_Tumor and T1CE_Tumor in the context of PA to EP transformation, DWI_Ratio when transforming BG to EP, T2_Tumor for MB to PA transformation, T2_Tumor and T2_Ratio in the case of EP to PA transformation, T1CE_Ratio and T1CE_Tumor during BG to PA transformation, and DWI_Ratio when transforming EP to BG. Fig. <ref> presents some of these cases along with their KDE distributions.
§.§ Pushing the Boundaries of Data Augmentation through Alternative Realities
During the construction of counterfactuals, we employed downsampling for MB and BG to align with the number of PA patients (25) during training, considering it appropriate. EP had a count of 11, and we did not increase it. The baseline results for this scenario can be observed in Table <ref>. For evaluation, the train-test splitting was conducted with a ratio of 45% for the baseline dataset, 35% for EP augmentation, and 25% for EP-PA-BG augmentation.
To address the data imbalance, we examined the inclusion of generated counterfactuals for data augmentation. For example, to equalize EP with the other tumor types, we incorporated 14 generated counterfactuals alongside the originals, excluding EP-to-EP instances. By opting for transitions from various tumor types to maximize variance and generalizability, we achieved an improvement of up to 12.06%, as shown in Table <ref>, case A.
To incorporate the previously set-aside MB and BG data, we paired each tumor type with counterfactuals generated from the other tumor types (excluding counterfactuals of a type onto itself). BG, PA, and EP were brought up to the level of MB, and all types were evaluated as groups of 42 patients, the maximum patient count for a single tumor type. When the counterfactuals are treated as actual patients, the outcomes are those presented in Table <ref>, case B.
Furthermore, in the case examined in Table <ref>, case C, 11 patients were included from each tumor type in the test set, resulting in no actual EP patients in the training set. Consequently, during training, we had 31 real samples for MB, 0 real and 31 counterfactual samples for EP, 14 real and 17 counterfactual samples for PA, and 23 real and 8 counterfactual samples for BG. Notably, when evaluating on real samples, the results were intriguing. Despite the absence of real EP patients in the training data, the model successfully identified 5 out of the 11 patients, leading to an overall baseline score that was, on average, 0.76% higher.
§ DISCUSSION
The spatial heterogeneity in tumor characteristics presents a substantial clinical challenge in pediatric brain tumors. Specifically, tumors originating from the posterior fossa often exhibit overlapping imaging features, leading to difficulties in accurate differentiation, even for experienced clinicians. Accurate diagnosis is of paramount importance, as each tumor type requires specific treatment strategies, directly impacting patient outcomes and overall quality of life. Despite the promising advancements in AI and medical imaging, the inherent black-box nature of most models and the difficulty of convincing clinicians to adopt them in everyday practice often restrict these studies to the realm of research. It is crucial that these developments mature into interactive and trustworthy tools that clinicians can readily utilize in real-life scenarios. Hence, our study introduces a novel approach to the existing literature, offering valuable insights into the underlying patterns and relationships among the features observed in MRI scans. We hypothesize that exploring "what if?" scenarios can significantly enhance our understanding of alternative outcomes and their implications for clinical decision-making. To the best of our knowledge, this research represents a pioneering effort in the investigation of pediatric brain tumors, highlighting its substantial influence on the interpretability and generalizability of ML models in this domain. By exploring alternative scenarios and their impact, we aim to contribute to the advancement of precise diagnostics and improved patient care in this challenging field.
The primary objective of this study was to enhance the interpretability of ML models’ outcomes and provide additional insights using a novel approach. Despite being debated in the fields of philosophy and psychology for half a century, the core idea of counterfactuals has been employed in the field of artificial intelligence under various names, and their complete implementation is relatively recent. In this study, we transformed this idea into the clinical literature to extract valuable information that could be beneficial for clinicians in real-life scenarios. We aim to demonstrate both the alternative possibilities in the decision space and the underlying reasons behind the selected decision by utilizing alternative realities. To achieve this, we perturbed the original data by imposing various constraints during a relatively straightforward mathematical optimization process.
The generated counterfactual explanations provide evidence that there is not always a single definitive choice in life. When considering the diversity in individuals' biological characteristics, it becomes apparent that approaching each case may require a personalized approach. This notion aligns with the concept of personalized healthcare, which has been extensively explored in the health literature <cit.>. In other words, our approach involves producing explanations tailored to each newly arrived patient by drawing insights from previous patients. By leveraging the decision space, we can identify the closest data points in terms of biological characteristics to the newly arrived patient and construct alternative realities specific to that individual. These alternative scenarios allow us to observe the differences in the tumor on the MRI and gain insights into which tumor type it is most closely related to.
As there were no existing counterfactual studies in the literature regarding PF tumors, our study aimed to bridge this gap by subjecting the obtained outputs to various statistical tests. The objective was to provide a comprehensive exploration of the subject matter for enhanced clarity. We specifically investigated two aspects: first, the potential utilization of the generated counterfactuals as a post-classifier, and second, whether they could reveal significant MRI features associated with the corresponding tumor. These investigations were conducted with meticulous attention. Furthermore, we examined the potential impact of reintroducing these diverse counterfactuals into the dataset to address the issue of data imbalance. Statistical tests were also performed to assess the similarity of counterfactuals generated from different spaces to the transformed target space. The results of these tests are presented in Section <ref>.
The LR model exhibited the highest score in our evaluation, and therefore, we utilized it to generate counterfactual explanations. To facilitate the model's learning process and balance the data, we included 11 instances of the EP tumor type while selecting 25 patients from the remaining tumor types, despite there being more instances of certain tumor types.
To automate the generation of counterfactuals for all patients, we developed a framework. However, in cases where an optimal counterfactual explanation cannot be found, the process halts; comprehensively handling such situations is currently not feasible and requires updates to the DiCE framework. Although we did not encounter this problem with the LR model, an alternative is, given a sufficient number of patients, to work with a subsample and replace any excluded patient with another patient from the actual population for statistical testing purposes. Instead of DiCE, alternative methodologies based on other counterfactual algorithms may also be employed, which can solve the optimization problem more efficiently.
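For concreteness, a hedged sketch of how such a pipeline can be wired with the DiCE library (dice_ml) follows; the variable names (train_df, mri_features, lr_model, query_patient, target_class) are illustrative placeholders, and the method and parameter choices are assumptions rather than the study's exact configuration.

import dice_ml

d = dice_ml.Data(dataframe=train_df, continuous_features=mri_features,
                 outcome_name="tumor_type")
m = dice_ml.Model(model=lr_model, backend="sklearn")
explainer = dice_ml.Dice(d, m, method="random")

try:
    result = explainer.generate_counterfactuals(
        query_patient,                   # one-row DataFrame of MRI features
        total_CFs=5,                     # five counterfactuals per patient
        desired_class=target_class,      # index of the target tumor type
        features_to_vary=mri_features)   # restrict which features may change
    result.visualize_as_dataframe()
except Exception:
    # As noted above, DiCE may fail to converge for some patients,
    # which currently halts the automated pipeline for that patient.
    pass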
Obtaining specific results for individual patients is not problematic and can be achieved through parameter adjustments or by utilizing different models. However, if the goal is to validate the study and focus on medical research rather than one-off outputs, manually deriving counterfactual explanations for each patient in a comprehensive statistical analysis would be ineffective and time-consuming. Therefore, as demonstrated in this study, there is a clear need for at least a semi-automated system.
Fig. <ref> and Table <ref> depict a hypothetical scenario involving a patient with an initially unknown EP tumor. The radiologist examining the MR images was uncertain about whether the tumor was of the MB or EP type. A key challenge in such cases is the lack of additional information, which often necessitates invasive procedures like brain surgery and tissue sampling for histopathological analysis to obtain a definitive diagnosis. To overcome this issue, we generate alternative scenarios based solely on the MRI features. These scenarios provide additional quantitative information to the radiologist, enabling them to assess the response based on the individual's biological characteristics.
Moreover, Table <ref> presents examples of other potential tumor cases, while Table <ref> demonstrates the efficacy of our approach in identifying patients with diverse tumor types that were previously unidentified and not encompassed within the decision space. While ML models can also accomplish this task, our method offers an additional advantage by preserving information regarding tissue characteristics, which in turn reveal similarities or differences among tumors. Additionally, our approach calculates distances to other tumors by transforming the features into a uniform distribution through standard scaling, providing valuable insights about the proximity. This valuable information aids in our comprehension of the differentiation among tumors in the dataset.
Table <ref> presents the total count of modifications made to susceptible features, with the exception of the parenchyma features that serve as reference points, when generating samples for different patients. The statistical report enables human verification of the optimization process, wherein minimal changes are implemented to achieve the desired outcome. It also confirms that the features exhibiting the highest variation during the generation of alternative realities are those with the most distinct distributions between two tumors. To elucidate the analysis of their distributions, we present Fig. <ref> as a visual representation.
Table <ref> presents the top three most variable features extracted from the reports obtained for all tumor matches in Table <ref>. Table <ref> exhibits a statistical analysis demonstrating the high degree of similarity between the generated data and reality across different data spaces, specifically focusing on the most frequently selected features. A high p-value indicates that the generated samples cannot be well distinguished, implying the effectiveness of the independent transformation process, which produces significant alternative realities separate from the original space. Fig. <ref> illustrates an example of some transformations from Table <ref>, displaying their corresponding p-values, as well as the kernel density estimation of the generated data in comparison to the original data.
Ultimately, we investigated the potential of the generated alternative realities for data augmentation. The reliability of data augmentation methods such as SMOTE <cit.> is controversial in medical research due to their algorithmic dependencies and the often negligible impact of the generated data on the distribution. Such approaches often prioritize test-performance gains, as if any means of improving test performance were acceptable, without considering alignment with reality. As a result, the generated data mostly lacks interpretability and becomes disconnected from real-world scenarios. We believe that accepting this approach as universally valid would be misguided: in situations where both the available data and testing conditions are limited, relying solely on these approaches may not ensure generalizability, and it is essential to recognize the limitations and potential drawbacks of using generated data for generalization purposes. In medical studies with limited data, we propose that the generated counterfactuals provide an alternative solution to this problem. Table <ref> presents the performance evaluation of the data augmentation methods we assessed.
The results presented in Table <ref>, case C cannot be directly compared with the baseline due to the inclusion of additional test patients during the testing phase. However, it is evident that the inclusion of more real test samples leads to a slight improvement under the given training conditions. It is important to acknowledge that some EP patients pose challenges in terms of differentiation, as mentioned in <cit.>. When these difficult patients are included in the training data, they can significantly elevate the scores; if they end up in the test set instead, they act as unpredictable outliers, negatively impacting the overall results. Despite these challenges, achieving accurate predictions for half of these patients without utilizing any real EP data in the training set is commendable and warrants attention in future research.
Moreover, the incorporation of counterfactual explanations holds potential for identifying and addressing model bias in medical diagnosis. In certain healthcare data scenarios, removing constraints such as gender or ethnicity, which we previously recommended adding as restrictions to this approach, may support the development of fairness, transparency, and accountability in algorithmic decision-making processes. However, further research and implementation efforts are required to explore and validate the applicability of counterfactual explanations for addressing model bias in medical research and practice.
There are several limitations that should be considered in the present study. One limitation pertains to the implementation of the DiCE method, which may pose challenges when applied to diverse datasets. The method sometimes requires extensive optimization time and can encounter difficulties in finding a convergence point, potentially hindering the generation of accurate counterfactual explanations. To address this issue, alternative counterfactual explanation methods (e.g., Dutta et al. <cit.>, Maragno et al. <cit.>) can be explored in conjunction with our proposed approach. For a more comprehensive collection of counterfactual algorithms, readers can refer to Guidotti's review paper <cit.>. Additionally, the dataset utilized in this study has limitations in terms of its scope and size. While it included an adequate number of samples for training ML models, it may not fully capture the range of scenarios encountered in clinical practice, thus potentially limiting the generalizability of the findings to other datasets. Furthermore, the dataset only encompassed four specific types of pediatric PF tumors, which may not represent the entire spectrum of pediatric brain tumors. Future studies should consider expanding the sample size and incorporating additional advanced MRI protocols, such as semiquantitative and quantitative perfusion MRI and MR spectroscopy, to gain deeper insights into the diagnostic and prognostic value of MRI features for pediatric PF tumors.
§ CONCLUSION
In conclusion, this paper introduces a novel perspective on interpretability in medical research, focusing on pediatric PF brain tumors as a case study. Leveraging counterfactual explanations, the study offers personalized and context-specific insights, validating predicted outcomes and shedding light on variations in predictions under different circumstances.
The proposed approach shows great promise in enhancing the interpretability of MRI features for medical research studies. By bridging the gap between ML algorithms and clinical decision-making, it has the potential to facilitate the adoption of advanced computational techniques in medical practice. Clinicians can benefit from valuable insights gained from the generated counterfactual explanations, leading to improved decision-making processes and ultimately better patient outcomes. Notably, the counterfactual explanations generated in this study maintain statistical and clinical fidelity in many cases, underscoring their significance.
To fully realize the potential of this approach, further research and validation are essential. Integrating counterfactual explanations into existing clinical workflows and evaluating their performance in real-world scenarios will be critical to ensuring the reliability and practicality of this method. The continued development and refinement of utilizing counterfactual explanations in MRI-based diagnoses could revolutionize the medical field, benefiting both patients and healthcare providers. Therefore, future studies with larger datasets within the same domain or for different diseases could yield even more robust alternative realities constructed from MRI features. Overall, this study represents a significant step forward in moving beyond known reality and improving the application of ML in medical research and practice.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ FUNDING
This work was supported by The Scientific and Technological Research Council of Türkiye (TUBITAK) [2232-118C221].
§ DATA & CODE AVAILABILITY
The datasets generated and/or analyzed during the current study are not publicly available due to privacy concerns but are available from the corresponding author upon reasonable request.
The source codes of the presented study can be accessed at: https://github.com/toygarr/counterfactual-explanations-for-medical-researchhttps://github.com/toygarr/counterfactual-explanations-for-medical-research
|
http://arxiv.org/abs/2307.02745v1
|
20230706031111
|
ALPCAH: Sample-wise Heteroscedastic PCA with Tail Singular Value Regularization
|
[
"Javier Salazar Cavazos",
"Jeffrey A. Fessler",
"Laura Balzano"
] |
stat.ML
|
[
"stat.ML",
"eess.SP"
] |
|
http://arxiv.org/abs/2307.02253v1
|
20230705125048
|
Multivariate Time Series Classification: A Deep Learning Approach
|
[
"Mohamed Abouelnaga",
"Julien Vitay",
"Aida Farahani"
] |
cs.LG
|
[
"cs.LG"
] |
Multivariate Time Series Classification: A Deep Learning Approach
Mohamed Abouelnaga, Julien Vitay, Aida Farahani
§ ABSTRACT
This paper investigates different methods and various neural network architectures applicable in the time series classification domain. The data is obtained from a fleet of gas sensors that measure and track quantities such as oxygen and sound. With the help of this data, we can detect events such as occupancy in a specific environment.
At first, we analyze the time series data to understand the effect of different parameters, such as the sequence length, when training our models. These models employ Fully Convolutional Networks (FCN) and Long Short-Term Memory (LSTM) for supervised learning and Recurrent Autoencoders for semi-supervised learning.
Throughout this study, we highlight the differences between these methods based on metrics such as precision and recall, identifying which technique best suits this problem.
roman
arabic
§ INTRODUCTION
A time series is a collection of data points ordered in time <cit.>. The analysis of this data is very beneficial in many domains, such as weather forecasting <cit.>. An important application of time series classification is anomaly detection, which is applicable in many domains; e.g., with the help of time series data such as velocity and acceleration, dangerous driving behaviors can be detected <cit.>.
§.§ Motivation
Our motivation for this paper is to harness the time series data obtained from a fleet of gas sensors deployed by Corant GmbH / Air-Q company [<https://www.air-q.com>] in many homes and companies to detect events that can't be measured directly by these sensors. The primary function of these sensors is to measure and track many chemicals and quantities, such as O2, CO2, NO2, pressure, and sound.
With the help of machine learning, we extend these sensors' functionality by detecting further events, such as whether a specific environment is occupied within a particular time range or whether the windows of the place are open. We can notify users of these events, which gives them more control over their environments and raises the safety level. Moreover, the investigated methods can be tailored to similar problems in the domain of time series analysis.
§.§ Methods
Event detection in time series data can be done using various deep-learning architectures. We exploit the power of Fully Convolutional Networks (FCN) and Long Short-Term Memory (LSTM) in supervised learning. We also introduce a simple Recurrent Autoencoder, which uses the unlabeled data for semi-supervised learning.
We mainly treat our problem as multi-label classification, in which the two primary classes {'person', 'window_open'} can be detected simultaneously, with a binary cross-entropy loss function <cit.>.
We also experiment with a separate network for each class in a single-label classification manner with softmax as an output layer <cit.>.
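The two output variants can be sketched in PyTorch as follows; model, person_model, and the batch tensors are assumed placeholders rather than our exact code.

import torch.nn as nn

# Multi-label head: one logit per class; both labels may be active at once.
# BCEWithLogitsLoss applies the sigmoid internally.
logits = model(batch_x)                               # shape: (batch, 2)
loss_multi = nn.BCEWithLogitsLoss()(logits, batch_y.float())

# Single-label alternative: a separate network per class with a
# softmax/cross-entropy head (CrossEntropyLoss includes the softmax).
person_logits = person_model(batch_x)                 # shape: (batch, 2)
loss_person = nn.CrossEntropyLoss()(person_logits, person_targets)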
§.§.§ Fully Convolutional Network
Our problem deals with multivariate time series data, so an FCN can be applied to grasp the local and global features of each input channel. An FCN has no pooling operations; for this reason it is also used in other applications, such as semantic segmentation, to produce pixel-wise outputs <cit.>.

The FCN acts as a feature extractor in our setting, as shown in Fig. <ref>. It consists of several convolutional blocks, each comprising a convolutional layer followed by a batch normalization layer with a Rectified Linear Unit (ReLU) activation. Batch normalization helps reduce overfitting and speeds up convergence <cit.>.
As shown in <cit.>, FCN originally stacks three convolutional blocks with 1-D kernel sizes of {8, 5, 3} and filters count of {128, 256, 128} respectively. The number of filters and kernel sizes can be optimized to better suit the problem, especially in small data sets.
Instead of directly applying a fully connected layer, the last convolutional block is followed by a Global Average Pooling (GAP) layer that averages its output over the time dimension. This drastically reduces the number of parameters.
To preserve the time series length after each convolutional block, the convolution operations have zero padding and a stride equal to 1.
One main advantage of FCN is that it can handle time sequences with different sizes, unlike the standard Recurrent Neural Networks that struggle with long-term dependencies.
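A minimal PyTorch sketch of this FCN, with the original filter counts and kernel sizes, is given below; it is an illustrative reimplementation rather than the exact code used in the experiments.

import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out, k):
        super().__init__()
        # "same" padding and stride 1 preserve the sequence length
        self.block = nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel_size=k, padding="same"),
            nn.BatchNorm1d(c_out),
            nn.ReLU())

    def forward(self, x):
        return self.block(x)

class FCN(nn.Module):
    def __init__(self, c_in, n_classes, filters=(128, 256, 128), kernels=(8, 5, 3)):
        super().__init__()
        blocks, prev = [], c_in
        for f, k in zip(filters, kernels):
            blocks.append(ConvBlock(prev, f, k))
            prev = f
        self.features = nn.Sequential(*blocks)
        self.gap = nn.AdaptiveAvgPool1d(1)     # global average pooling over time
        self.head = nn.Linear(prev, n_classes)

    def forward(self, x):                      # x: (batch, channels, seq_len)
        z = self.gap(self.features(x)).squeeze(-1)
        return self.head(z)                    # logits, e.g. for BCEWithLogitsLoss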
§.§.§ InceptionTime
InceptionTime <cit.> is a state-of-the-art architecture that achieves very high accuracy when applied to time series classification. It is an ensemble of five Inception networks initialized with different random weights, each with two residual blocks, as opposed to ResNet <cit.>, which has three residual blocks, as shown in Fig. <ref>. The residual connections combat the vanishing gradient problem <cit.>.
Each residual block is comprised of three Inception modules. After the second block, a Global Average Pooling (GAP) is applied instead of directly using a fully connected layer.
As shown in Fig. <ref>, the core component of each Inception module is applying m filters with a stride equal to 1 and a length of 1. The result is called the bottleneck layer. This layer significantly reduces the dimension of the time series input and the model complexity. This technique allows for a longer filter with almost the same number of parameters as ResNet.
After that, several convolutions with different sizes are applied simultaneously on the bottleneck layer. To mitigate the perturbations, another MaxPooling layer is applied and concatenated with the output of the previous convolutions.
§.§.§ Long Short-Term Memory
A Recurrent Neural Network (RNN) is an essential architecture when dealing with time series data, as the output depends on a history of inputs ordered in time. However, RNNs struggle to detect long-term dependencies due to the application of Back Propagation Through Time (BPTT) over a specific horizon <cit.>.

In contrast, the Long Short-Term Memory (LSTM) <cit.> cell uses a state representing a "memory" or "context", besides the inputs and outputs, to overcome this issue. An LSTM contains three gates to control the dependencies: an input gate to select the inputs, a forget gate to free parts of the memory, and an output gate to control the output, as shown in Fig. <ref>.
We use an LSTM in our supervised method with only one hidden layer, as shown in Fig. <ref>.
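A sketch of such a classifier in PyTorch (an illustrative reimplementation; the default hyperparameters follow the tuned values reported later):

import torch.nn as nn

class LSTMClassifier(nn.Module):
    """One-layer uni-directional LSTM; the last hidden state feeds the head."""
    def __init__(self, n_features, hidden_size, n_classes, dropout=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=1, batch_first=True)
        self.drop = nn.Dropout(dropout)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        _, (h_n, _) = self.lstm(x)
        return self.head(self.drop(h_n[-1]))   # logits for the two classes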
§.§.§ Recurrent Autoencoder
Supervised learning algorithms require a lot of labeled data to train the model, especially with multivariate time series data; however, obtaining annotated data is a challenging and usually expensive task. According to the Vapnik-Chervonenkis theorem <cit.>, the generalization error depends significantly on the amount of data the model is trained on, not only on the complexity of the model.

Since unlabeled data, in contrast, is cheap to obtain, we can combine it with a small amount of labeled data to achieve good accuracy via semi-supervised learning <cit.>. Moreover, random initialization of the parameters can lead to longer training times. Therefore, as shown in Fig. <ref>, we can train a Recurrent Autoencoder on the unlabeled data by minimizing the reconstruction error, measured with the Mean Squared Error (MSE) <cit.>.

We then use only the encoder, with frozen parameters, together with a shallow classifier on the labeled data, resulting in fewer trainable parameters and a good initialization.
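A minimal PyTorch sketch of such an autoencoder (an illustrative reimplementation; the layer widths follow those reported in the experiments section):

import torch.nn as nn

class RecurrentAutoencoder(nn.Module):
    """LSTM encoder-decoder trained to reconstruct the input sequence (MSE)."""
    def __init__(self, n_features, latent_size):
        super().__init__()
        self.enc1 = nn.LSTM(n_features, 128, batch_first=True)
        self.enc2 = nn.LSTM(128, 64, batch_first=True)
        self.enc3 = nn.LSTM(64, latent_size, batch_first=True)
        self.dec1 = nn.LSTM(latent_size, 64, batch_first=True)
        self.dec2 = nn.LSTM(64, 128, batch_first=True)
        self.out = nn.Linear(128, n_features)

    def encode(self, x):                       # x: (batch, seq_len, n_features)
        x, _ = self.enc1(x)
        x, _ = self.enc2(x)
        x, _ = self.enc3(x)
        return x                               # (batch, seq_len, latent_size)

    def forward(self, x):
        z = self.encode(x)
        z, _ = self.dec1(z)
        z, _ = self.dec2(z)
        return self.out(z)                     # reconstruction of x

# Pre-train with nn.MSELoss()(ae(x), x); afterwards freeze the encoder's
# parameters (requires_grad = False) and train only the shallow classifier.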
§.§ Software Setup
For applying the previous neural network architectures, we used the "Tsai" library <cit.>, which is based on "PyTorch" <cit.> and "Fastai" <cit.>.
For data manipulation and analysis, we used "Pandas" <cit.>, "NumPy" <cit.>, and "Scikit-Learn" <cit.>.
For plotting the graphs, we used "Matplotlib" <cit.> and "Plotly" <cit.>.
§ EXPERIMENTS
Before applying any method, we first need to understand the data. The data contains 17 features, namely {pressure, temperature, sound, tvoc, oxygen, humidity, humidity_abs, co2, co, so2, no2, o3, pm2_5, pm10, pm1, sound_max, dewpt}, and two classes {person, window_open}. The sensors record a sample every two minutes.
The labeled data is collected using only one device from July 2022 to December 2022, while the unlabeled data is collected using 740 sensors over two years.
§.§ Cleaning Data
For a visualization of the labeled data (see Fig. <ref>), we used only two features, {o2, co2}, together with the labels, for the sake of readability, to show how the classes are distributed over time. We note that no data was present in August and most of September.

To better understand the label distribution, Fig. <ref> shows that the class person has very few labels in which more than one person exists in the environment. Therefore, we merge all labels in which at least one person is present into a single label, leaving two binary classes {person, window_open}.

An important aspect of data preparation is handling missing values; however, in time series data we should not simply delete them, as that would break the sampling frequency.

Once the missing values are located, as shown in Fig. <ref>, we can interpolate them to keep the same timeline. Fortunately, our data contain at most 20 missing values, all at the beginning, so it was safe to delete them directly; in general, however, we use linear interpolation to substitute for missing values.
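In pandas, this amounts to interpolating over the time index, e.g. (file and column names are illustrative):

import pandas as pd

df = pd.read_csv("sensor_data.csv", parse_dates=["timestamp"], index_col="timestamp")
# Linear interpolation fills the gaps without breaking the 2-minute sampling grid
df = df.interpolate(method="linear")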
§.§ Features Reduction
We have a small data set (below 70,000 samples), which can lead to low performance and high generalization errors. Also, not all features are equally crucial for accurate classification. One way to deal with that is to use fewer independent features.
We used the Pearson correlation coefficient <cit.> to build a correlation matrix, shown in Fig. <ref>, and eliminate the most strongly correlated features. We obtained nine features {humidity, temperature, tvoc, oxygen, co2, co, pressure, o3, sound} that correlate well with the classes.
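One common way to carry out this elimination (the 0.9 threshold is an assumption for illustration, not the study's exact criterion):

import numpy as np

corr = df.corr(method="pearson").abs()
# Examine each pair once (upper triangle) and drop one feature per
# highly correlated pair
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [c for c in upper.columns if (upper[c] > 0.9).any()]
reduced = df.drop(columns=to_drop)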
§.§ Under Sampling
After merging the labels of the data set, an obvious label imbalance appears, as in Fig. <ref>. This can lead to misleading metrics when comparing different methods: for person detection, a model may report high accuracy while performing poorly, because the ratio of "Person" to "No Person" labels is very low.

We use under-sampling to overcome the imbalance problem and obtain more faithful metrics. As detections of people and open windows are rare, we keep only a fixed number of samples before and after every detection of a person or an open window, which yields more balanced data, as shown in Fig. <ref>.
We note that the sliding operation must be applied separately to each segment resulting from under-sampling to obtain time sequences, and the segments are then concatenated to preserve the temporal order of the data; a sketch of the under-sampling step follows.
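A minimal sketch of the segment selection (function and column names are assumptions):

import numpy as np

def undersample(df, label_cols=("person", "window_open"), size=30):
    """Keep `size` samples before and after every positive detection; the
    boolean mask defines contiguous segments for later windowing."""
    positive = df[list(label_cols)].any(axis=1).to_numpy()
    keep = np.zeros(len(df), dtype=bool)
    for i in np.flatnonzero(positive):
        keep[max(0, i - size):i + size + 1] = True
    return df[keep]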
To compare the results of under-sampling with the unbalanced data, as shown in Table <ref>, we perform under-sampling with a size of 30 and train the FCN with default parameters on 70% of the data, normalized with a standard scaler, keeping 20% for validation and 10% for testing, as shown in Fig. <ref> and Fig. <ref>. This splitting is done randomly with a sequence length of 15, a stride of one, and labeling at the end of each sequence.
We used only ten epochs with cosine learning rate scheduling <cit.>. Also, we compared the usage of all features against the minimized features we deduced in the previous section.
The results show that training on the unbalanced data consumes roughly six times the training time of under-sampling, with similar F1 scores. Hence, we can use under-sampling to save time, and we can safely work with the reduced feature set.
§.§ Sequence Labeling and Normalization
For time series data, we segment the samples into sequences of a fixed length and assign a label to each sequence. This label can be taken from the first sample, the last sample, or the mean of all labels within the sequence; a sketch of this segmentation is given below.
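A sketch of the windowing (illustrative; the experiments vary seq_len, stride, and the labeling position):

import numpy as np

def make_sequences(X, y, seq_len=7, stride=1, labeling="first"):
    """Slide a window over the samples and label each window by its first
    label, its last label, or the rounded mean of its labels."""
    xs, ys = [], []
    for start in range(0, len(X) - seq_len + 1, stride):
        window, labels = X[start:start + seq_len], y[start:start + seq_len]
        if labeling == "first":
            lab = labels[0]
        elif labeling == "last":
            lab = labels[-1]
        else:                               # mean value, rounded to binary
            lab = np.round(labels.mean(axis=0))
        xs.append(window)
        ys.append(lab)
    return np.stack(xs), np.stack(ys)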
In Table. <ref>, we compare the performance of FCN with the same settings as before while using different sequence labeling. We can find that labeling the sequences from the start or taking the mean value would result in better performance.
Also, Table <ref> compares applying a standard scaler and a min-max scaler <cit.> when normalizing the data, with the mean value used for sequence labeling. We obtain similar results; hence we use a standard scaler from now on.
§.§ Benchmarking Architectures
As shown in Table <ref>, we compared FCN, LSTM (uni-directional and bi-directional one-layer networks), and InceptionTime, all with default parameters and the mean value for sequence labeling. Despite the good results of InceptionTime, we skip it in later experiments, as our labeled data is small relative to the model's parameter count and its training time is longer. From now on, we use FCN and a uni-directional one-layer LSTM as the main models.
§.§ Minimized Architecture and Sequence Length
FCN with default parameters has a very high parameter count (269,698) compared to training samples, even after increasing the under-sampling size to 50 (5,000 training sequences on average, depending on sequence labeling). Therefore, we initially tune FCN to consist of only two convolutional blocks with kernel sizes of {5, 3} and filter count of {16, 32}, resulting in 2,418 parameters. This can mitigate overfitting.
Also, the test set was chosen using random splitting after segmentation, which may lead to data leakage while training, producing more biased accuracy. Therefore, we prefer a different test set that is separated before segmentation.
Fig. <ref> shows this test set and the remaining training set. Using this test set, we experiment with various sequence lengths using the FCN, trained for 100 epochs with early stopping <cit.>. According to the results in Table <ref>, we choose a sequence length of 7, which corresponds to 14 minutes in real time and avoids a long prediction delay; we also use the first label of each sequence in further experiments.
We also keep treating our problem as multi-label classification, which is more realistic since a person and an open window can be detected simultaneously; the results are comparable with single-label classification, e.g., with a sequence length of 10 and mean-value labeling, we obtain F1 scores of (0.94, 0.84) for the person and window classes, respectively.
§.§ Hyperparameter Optimization
We use Optuna <cit.>, an automatic hyperparameter optimization framework. The search space for the minimized FCN is the number of filters from 8 to 32 with a step of 4; for the LSTM, we optimize the hidden size from 10 to 30 with a step of 2 and the dropout from 0.1 to 0.5 with a step of 0.1. We maximize the F1 score over 100 Optuna trials.
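A sketch of the search for the LSTM (train_and_eval_f1 is an assumed helper that trains the model and returns the validation F1 score; LSTMClassifier refers to the earlier sketch):

import optuna

def objective(trial):
    hidden = trial.suggest_int("hidden_size", 10, 30, step=2)
    dropout = trial.suggest_float("dropout", 0.1, 0.5, step=0.1)
    model = LSTMClassifier(n_features=9, hidden_size=hidden,
                           n_classes=2, dropout=dropout)
    return train_and_eval_f1(model)         # maximize validation F1

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)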
This yields an FCN of 2,306 parameters with two convolutional blocks of 32 and 8 filters with kernel sizes 5 and 3, respectively, and a one-layer uni-directional LSTM of 2,950 parameters with a hidden size of 26 and a dropout of 0.2.
We present the results in Table. <ref>, with precision and recall metrics <cit.> beside the F1 score to reflect the contribution of false positives and false negatives separately.
For better records of true positives, true negatives, false positives, and false negatives, Fig. <ref> and Fig. <ref> show the confusion matrices in case of FCN and LSTM, respectively, for both classes {person, window_open}.
§.§ Predictions Distribution and Features Visualization
As precision and recall don't reflect the distribution of predictions over time, we visualize this distribution in Fig. <ref> and Fig. <ref> for FCN and LSTM, respectively.
Also, we visualize the feature space using Principal Component Analysis (PCA) <cit.> for the FCN and LSTM, as in Fig. <ref> and Fig. <ref>, respectively, on our separate labeled test set.

We can also see in Fig. <ref> that, when using unlabeled data from a different sensor, the feature space does not follow the same distribution as that of the labeled test set.
§.§ Encoder Classifier
Now we can use around 8,000,000 unlabeled sequences to train the recurrent autoencoder, then use the trained encoder with a shallow classifier consisting of a fully connected layer of 100 neurons and train it on the labeled data.
The encoder consists of three LSTM layers with sizes of {128, 64, latent_size} respectively, and the decoder also consists of three LSTM layers with sizes of {latent_size, 64, 128} respectively; where "latent_size" represents the latent space size.
We experiment with three different latent space sizes of {2, 10, 16} with parameters of {244,449, 249,825, 254,865} respectively.
Testing the different encoder classifiers on the same labeled test set as before, the scores in Table <ref> and the prediction distributions in Fig. <ref>, Fig. <ref>, and Fig. <ref> show that a latent size of two is too shallow to compress the 17 features efficiently.

Therefore, we conduct the next experiments with an embedding size of 10, as it gives results similar to 16 but with fewer parameters.
We also show in Fig. <ref> the confusion matrices for the encoder classifier with latent_size = 10, also Fig. <ref> shows the PCA of the latent space of the Encoder.
Also, Fig. <ref> shows the PCA of the encoder classifier applied to the same unlabeled test set used earlier for the FCN; in contrast to the FCN, it follows the distribution of the feature space.

We can also smooth the predictions by rectifying spikes of various widths, which are considered errors, as shown in Fig. <ref>.
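One simple way to implement this smoothing is a per-class median filter whose kernel exceeds the spike widths regarded as spurious (the kernel size here is an assumption):

from scipy.signal import medfilt

# predictions: 1-D array of binary labels for one class
smoothed = medfilt(predictions.astype(float), kernel_size=5).round().astype(int)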
Finally, Fig. <ref> shows some smoothed predictions of the encoder classifier applied to various unlabeled test sets collected from different sensors, plotted together with only three signals {o2, co2, humidity_abs} for better visualization. The signals are included to show their correlation with the predictions over time.
§ CONCLUSION
Time series data can be found everywhere in nature; therefore, exploiting them benefits many applications. In this paper, we presented two different deep-learning approaches to notify a user if a person is present or a window is open in their environment, using data obtained from various gas sensors. In the first approach, we used supervised learning with two architectures, FCN and LSTM. This method works well but generalizes poorly when data is insufficient.
Also, we examined the usage of a semi-supervised learning technique by training a recurrent autoencoder on the unlabeled data, then using the trained encoder with a shallow classifier on the labeled data. This allows using less labeled data as we train only the classifier while freezing the encoder.
Several practices matter when dealing with time series data, such as cleaning the data by interpolating missing values rather than removing them, to preserve the timeline. Choosing the sequence length and the labeling position for each sequence are two further important factors. There is no significant difference between standard and min-max scalers as long as normalization is performed before training. Finally, analyzing the feature space of the chosen architecture and visualizing the distribution of predictions give more insight into the best-fitting solution.
Ultimately, we can get more robust results if we use more data, in-depth hyperparameter optimization, or even different architectures. One important future architecture to examine is the self-supervised learning technique using Transformers <cit.>.
|
http://arxiv.org/abs/2307.01405v1
|
20230703235226
|
Mitigating the choice of the duration in DDMS models through a parametric link
|
[
"Fernando Henrique de Paula e Silva Mendes",
"Douglas Eduardo Turatti",
"Guilherme Pumi"
] |
stat.AP
|
[
"stat.AP",
"math.ST",
"stat.TH",
"62M10, 62F10, 91B84"
] |
Mitigating the choice of the duration in DDMS models through a parametric link
Fernando Henrique de Paula e Silva Mendes^a (corresponding author), Douglas Eduardo Turatti^b and Guilherme Pumi^a

^a Graduate Program in Statistics - Federal University of Rio Grande do Sul. ^b Aalborg University Business School.

E-mails: fernandohpsm@hotmail.com (F.H.P.S. Mendes), guilherme.pumi@ufrgs.br (G. Pumi) and det@business.aau.dk (D.E. Turatti)

This version: August 1, 2023
One of the most important hyper-parameters in duration-dependent Markov-switching (DDMS) models is the duration of the hidden states. Because there is currently no procedure for estimating this duration or testing whether a given duration is appropriate for a given data set, an ad hoc duration choice must be heuristically justified. This is typically a difficult task and is likely the most delicate point of the modeling procedure, allowing for criticism and ultimately hindering the use of DDMS models. In this paper, we propose and examine a methodology that mitigates the choice of duration in DDMS models when forecasting is the goal. The idea is to use a parametric link instead of the usual fixed link when calculating transition probabilities. As a result, the model becomes more flexible and any potentially incorrect duration choice (i.e., misspecification) is compensated by the parameter in the link, yielding a likelihood and transition probabilities very close to the true ones while, at the same time, improving forecasting accuracy under misspecification. We evaluate the proposed approach in Monte Carlo simulations and using real data applications. Results indicate that the parametric link model outperforms the benchmark logit model, both in terms of in-sample estimation and out-of-sample forecasting, for both well-specified and misspecified duration values.
§ INTRODUCTION
Many authors have suggested variants of the Markov-switching model since the seminal paper of <cit.>. In this context, <cit.> proposed the duration-dependent Markov-switching (DDMS) model, based on a higher-order Markov chain that allows state transition probabilities to be duration-dependent. Initially applied to investigate business cycles, e.g., whether the continuation of expansions or recessions depends on how long the economy has been in those regimes, this modeling approach has also been used for bull and bear stock market identification <cit.> and foreign exchange volatility estimation <cit.>, since duration can be included as a conditioning variable in the conditional mean/variance equations. Building on this, the empirical literature has also explored generalizations of the DDMS model, broadening the baseline duration dependence structure in different directions <cit.>.
One of the key components in DDMS models is the duration value to be used in the application, which is a user-specified parameter that cannot be estimated. The duration selection in the existing literature is either arbitrary, under the argument that the impact of the duration dependence vanishes after selecting a "reasonably" large value, or based on a grid search aimed at maximizing the likelihood. The difficulty in selecting and justifying the duration is the main criticism of DDMS models, often hindering their use in applications. In contrast to the existing literature, <cit.> showed, using a Bayesian approach, that duration structures are not necessarily monotonic and, therefore, cannot be described by the conventional models. The authors restrict their approach to a bull and bear market analysis in the same spirit as <cit.>, through a country-by-country stock data analysis; however, they do not extend their approach to other DDMS-type models and restrict the study to an in-sample design.
In this context, the main contribution of the present paper is to develop and examine a strategy for estimating DDMS models that attenuates the problem of duration selection and justification by allowing for a completely arbitrary choice. Our approach is based on the use of the asymmetric Aranda-Ordaz parametric link function <cit.> instead of the (fixed) logit link in the transition probabilities in DDMS models. The idea behind this approach is that any potentially incorrect duration choice is compensated for by the parameter in the link, increasing model flexibility by letting the data “tell” which is the “best” link, allowing the model to capture the latency of the Markov chain endogenously, improving the fit and forecasting accuracy under misspecification of the duration.
To illustrate our approach empirically, we extend the research that evaluates alternative volatility modeling and forecasting methods for S&P500 daily log returns by broadening the traditional DDMS specifications to include the Aranda-Ordaz link. More precisely, we conduct pairwise Diebold-Mariano-West statistical tests for one-day-ahead stock volatility forecasts from April 2018 to January 2020, comprising 443 out-of-sample observations. As robustness checks, we apply different volatility proxies and loss functions commonly found in the literature. Overall, our modeling approach outperforms the benchmark logit specification under reasonable conditions, mitigating the uncertainty in the duration choice. In addition, we compare the proposed specification to GARCH-type models and find no statistical support to reject the null hypotheses of equal predictive accuracy, whereas the same does not hold for the models with the logit link.

To evaluate the performance of the proposed approach, we conduct a Monte Carlo simulation study inspired by two classical applications of DDMS models: (i) bull and bear market identification via in-sample stock market cycle probabilities and (ii) a point volatility forecasting exercise conducted for different out-of-sample horizons. In general, the proposed Aranda-Ordaz approach improves estimates in several directions; a typical result is that the Aranda-Ordaz link attains higher likelihood values than the logit in a larger proportion of replications. Regarding the stock market probabilities, the Aranda-Ordaz model also more frequently yields probabilities closer to the true values under duration misspecification. From the forecasting perspective, applying the Aranda-Ordaz link is advantageous regardless of the true value of the duration, indicating that even when the correct duration is used, the logit-based model cannot provide the "best possible" forecasts.

The remainder of this paper is organized as follows: Section 2 presents the proposed model; Section 3 describes the optimization algorithm designed for maximum likelihood estimation; Section 4 presents Monte Carlo simulations; the data and our empirical findings are reported in Section 5; Section 6 concludes the paper.
§ THE PROPOSED MODEL
Based on <cit.>, we start by considering a simple stochastic volatility model given by
Y_t = μ (S_t) + σ(S_t,D(S_t))Z_t,
where S_t denotes the state mixing variable, D(S_t) is the duration of the state S_t, at time t, and Z_t∼ N(0,1) is an i.i.d. error term. The duration D(S_t) depicts the length of a run of realizations of a particular state and, in principle, could grow very large causing estimation problems and numerical instability. To avoid such problems, we define
D(S_t):=min{D(S_t-1)I(S_t=S_t-1)+1,τ},
where τ∈ is a user chosen threshold such that the duration is accounted for up to time τ, and I is the indicator function. The transition probabilities associated to the latent states S_t are parameterized using a similar approach as in generalized linear model by means of a link. The most commonly applied link is the logit, which yields
P(S_t=i|S_t-1=i, D(S_t-1)=d) = exp(γ_1(i)+γ_2(i)(d∧τ)) / (1+exp(γ_1(i)+γ_2(i)(d∧τ))), i=0,1,

where d∧τ = min{d,τ} and γ_j(i), j∈{1,2}, i∈{0,1}, are parameters to be estimated.
In this work we propose to parameterize the transition probabilities upon applying a twice differentiable one-to-one parametric link function g(·,λ):(0,1)→, in the same generalized linear model approach as before. That is, we consider
g(P(S_t=i | S_t-1=i, D(S_t-1)=d);λ) = γ_1(i)+γ_2(i) (d∧τ), i=0,1,
or, equivalently,
P(S_t=i | S_t-1=i, D(S_t-1)=d) = g^-1(γ_1(i)+γ_2(i) (d∧τ);λ) i=0,1,
where λ is a parameter to be estimated from the data. One of the most commonly applied parametric link function is the so-called asymmetric Aranda-Ordaz link function <cit.>, given by
g(y;λ)=log((1-y)^-λ-1/λ),
for y∈(0,1) and λ>0, whose inverse is given by
g^-1(x;λ) = 1-( 1+λ e^x) ^-1/λ,
for x∈. Observe that lim_λ→ 0+g^-1(x;λ)=1-e^-e^x and lim_λ→ 0+g(x;λ)=log(-log(1-x)) which is the so-called cloglog link function.
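For concreteness, an illustrative Python sketch of the link and its inverse (the study's original code, written in Matlab, is not reproduced here), including a numerical check of the cloglog limit:

import numpy as np

def aranda_ordaz(y, lam):
    """g(y; lambda) = log(((1 - y)**(-lambda) - 1) / lambda), 0 < y < 1, lambda > 0."""
    return np.log(((1.0 - y) ** (-lam) - 1.0) / lam)

def aranda_ordaz_inv(x, lam):
    """g^{-1}(x; lambda) = 1 - (1 + lambda * exp(x))**(-1 / lambda)."""
    return 1.0 - (1.0 + lam * np.exp(x)) ** (-1.0 / lam)

x = np.linspace(-2.0, 2.0, 5)
# As lambda -> 0+, g^{-1} approaches the cloglog inverse 1 - exp(-exp(x))
print(np.allclose(aranda_ordaz_inv(x, 1e-8), 1.0 - np.exp(-np.exp(x)), atol=1e-6))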
To exemplify the effects of the parameters d and λ on the transition probabilities (<ref>), viewed as functions of γ_1(0) and γ_2(0), Figure <ref> presents the case d=3 and λ∈{1,12}, while Figure <ref> presents λ=1 and d∈{3,12}. In both cases the transition probability parameters are γ_j(0)∈[-2,2], for j∈{1,2}.
§ OPTIMIZATION STRATEGY
Parameter estimation of DDMS models can be challenging and prone to numerical issues. The likelihood function may have multiple local maxima and flat or spiky regions, making numerical optimization difficult. Adding an extra parameter through the Aranda-Ordaz link may further complicate the estimation process, as the transition parameters and the link function are free to vary. This suggests that the likelihood function may have trouble jointly identifying the transition matrix and the link function parameters, implying it can be nearly flat in some directions. Additionally, the estimation process in DDMS models with a large duration parameter can pose computational difficulties: a large duration parameter often leads to a large sparse transition matrix in the extended representation of the DDMS model, which may approach singularity for several combinations of parameters. Singularity means that some states are given probabilities numerically close to zero, so the unconditional probabilities do not exist. As a result, the likelihood function is undefined at multiple points in the parameter space.
To address these challenges, we developed an optimization algorithm especially tailored to maximize the log-likelihood function of DDMS models. The proposed approach applies a combination of random and grid search techniques for initial parameter values and constrained numerical optimization within defined bounds to tackle multimodality. We also impose nonlinear restrictions to ensure the invertibility of the transition matrix. The details of the algorithm are outlined below.
* Find the starting values using a combination of random and grid search: Let κ be the number of parameters in the DDMS model. Define a vector b consisting of 100 evenly spaced values between 0.1 and 10 for the parameter λ. For the remaining parameters, create a (κ-1) × 100 matrix C with random numbers, where each row is drawn from a continuous uniform distribution. The bounds of the uniform distributions are heuristically defined and may vary depending on the particular model and dataset. The resulting draws can then be represented as [ C' b_i⊗ 1_100]', for all i = 1, …, 100, where ⊗ denotes the Hadamard (elementwise) product and 1_k∈^k denotes a vector of ones.
* Evaluate the likelihood function at each of the points defined by [ C' b_i⊗ 1_100]', for all i = 1, ⋯, 100. Sort the results in decreasing order and store the top s values. Then proceed with the first set of parameters.
* Let θ_0:=(θ_0,1,⋯,θ_0,κ)' be the current vector of starting values for optimization. We look for a local maxima in the domain [θ_0,1 - r, θ_0,1 + r]×⋯× [θ_0,κ - r, θ_0,κ + r], where r can either depend on the values of θ_0 or be fixed exogenously. For simplicity, we set r = r_1. However, it is important to note that some parameters may have restricted parameter spaces, such as λ, which must be positive. In such cases, the bounds must satisfy the restrictions on the parameter space.
* To guarantee the existence of unconditional probabilities, which are defined by π = (A'A)^-1 A'[ 0_N; 1 ], where A=[ I_N-𝒫; 1_N' ], with 𝒫 denoting the transition matrix for the extended states[See <cit.> for details.], and 0_N the null vector in ^N, ensure that the reciprocal condition number of the matrix (A'A) is above the machine precision. In practice, a small value such as 10^-9 is sufficient to virtually eliminate numerical issues caused by a near singular transition matrix.
* Use a derivative based numerical optimization method to find the maximum of a nonlinear, multivariate function with bounds, as specified in 3 and nonlinear constraints as defined in 4.
* The optimization in step 5 is considered successful if the first-order optimality measure is close to zero and the proposed solution does not approach the boundary defined in step 3. We evaluate the proximity of the proposed solution to the bounds by computing the absolute percentage difference. This percentage difference should be above a specified threshold. More specifically, let
ℓ^-_i:=|θ_1,i - (θ_0,i - r)|/|θ_1,i|, ℓ^+_i:=|θ_1,i-(θ_0,i + r)|/|θ_1,i|,
for i∈{1,⋯,κ}, where θ_1,i is the proposed value for the ith parameter. We say that the proximity criteria is met for a given threshold δ>0 if min{ℓ^-_1,⋯,ℓ^-_κ,ℓ^+_1,⋯,ℓ^+_κ}>δ. Note that some parameters have restricted spaces, such as λ > 0. In such cases, it is acceptable for the estimation to be close to the restrictions, and the percentage proximity criteria should not be calculated. We apply δ=0.01 in most cases.
* If the first-order optimality measure or the proximity criterion is not satisfied:
* If the first-order optimality measure is not close to zero, repeat the optimization process by returning to step 2 and choosing the next starting value. Continue this process until the first-order optimality criterion is met.
* If the first-order optimality measure is close to zero but the proximity criterion is not satisfied, return to step 3 and increase r_1 to r_2, where r_2>r_1. If this second round of optimization still fails to satisfy the criteria, return to step 3 and use a much larger value r_3>r_2, only for those parameters that do not meet the criterion.
* Report the optimization as unsuccessful if all s stored initial values fail to simultaneously meet both the first-order optimality and the proximity criteria.[In this paper we set r_1=1, r_2=2 and r_3=10, respectively.]
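For concreteness, the sketch below illustrates steps 1-2 (the starting-value search) and the proximity check of step 6 in Python. It is not the authors' Matlab code: `neg_loglik`, the parameter count `kappa`, and the uniform bounds for C are hypothetical placeholders, and the nonlinear invertibility constraint of step 4 is omitted for brevity.

```python
# Illustrative sketch of the starting-value search (steps 1-2) and the
# proximity criterion (step 6); neg_loglik is a stand-in for the negative
# DDMS log-likelihood, and the bounds on C are heuristic.
import numpy as np
from scipy.optimize import minimize

def search_starting_values(neg_loglik, kappa, n_top=10, seed=0):
    rng = np.random.default_rng(seed)
    b = np.linspace(0.1, 10.0, 100)                    # grid for lambda
    C = rng.uniform(-5.0, 5.0, size=(kappa - 1, 100))  # random draws, heuristic bounds
    cands = [np.r_[b_i, C[:, j]] for b_i in b for j in range(100)]
    cands.sort(key=neg_loglik)                         # best likelihood first
    return cands[:n_top]                               # keep the top s candidates

def proximity_ok(theta1, theta0, r, delta=0.01):
    lo, hi = theta0 - r, theta0 + r                    # search box of step 3
    ell = np.minimum(np.abs(theta1 - lo), np.abs(theta1 - hi)) / np.abs(theta1)
    return ell.min() > delta                           # solution not on the boundary

def estimate(neg_loglik, kappa, r=1.0):
    for theta0 in search_starting_values(neg_loglik, kappa):
        res = minimize(neg_loglik, theta0, method="L-BFGS-B",
                       bounds=list(zip(theta0 - r, theta0 + r)))
        if res.success and proximity_ok(res.x, theta0, r):
            return res
    raise RuntimeError("all stored starting values failed the criteria")
```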
We use the algorithm described above to obtain maximum likelihood estimates for both the Aranda-Ordaz and Logit DDMS models. The optimization set-up is identical for both models, except that in the logit case only the matrix of random numbers C is used in the search for starting points. We employ the same matrix C for both models, ensuring that the starting-point search is comparable across models.
§ MONTE CARLO SIMULATION
In this section, we perform a Monte Carlo simulation study to compare the proposed Aranda-Ordaz approach to the traditional logit case, following the contributions of <cit.>. Our analysis comprises two model applications based on empirically relevant sets of parameters. In the first, we analyze the in-sample state probabilities generated under duration uncertainty for both link functions, following a bull and bear market analysis as in <cit.>. In the second, the structure is similar, but we explore the out-of-sample context, considering a conditional variance specification following <cit.>. All codes were written in Matlab by the authors and are available upon request.
§.§ In-sample capabilities: the bull and bear market model
In the first Monte Carlo investigation, we compare the in-sample accuracy of the transition probabilities implied by the proposed Aranda-Ordaz approach and by the fixed logit link. We consider the following model,
Y_t=μ_0(1-S_t)+μ_1S_t+((1-S_t)σ_0+S_tσ_1)Z_t, Z_t∼ N(0,1).
In choosing the simulation scenario, we consider parameters reflecting bull and bear market stylized facts, similar to the real data application presented in Section <ref>. The bull (bear) market is characterized by positive (negative) mean returns and lower (higher) variance. For the transition probability parameters, we set γ_2(i)>0 for i=0,1, reflecting that the probability of staying in the bull (bear) market increases with duration. This dependence structure can be interpreted as a momentum effect, as discussed in <cit.>, among others.
We consider model (<ref>) with true duration d_0=8 and parameters μ_0=-0.5, μ_1=1.5, σ_0=6 and σ_1=2. The transition probabilities are given by (<ref>) with parameters γ_1(0)=-1.8, γ_2(0)=0.7, γ_1(1)=-0.8 and γ_2(1)=0.6.[The parameter value scale refers to log returns multiplied by 100.] We estimate the model with duration d∈{4,6,8,10,12} using both the proposed Aranda-Ordaz and the fixed logit link. We generate time series of length 1,000 and discard the first 200 observations as burn-in, yielding a final sample size of n=800. The experiment is replicated 1,000 times. For each time series we fit model (<ref>) using the logit and Aranda-Ordaz links and obtain the predictive, filtered and smoothed probabilities, denoted respectively by P(S_t=i|φ_t-1), P(S_t = i|φ _t) and P(S_t = i|φ_T), as in <cit.>.
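To make the design concrete, the following minimal sketch simulates this DGP. It assumes the usual DDMS convention that the staying probability is a logistic function of the state duration capped at the memory parameter (here τ_0=8); the function and argument names are ours, not the authors' code.

```python
# Minimal simulation of the bull/bear DDMS DGP, assuming a logistic
# transition function of the (capped) duration. State 0 = bear market
# (negative mean, high variance), state 1 = bull market.
import numpy as np

def simulate_ddms(n=800, burn=200, tau0=8, mu=(-0.5, 1.5), sigma=(6.0, 2.0),
                  gamma1=(-1.8, -0.8), gamma2=(0.7, 0.6), seed=1):
    rng = np.random.default_rng(seed)
    y = np.empty(n + burn)
    s, d = 0, 1                       # current state and its duration
    for t in range(n + burn):
        # staying probability increases with duration: the momentum effect
        p_stay = 1.0 / (1.0 + np.exp(-(gamma1[s] + gamma2[s] * min(d, tau0))))
        if rng.uniform() < p_stay:
            d += 1
        else:
            s, d = 1 - s, 1
        y[t] = mu[s] + sigma[s] * rng.standard_normal()
    return y[burn:]                   # discard burn-in, final size n

returns = simulate_ddms()             # one replication of length 800
```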
Tables <ref> and <ref> show the simulation results. Table <ref> presents the proportion of times the MAPE of the Aranda-Ordaz probabilities is smaller than that of the logit, as well as the average MAPE difference for each type of probability. Table <ref> shows the proportion of times the likelihood obtained using the proposed Aranda-Ordaz approach is greater than the likelihood obtained with the logit. From Table <ref>, we observe that when the model is misspecified (i.e. τ≠τ_0), the Aranda-Ordaz approach yields more precise probabilities more often than the logit, and the more distant τ is from τ_0, the better the Aranda-Ordaz performs relative to the logit. Under the correct specification, however, the logit link outperforms the Aranda-Ordaz by a narrow margin, presenting the smallest of all MAPE differences and a slightly higher proportion of smaller MAPEs. The results in Table <ref> show, on the other hand, that the Aranda-Ordaz produces a higher likelihood in the vast majority of cases, even in the correctly specified scenario.
As an illustration, Figure <ref> presents the bear market filtered probabilities. For the 200th path out of 1,000 replications, and τ=4 (misspecified case), both links depict similar market phases, with different probabilities in some periods, as seen in Figure <ref>. We also compare these estimates to the DGP in Figure <ref>. Graphically, both models track the true probabilities, with the Aranda-Ordaz specification accumulating an advantage period by period. The blue line (logit link) departs more visibly from the red line (DGP), revealing some differences along this single path. Results over the full battery of time series confirm the gain of the Aranda-Ordaz under duration misspecification.
§.§ Out-of-Sample capabilities: Volatility Forecasting
In this section, we compare the proposed Aranda-Ordaz approach to the traditional logit approach in terms of out-of-sample capabilities. To provide grounds for comparison, we conduct a series of Monte Carlo simulations considering
Y_t=σ(S_t,D(S_t))Z_t, σ(S_t,D(S_t))=(ω(S_t)+ζ(S_t)D(S_t))^2,
where the latent state affects the level of volatility, ω(S_t)=ω_0(1-S_t)+ω_1S_t, while the duration of the states, D(S_t), affects the dynamics of volatility through ζ(S_t)D(S_t), where ζ(S_t)=ζ_0(1-S_t)+ζ_1S_t. Z_t is assumed to be independently and identically distributed standard normal, and the transition probabilities for the states S_t are given by (<ref>).
We generate time series of size n+10, say y_1,⋯,y_n+10, for values of τ_0∈{15,25,35}. The last 10 values are reserved to construct the conditional variances necessary for out-of-sample forecast evaluation. For each scenario, we estimate the model using y_1,⋯,y_n with the logit and the Aranda-Ordaz link functions for τ∈{τ_0-10,τ_0-5, τ_0, τ_0+5,τ_0+10}. Next, for each estimated model we obtain h-step-ahead forecasts for horizons h∈{1,⋯,10}. Let σ̂^2_n+1, ⋯, σ̂^2_n+10 and σ̃^2_n+1, ⋯, σ̃^2_n+10 denote the forecasts of σ^2_n+1,⋯,σ^2_n+10 using the Aranda-Ordaz and the logit, respectively. For each horizon h∈{1,⋯,10} and each method, we calculate the forecasting mean absolute percentage error (MAPE), given by
MAPE_AO(h)=1/h∑_k=1^h |(σ̂^2_n+k-σ^2_n+k)/σ^2_n+k|, MAPE_logit(h)=1/h∑_k=1^h |(σ̃^2_n+k-σ^2_n+k)/σ^2_n+k|.
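The MAPE above reduces to a few lines of code; the sketch below is ours, and assumes `sigma2` holds the true variances σ^2_{n+1},…,σ^2_{n+10} with `fcst` the corresponding forecasts from either link.

```python
# Forecast MAPE over horizons 1..h, as in the display above.
import numpy as np

def forecast_mape(sigma2, fcst, h):
    sigma2, fcst = np.asarray(sigma2), np.asarray(fcst)
    return np.mean(np.abs((fcst[:h] - sigma2[:h]) / sigma2[:h]))
```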
We replicate the experiment 1,000 times. To simplify the exposition, in Table <ref> we present the difference between the average MAPE of the proposed Aranda-Ordaz approach and the logit, for each forecast horizon, that is, D(h):=MAPE_logit(h)- MAPE_AO(h), for h∈{1,⋯,10}.
Table <ref> contains only positive values, indicating that, on average, forecasting using the Aranda-Ordaz is advantageous regardless of the true value of τ_0 and of the duration τ used in the estimation procedure. Interestingly, the smallest values of the difference D(h) are usually not obtained when τ=τ_0, which may be a consequence of model complexity: even when the correct duration is used, the model based on the logit is not capable of providing the “best possible” forecasts. Regarding the magnitude of τ_0, the higher the duration, the bigger the difference D(h) for all forecasting horizons. The smallest differences D(h) are obtained for τ_0=15, τ=5, ranging between about 10% and 23%, while the overall highest are obtained for τ_0=35 and τ=30, ranging from over 100% to slightly above 130%.
The results in Table <ref> show that, on average, applying the Aranda-Ordaz link is advantageous over the logit even when the duration and the link function are correctly specified. One question that remains is how often (if at all) the likelihood of the proposed Aranda-Ordaz approach exceeds that of the logit. To shed some light on this issue, under the same DGP as in Section <ref>, we compare the likelihoods obtained using the proposed Aranda-Ordaz approach and the logit.
Table <ref> presents the frequency at which the log-likelihood obtained using the Aranda-Ordaz link is superior to the one obtained with the logit. The first column presents the true value τ_0 applied in the data generating process, while the value of τ used in the estimation procedure is displayed in parentheses. In all scenarios considered in the simulation, the Aranda-Ordaz link yields a superior likelihood on average, including the case where the link and duration are correctly specified. This result is expected since the Aranda-Ordaz link is more flexible than the logit. The frequency at which the likelihood is greater for the Aranda-Ordaz ranges from 62% to 91%; interestingly, the higher the true value τ_0, the greater this frequency.
§ EMPIRICAL EXERCISE
§.§ S&P500
In this section, we present a real data application of the proposed methodology considering the log returns of the daily closing S&P500 index from January 2nd, 2015 to January 2nd, 2020, yielding a sample size of n=1,259, as seen in Figure <ref>. Descriptive statistics are presented in Table <ref>. We observe a mean very close to zero and a small standard deviation, with an annualized value of 0.1344 (0.0085 ×√(250)). The Jarque-Bera (JB) normality test rejects the null hypothesis of an underlying normal distribution, which is further corroborated by the kurtosis, considerably higher than that of the normal distribution. The Lagrange Multiplier and Ljung-Box tests are highly significant, suggesting ARCH effects in the log returns.
Our goal is to conduct an out-of-sample forecasting exercise considering one-step-ahead (1-day) volatility forecasts through an expanding window using 443 daily observations, starting on April 1st, 2018. During this period, the S&P500 presented evidence of intra-state fluctuations, such as bear rallies (positive sub-trends) and bull corrections (negative sub-trends), within a primary bull market state, as described in the empirical literature <cit.>. We use this period of uncertainty and volatility to run a point forecasting exercise comparing the performance of a single Aranda-Ordaz DDMS to benchmark models.
§.§ Realised measures
A well-known problem in financial econometrics is that volatility is a latent variable and therefore cannot be directly observed, making the evaluation of the predictive power of different approaches problematic. In this section, we consider 5-minute intraday quotes as an alternative source to proxy volatility. More specifically, besides the traditional realised variance <cit.>, we also consider realised measures robust to jumps and microstructure noise, namely the bipower variation <cit.>, MinRV and MedRV <cit.>.
In general, these measures are regarded as better proxies of the true volatility than the squared daily return <cit.>. They are defined as:
RV_t :=∑_i = 1^m r_i,t^2,
BV_t :=π/2[ m/m - 1]∑_i = 1^m - 1 |r_i,tr_i + 1,t|,
MinRV_t :=π/π - 2[ m/m - 1]∑_i = 1^m - 1min{|r_i,t|,|r_i + 1,t|}^2,
MedRV_t :=π/6 - 4√(3) + π[ m/m - 2]∑_i = 2^m - 1med{|r_i - 1,t|,|r_i,t|,|r_i + 1,t|}^2,
where r_i,t is the ith high-frequency return of day t, with m=78 intraday returns per day. The 5-minute intraday quotes range from 9:30 AM to 4:00 PM, and the time series were obtained from the First Rate Data website https://firstratedata.com.
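The four realised measures translate directly into code. The sketch below is ours (not part of the authors' code base) and assumes a vector of one day's m=78 five-minute returns.

```python
# Realised measures for a single day, following the displays above.
import numpy as np

def realised_measures(r):
    """r: array of m intraday returns for one day."""
    r = np.asarray(r)
    m = r.size
    rv = np.sum(r**2)                                            # RV
    bv = (np.pi / 2) * (m / (m - 1)) * np.sum(np.abs(r[:-1] * r[1:]))  # BV
    minrv = (np.pi / (np.pi - 2)) * (m / (m - 1)) * np.sum(
        np.minimum(np.abs(r[:-1]), np.abs(r[1:])) ** 2)          # MinRV
    med = np.median(np.column_stack([np.abs(r[:-2]), np.abs(r[1:-1]),
                                     np.abs(r[2:])]), axis=1)
    medrv = (np.pi / (6 - 4 * np.sqrt(3) + np.pi)) * (m / (m - 2)) * np.sum(med**2)
    return {"RV": rv, "BV": bv, "MinRV": minrv, "MedRV": medrv}
```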
§.§ Robust loss function
Typically, volatility proxies are used to evaluate volatility forecasts. However, these proxies are estimates of the integrated variance and, as such, are imperfect. <cit.> defines a sense of robustness for loss functions in ranking volatility forecasts and, based on this concept, derives a general class of loss functions that are robust in that sense. Letting σ̅^2 denote the volatility proxy and σ̂^2 the volatility forecast, we consider three loss functions from <cit.>'s class to evaluate volatility forecasts, namely the MSE, the QLIKE, and a measure denoted hereafter by RLF, given by
MSE(σ̅^2,σ̂^2) :=(σ̅^2-σ̂^2)^2/2, QLIKE(σ̅^2,σ̂^2):=σ̅^2/σ̂^2-log(σ̅^2/σ̂^2)-1,
RLF(σ̅^2,σ̂^2) :=σ̂^2 + σ̅^2[log( σ̅^2 /σ̂^2)-1],
so that each loss is nonnegative and minimized when σ̂^2=σ̅^2.
Observe that <cit.>'s MSE and QLIKE losses differ from the usually applied loss functions of the same name.
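For reference, the three losses above vectorize in a few lines; in this sketch (ours), `proxy` stands for the realised-measure proxy σ̅^2 and `fcst` for the forecast σ̂^2.

```python
# The three robust losses in the displays above; each is minimized
# when the forecast equals the proxy.
import numpy as np

def mse_loss(proxy, fcst):
    return (proxy - fcst) ** 2 / 2

def qlike_loss(proxy, fcst):
    ratio = proxy / fcst
    return ratio - np.log(ratio) - 1

def rlf_loss(proxy, fcst):
    return fcst + proxy * (np.log(proxy / fcst) - 1)
```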
§.§ Results & Estimation
We consider the volatility model given by (<ref>), which was already considered in the Monte Carlo simulation study. To ensure a fair comparison between the logit and Aranda-Ordaz links, we use the same set of starting points when maximizing each model's likelihood function. Forecasts are compared using pairwise Diebold-Mariano-West tests.
§.§.§ Parameter Estimates
Before comparing the predictive performance of the Aranda-Ordaz model with the benchmark specifications, we analyze the λ estimates at each step of the expanding window exercise. Figure 4(a) shows the estimated values of λ over the sequence of estimation samples. Figure 4(b) presents the estimates sorted in increasing order, with the lowest value equal to 0.0103 and the highest equal to 6.5532. This wide range illustrates the complexity of the model specification on real data. Similar patterns in the λ estimates were also observed in the Monte Carlo simulation, revealing the intricate dynamics of the latent Markov chain even in controlled data.[These additional results are also available upon request.]
§.§.§ Single Models
Table <ref> presents the t-statistics from Diebold-Mariano-West tests of equal predictive accuracy between the benchmark logit model, for τ∈{5,10,15,20,25}, and the Aranda-Ordaz approach with τ=15, considering three loss functions (MSE, RLF and QLIKE) and four realised measures (MedRV, MinRV, BV and RV). A positive t-statistic indicates that the benchmark model forecast produced a larger average loss than the Aranda-Ordaz link. A t-statistic greater than 1.65 or 1.96 in absolute value indicates rejection of the null of equal predictive accuracy at the 0.10 and 0.05 levels, respectively; these statistics are marked with one and two asterisks.
For the MSE and RLF loss functions, the t-statistic is positive for all pairwise model evaluations. The null hypothesis of equal predictive accuracy is rejected in most cases; the only exceptions occur for τ=15 and, partially, for τ=20, where the null is rejected only under the RLF loss. The t-statistic for the QLIKE is negative in all cases, meaning the Aranda-Ordaz link produces a larger average loss than the logit; however, there is no statistical evidence of a difference in predictive ability between the links. In general, the flexibility of the Aranda-Ordaz link attenuates the impact of the duration choice observed across the different logit setups.
§.§.§ Combination Models
In this section, we consider a model combination approach to aggregate N individual model forecasts, each indexed by a fixed duration, into a pooled scheme that incorporates duration choice uncertainty. Let σ̂_t+1^2 be the weighted average of the N individual volatility forecasts {σ̂^2_i,t+1}_i = 1^N, that is
σ̂_t + 1^2 = ∑_i = 1^N w_i,tσ̂_i,t + 1^2,
where {w_i,t}_i=1^N are the ex-ante combining weights at time t and N indexes a set of DDMS models restricted by lower and upper bounds on τ. We consider the naïve method of uniform weights w_i,t:=1/N and the discount mean square prediction error (DMSPE) combining method <cit.>, whose weights are given by
w_i,t = [φ _i,t∑_j = 1^N φ _j,t^-1]^-1,
φ_i,t=∑_s=m+1^tθ^t-s(σ̂_s^2-σ̂_i,s^2 )^2,
where θ is the discount factor and m is the number of observations in the subsample used to estimate the models[We apply m=30 days.]. If θ=1, there is no discounting, which corresponds to the optimal combination of uncorrelated forecasts. When θ<1, greater weight is attached to the recent forecast accuracy of the individual models <cit.>. Although much research has been done on model combination techniques, we focus on these simple methods since our goal is to marginalize over the gains of models with different durations.
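The DMSPE weights above are straightforward to compute; in this sketch (ours), `proxy` plays the role of the target variance series against which past individual forecasts are scored in the φ recursion.

```python
# DMSPE combining weights, following the displays above.
import numpy as np

def dmspe_weights(proxy, indiv_fcsts, t, m=30, theta=0.9):
    """
    proxy:       array of length >= t+1 with the target variance series
    indiv_fcsts: (N, len(proxy)) array of individual model forecasts
    Returns the N ex-ante weights w_{i,t}.
    """
    s = np.arange(m + 1, t + 1)                  # s = m+1, ..., t
    discount = theta ** (t - s)
    err2 = (proxy[s] - indiv_fcsts[:, s]) ** 2   # squared forecast errors
    phi = err2 @ discount                        # phi_{i,t}
    inv = 1.0 / phi
    return inv / inv.sum()                       # w_{i,t} = phi_i^{-1} / sum_j phi_j^{-1}

# combined forecast for t+1:
# sigma2_hat = dmspe_weights(proxy, F, t) @ F[:, t + 1]
```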
Table <ref> displays the results obtained by evaluating the logit model combination approach against the Aranda-Ordaz link. Similar to the findings in Table <ref>, a positive t-statistic is observed for all pairwise model evaluations under the MSE and RLF loss functions. For both the moving and fixed weight approaches, the null hypothesis of equal predictive accuracy is rejected, implying that the flexible link outperforms the pooling approach. Under the QLIKE, the Aranda-Ordaz link yields a greater average loss than the logit combination, as indicated by the negative t-statistic; however, no statistical evidence was found to support a difference in predictive performance.
§.§.§ Garch-type models
Our analysis also includes traditional Garch-type models in a pairwise evaluation. Based on <cit.>, we use a plain vanilla specification, given by
Y_t =σ_k,tZ_t,
σ_k,t^2 = ω_k + α_kY_t-1^2+β_kσ_k,t-1^2
where k is the number of regimes. We consider k=1, the traditional model, and k=2, the two-regime model. In both models, the Z_t's are i.i.d. N(0,1). Despite the many variations of this modeling approach, our objective is to compare the performance of the links relative to a simple and useful benchmark.
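For the k=1 case, the variance recursion is a one-loop filter; the sketch below is a minimal illustration with hypothetical parameter values, not estimates from the empirical exercise.

```python
# Minimal GARCH(1,1) variance recursion (the k = 1 case above).
import numpy as np

def garch_variance(y, omega=0.05, alpha=0.08, beta=0.90):
    sigma2 = np.empty(y.size)
    # start at the unconditional variance (requires alpha + beta < 1)
    sigma2[0] = omega / (1 - alpha - beta)
    for t in range(1, y.size):
        sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```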
The results in Table <ref> show positive t-statistics, implying that the DDMS models yield larger average losses than the Garch-type models. However, the null hypothesis of equal predictive accuracy between the Aranda-Ordaz and Garch-type models cannot be rejected based on the MSE and QLIKE losses, indicating that these models are comparable. Importantly, this does not hold for the logit cases: for the single model, the null hypothesis is rejected under QLIKE, with higher values for the MSE t-statistics, and for the logit model combination this result is even more pronounced, as the null is rejected under all three losses. This highlights the advantages of the Aranda-Ordaz link in the context of volatility forecasting.
§ CONCLUSION
This paper proposes a methodology to address the issue of duration choice in DDMS models. The methodology replaces the typical fixed link function used to compute transition probabilities with a parametric link function, yielding more accurate likelihood values and transition probabilities. The proposed approach significantly improves forecasting accuracy, especially in cases of duration misspecification.
Two Monte Carlo simulations, based on classical applications of DDMS models, are employed to evaluate the methodology. The results demonstrate that using the Aranda-Ordaz link function leads to more precise forecasts and transition probabilities, not only in cases of duration misspecification but also when the model is correctly specified. Furthermore, an empirical study forecasting the volatility of the S&P500 illustrates the effectiveness of the proposed methodology. The results indicate that the Aranda-Ordaz DDMS outperforms fixed logit transitions in terms of forecast precision for most duration parameters. Moreover, the Aranda-Ordaz link function improves the forecasting performance to the point of being statistically equivalent to MS-Garch models, an improvement not observed with fixed link functions.
| http://arxiv.org/abs/2307.02458v1 | 20230705173230 | The East-West Asymmetry of Particle Intensity in Energetic Storm Particle Events | ["Zheyi Ding", "Gang Li", "Adolfo Santa Fe Dueñas", "Robert W. Ebert", "Nicolas Wijsen", "Stefaan Poedts"] | astro-ph.SR | ["astro-ph.SR", "physics.plasm-ph", "physics.space-ph"] |
Zheyi Ding1,
Gang Li2,
Adolfo Santa Fe Dueñas3,4,
Robert W. Ebert3,4,
Nicolas Wijsen5,6,
Stefaan Poedts 1,7
1Centre for mathematical Plasma Astrophysics, KU Leuven, 3001 Leuven, Belgium
2Department of Space Science and CSPAR, University of Alabama in Huntsville, Huntsville, AL 35899, USA
3Southwest Research Institute, San Antonio, TX, USA
4Department of Physics and Astronomy, University of Texas at San Antonio, San Antonio, TX, USA
5NASA, Goddard Space Flight Center, Heliophysics Science Division, Greenbelt, MD 20771, USA
6Department of Astronomy, University of Maryland, College Park, MD 20742, USA
7Institute of Physics, University of Maria Curie-Skłodowska, Pl. M. Curie-Skłodowska 5, 20-031 Lublin, Poland
Corresponding author: Gang Li (gangli.uahuntsville@gmail.com)
* An East-West asymmetry of particle intensity is often found in ESP events. We propose that continuous shock acceleration can lead to this asymmetry.
* Continuous acceleration depends on the shock geometry; through this dependence, the injection efficiency plays a central role in the asymmetry.
* We simulate this asymmetry using the iPATH model and compare our simulation results with observations.
We examine the East-West asymmetry of the peak intensity in energetic storm particle (ESP) events using the improved Particle Acceleration and Transport in the Heliosphere (iPATH) model. We find that the injection efficiency peaks east of the nose of the coronal mass ejection shock, where the shock exhibits a quasi-parallel geometry. We show that the peak intensity at the eastern flank is generally larger than that at the western flank and positively correlates with the injection efficiency. We also examine this asymmetry for heavy ions, which depends sensitively on the ion energy. Comparison of the modelling results with measurements of ESP events at 1 au shows reasonable agreement. We suggest that the injection efficiency can be a primary factor leading to the East-West asymmetry of the peak intensity in ESP events. Additionally, the charge-to-mass (Q/A) dependence of the maximum particle energy affects this asymmetry for heavy ions.
§ PLAIN LANGUAGE SUMMARY
Energetic storm particle (ESP) events occur when coronal mass ejection-driven shocks pass a spacecraft, leading to abrupt increases in particle intensity and posing severe radiation hazards to astronauts and spacecraft. These enhancements are usually interpreted as the result of a local particle acceleration process. Therefore, in-situ measurements of ESP events provide a great opportunity to investigate the shock acceleration mechanism. Recent observations from multiple spacecraft show an East-West asymmetry in the peak intensity of ESP events, with significantly different intensities observed on the eastern and the western shock flank. In this study, we use the 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model to investigate the East-West asymmetry of particle intensity in ESP events. We find that the injection efficiency, which depends on the shock geometry, is the key parameter responsible for this East-West asymmetry.
§ INTRODUCTION
Solar energetic particles (SEPs) accelerated by coronal mass ejection (CME) shocks give rise to gradual SEP events. During gradual events, the enhancements of particle intensity associated with the passage of the interplanetary shock near a spacecraft are referred to as energetic storm particle (ESP) events <cit.>. The properties of ESP events observed at 1 au, and their correlations with shock properties and upstream conditions, have been widely studied. For instance, Desai2003 found significant correlations between the interplanetary shock abundances and the ambient superthermal ions. Reames2012ApJ...757...93R suggested that the shock speed correlates best with the particle intensities in ESP events. Additionally, ebert2016ApJ...831..153E examined seven multiple-spacecraft ESP events and found that the peak intensities near the shock nose are larger than at the flanks of the shock. More recently, Duenas2022 showed that heavy ion peak intensities and spectra at 1 au are organized by longitude relative to their source flare location, exhibiting an East-West asymmetric distribution of the peak intensity. This asymmetry refers to the difference in particle intensities between the eastern and the western shock flanks. Unlike the East-West asymmetry of particle intensity in large SEP events, which is affected by extended shock acceleration and transport effects (e.g., Lario2006, Strauss2017, Xie2019, ding2022modelling), ESP events are typically interpreted as a direct consequence of a local shock acceleration process. Thus, the East-West asymmetry of the particle intensity in ESP events may provide important insights into the underlying acceleration mechanism.
In ESP events, the diffusive shock acceleration (DSA; Axford1977ICRC...11..132A) mechanism is regarded as a primary mechanism for accelerating protons and ions at the shock. In the DSA theory, particles are accelerated by moving in the turbulent magnetic fields near the shock and traversing the shock many times to gain energy. A controversial issue in DSA is how particles are injected at the shock from thermal or superthermal plasma, known as the “injection problem" (see e.g., Desai+2016). For particles to participate in the DSA process, their speeds must exceed an injection threshold so that they can scatter diffusively across the shock; this threshold is referred to as the injection speed V_ inj. A classical choice of V_ inj in DSA is the de Hoffmann–Teller speed <cit.>:
V_ inj = U_ up/cosθ_ BN,
where U_ up is the upstream flow speed in the shock frame and θ_ BN is the angle between the upstream magnetic field and the shock normal direction. The physical meaning of V_ inj is clear: an ion moving along the upstream magnetic field with a speed v>V_ inj can stay in front of the shock, thus participating in the shock acceleration process. However, considering the gyrophase degree of freedom, the injection speed at a quasi-parallel shock should be larger than that derived from Equation (<ref>).
An alternative approach by giacalone+1999 (also used in Zank+2006) is to require the particle anisotropy to be small when using the Parker transport equation (see more discussion in Section <ref>). Both forms explicitly show the injection speed as a function of shock obliquity. With the knowledge of injection energy, the injection efficiency is defined as the ratio of integral number density above the injection energy to upstream flow density. Therefore, it also depends on the shock obliquity. If the injection energy increases with increasing shock obliquity angle, then the injection efficiency decreases. This feature has been addressed in ellison1995acceleration,Li+2012,Battarbee2013A A...558A.110B.
The injection efficiency of the seed population is not only important for determining particle intensity but also plays a crucial role in particle acceleration. The intensity of the self-generated waves, which governs the maximum particle energy attainable at a shock, is proportional to the number of injected particles <cit.>. Therefore, the injection efficiency not only determines the injected particle number density but also affects the maximum particle energy. Li+2012 examined the shock obliquity dependence of the injection efficiency and its correlation with the maximum particle energy, suggesting that a quasi-parallel shock is more efficient in accelerating particles. At 1 au, CME-driven shocks typically have a quasi-parallel geometry at the eastern flank and a quasi-perpendicular geometry at the western flank under nominal Parker magnetic field conditions. Consequently, particle intensity and maximum particle energy at the western and eastern flanks can differ. Due to the limited number of spacecraft at different longitudes at 1 au, it is often impossible to observe an individual CME-driven shock and its ESP phase with multiple spacecraft (>3) that are well separated in longitude. In contrast, numerical models provide an effective approach to simulate the longitudinal variation of particle intensity in ESP events at a CME-driven shock.
In this study, we investigate the East-West asymmetry of the peak intensity in ESP events using the two-dimensional (2D) improved Particle Acceleration and Transport in the Heliosphere (iPATH) model <cit.>. The original one-dimensional (1D) PATH model <cit.> considered the propagation of the CME-driven shock through a uniform solar wind only in the radial direction and adopted the steady-state DSA solution at different times. The iPATH model extends the PATH model by including the evolution of the shock obliquity in the ecliptic plane, and is thus capable of simulating time intensity profiles and spectra at multiple spacecraft simultaneously.
Later Ding2020 and Li2021 have examined two ground level enhancement (GLE) events during solar cycle 24 using the iPATH model and showed reasonable agreements with observations at multiple spacecraft. Recently, ding2022modelling utilized the iPATH model to examine the East-West asymmetry of particle fluence in large SEP events. They suggested that this asymmetry is a result of the effects of the extended shock acceleration and the geometry of the magnetic field. These works have demonstrated the capability of iPATH in including the necessary physical processes of particle acceleration at the shock and the propagation of energetic particles.
In the following, the injection of the seed population and the DSA solution in the iPATH model are described in Section <ref>. The results of the model and observation are shown in Section <ref>. A conclusion is given in Section <ref>.
§ MODEL
One important parameter in understanding SEP events is the cut-off energy for particle acceleration at the shock. In the steady-state solution of the DSA mechanism <cit.>, the shock parameters are assumed not to change significantly during the shock dynamic time scale t_ dyn, which, following Li2017ScChD, equals
t_ dyn= min {R/dR/dt,B/dB/dt,N/dN/dt},
where R, B and N are the radial distance from the sun, the magnetic field magnitude and the particle number density at the shock downstream, respectively.
The maximum particle energy is obtained by balancing the acceleration time with t_ dyn,
t_ dyn = ∫_p_ inj^p_ max3s/s-1κ/U_ up^21/pdp,
where p is the particle momentum, p_ inj and p_ max are the injection momentum and the maximum particle momentum, s is the shock compression ratio, κ is the particle diffusion coefficient, and U_ up is the upstream solar wind speed in the shock frame. The DSA theory itself does not specify how particles are injected at the shock. giacalone+1999,Zank+2006 calculated the injection speed by requiring the total anisotropy ξ to be smaller than 1:
ξ=3|𝐅|/vf = 3 U_ up/v[(β/3-1)^2 + κ_ Bohm^2 sin^2 θ_ BN + (κ_∥-κ_⊥)^2 sin^2 θ_ BNcos^2 θ_ BN/(κ_∥cos^2 θ_ BN + κ_⊥sin^2 θ_ BN)^2]^1/2,
where 𝐅= -κ·∇ f - U_ upp/3∂ f/∂ p is the streaming flux in the shock frame and f is the particle distribution function; the second term is due to the Compton-Getting effect. v is the particle speed, κ_∥ and κ_⊥ are the parallel and perpendicular diffusion coefficients, and κ_ Bohm = vr_L/3 is the Bohm diffusion coefficient (r_L is the gyroradius). See Zank+2006 for a complete derivation of the particle anisotropy. Comparing with the equation in giacalone+1999, we note that the dependence on β in Equation (<ref>) is a result of the Compton-Getting effect in the shock frame. However, this approach requires the diffusion coefficients κ, while in the iPATH model the injection speed is itself needed to calculate κ once the amplified wave intensity at the shock front is considered. To avoid this circularity, assuming ξ=1 and κ_⊥,κ_ Bohm≪κ_∥, the injection speed is approximated by
V_ inj = U_ up[(β-3)+ 3tanθ_ BN].
In the above approximation, we use a+b to approximate √(a^2+b^2), where a=β/3-1 and b=tanθ_ BN. This approximation works well for both cases of a≫ b and a ≪ b, and overestimates √(a^2+b^2) by a factor of 1.4 when a=b. This simplification does not affect the correlation between injection efficiency and shock parameters, nor does it change the primary conclusion of our study. It is worth noting that the assumption that κ_⊥ and κ_ Bohm are much smaller than κ_∥ only holds for small magnetic fluctuations. The simplest model for κ_∥ is Bohm diffusion, which assumes that the parallel mean free path cannot be smaller than the Larmor radius of the charged particles. Generally speaking, for small turbulent fluctuations, κ_∥ is much larger than κ_ Bohm. Regarding κ_⊥, the perpendicular transport of a particle is primarily governed by the field line random walk around the mean magnetic field B. If the magnetic fluctuations are small, it is often assumed that κ_⊥ is much smaller than κ_∥. However, if magnetic fluctuations are strong (δ B ∼ B), the isotropic spatial diffusion leads to κ_⊥ being approximately equal to κ_∥, and κ_∥ approaches κ_ Bohm, so Equation (<ref>) is not valid anymore.
For a strong shock (β=4), Equation (<ref>) reduces to Equation (<ref>) when θ_ BN =0^∘, but is about three times larger than Equation (<ref>) when θ_ BN→ 90^∘. This can be circumvented by replacing 3tanθ_ BN with tanθ_ BN in Equation (<ref>). This replacement does not affect the injection at a parallel shock, but for oblique shocks it yields a smaller value than using Equation (<ref>).
This can be improved by using the following ansatz:
V_ inj = U_ up[(β-3 η )+ tanθ_ BN],
where η is a parameter to regulate the injection speed.
From the discussion below Equation (<ref>), we see that a proper choice of η should be smaller than 1. Taking η=1 and β=4, Equation (<ref>) yields a behavior similar to Equation (<ref>) as θ_ BN→ 0^∘ and θ_ BN→ 90^∘. By considering different values of η, we can explore various thresholds of the injection speed at a parallel shock. We note that the choice of injection speed/energy affects the resulting maximum particle energy, since the injection energy determines the injection efficiency and therefore the amplified wave intensity. A comparison of the injection energy and maximum energy for the aforementioned injection forms can be found in <ref>.
Choosing η=1/3, we recover the injection speed used in Li+2012,Hu+etal+2017.
We assume that this equation is valid for 0^∘<θ_ BN<85^∘. At nearly perpendicular shocks (θ_ BN > 85^∘), cross-field diffusion via field line random walk plays a major role in particle injection.
Some observational and numerical studies suggest a lower threshold of injection speed at quasi-perpendicular shocks due to field line meandering allowing particles to cross the shock front multiple times (e.g., Giacalone2005ApJIrregular, NeergaardParker2014ApJ...782...52N).
Giacalone2005ApJthermal have shown that a thermal population can be accelerated by perpendicular shocks in the presence of sufficient pre-existing large-scale turbulence. To address particle injection at nearly perpendicular shocks, we must consider the perpendicular diffusion coefficient in the injection form. Thus Equation (<ref>) is not applicable at perpendicular shocks, and we do not consider the cases of θ_ BN > 85^∘ in this work. We note that in reality, due to the stochastic nature of the IMF <cit.>, having a portion of a shock with θ_ BN > 85^∘ for an extended period of time can be very rare.
Equation (<ref>) shows that the injection speed increases from a quasi-parallel to a quasi-perpendicular shock, and from a higher to a lower shock compression ratio. We note that the injection speed also depends on the shock speed, since U_ up is the upstream speed in the shock frame. The total injected particle number density is proportional to the volume the shock sweeps during a time step (and hence to the shock speed), but this is not a key point in explaining the East-West asymmetry of the peak intensity. Particles above the injection speed are regarded as the seed particles, which are assumed to follow a single power-law distribution,
f(E) = f(E_ inj^0) (E/E_ inj^0)^-δ,
where E is the particle energy, E_ inj^0 is the injection energy at the strongest parallel shock (i.e., θ_ BN=0^∘ and s=4), and δ is the spectral index. For different events, δ generally varies from 1.0 to 3.5 in the ambient solar wind prior to the arrival of the shocks <cit.>. In this work, we use δ=3.5 as the upper limit, but it is a free parameter that can be constrained by ambient populations. The spectral index is an important parameter that adjusts both the magnitude and the asymmetry of the injection efficiency, as discussed in <ref>. The ratio of the injected particle number density at an oblique part of the shock (N_1) to that at the parallel part of the shock (N_0) is given by,
N_1/N_0∝(E_ inj^1/E_ inj^0)^1-δ,
where E_ inj^1 is the injection energy at an oblique shock. Early observational studies suggested that 0.5% ∼ 1% of thermal solar wind protons are accelerated at interplanetary shocks <cit.>. Recent iPATH modelling works for realistic SEP events also suggested that the injection efficiency can vary from a few 0.1% to 1% <cit.>. Following the previous works <cit.>, the injection efficiency ϵ_0 at a strong parallel shock (i.e., β=4 and θ_ BN=0) is set to 1% in this work. Using Equations (<ref>) and (<ref>) and η=1/3, the injection efficiency ϵ_1 at an oblique shock becomes,
ϵ_1 = ϵ_0[((β-1)+tanθ_ BN)/3]^2-2δ.
Equation (<ref>) shows that the injection efficiency depends on the shock obliquity, the shock compression ratio, and the spectral index of the seed population. This is the key point in explaining the East-West asymmetry of the peak intensity of ESP events; we discuss it further in Section <ref>.
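The two expressions above (the ansatz injection speed with η=1/3 and the resulting efficiency) are easy to evaluate numerically; the sketch below is ours and only illustrates how ϵ_1 falls off toward quasi-perpendicular geometry. Note that U_up cancels in the efficiency ratio, so the efficiency depends only on s, θ_BN and δ.

```python
# Injection speed and efficiency vs. shock obliquity, following the
# expressions above (eta = 1/3; base case s = 4, theta_BN = 0, eps0 = 1%).
import numpy as np

def injection_speed(u_up, s, theta_bn_deg, eta=1/3):
    beta = 3 * s / (s - 1)
    return u_up * ((beta - 3 * eta) + np.tan(np.radians(theta_bn_deg)))

def injection_efficiency(s, theta_bn_deg, delta=3.5, eps0=0.01):
    beta = 3 * s / (s - 1)
    # injection speed relative to the base case (3 * U_up)
    x = ((beta - 1) + np.tan(np.radians(theta_bn_deg))) / 3.0
    return eps0 * x ** (2 - 2 * delta)

theta = np.linspace(0, 80, 9)                       # 0, 10, ..., 80 degrees
print(injection_efficiency(s=3.5, theta_bn_deg=theta))
```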
In Ding2020, the instantaneous particle distribution function f at the shock front 𝐫 at time step t_k is described by a power-law with an exponential tail,
f(𝐫, p,t_k) = c_1ϵ_𝐫n_𝐫p^-βH[p-p_ inj, 𝐫] exp(-E/E_b,𝐫),
where ϵ_r is the injection efficiency, n_r is the upstream solar wind density,
p_ inj,r is the particle injection momentum, and E_b,𝐫 is the kinetic energy that corresponds to a maximum proton momentum p_ max,r. H is the Heaviside function. c_1 is a normalization constant,
c_1 = 1/∫_p_ inj,r^+∞p^-β H[p-p_ inj,r]exp(-E/E_b,r)d^3p.
Downstream of the shock, the accelerated particles advect and diffuse in the shell model of iPATH. The detailed description of the shell model can be found in Zank+etal+2000,Hu+etal+2017. Tracking particles in each shell allows one to compute the particle spectrum downstream of the shock as a function of time, including the ESP phase when the shock passes over the spacecraft. Some examples of ESP events in the iPATH model can be found in Fu+2019. In this work, we focus on the peak intensity of ESP events and investigate the causes for the East-West asymmetry of the peak intensity in ESP events.
§ RESULTS
By way of example, we consider a CME with an eruption speed of 1600 km/s and a width of 120 degrees, launched at a heliocentric distance of 0.05 au. The CME propagates into a uniform solar wind with a speed of 400 km/s. The left panel of Figure <ref> shows the equatorial snapshot of the scaled number density n r^2 from 0.05 au to 2.0 au. The black curves represent nominal Parker field lines. The center of the CME propagates towards 0^∘. A total of 21 virtual observers at 1 au, separated by 5^∘ in longitude, are denoted by dots. In this work, instead of using the SC-flare angle to classify events as in the work of Duenas2022, we define an SC-CME deflection angle, Δϕ,
which is the longitudinal difference between the CME center and the spacecraft. Note that in observations it is harder to determine the direction of the CME center than the flare location; if we assume the CME center and the flare have the same longitude, our definition of Δϕ coincides with that in Duenas2022. A positive (negative) Δϕ indicates that the spacecraft is located to the east (west) of the CME center. To describe the shock geometry, we distribute the 21 observers into three groups: the green dots, corresponding to the shock nose, have Δϕ between -15^∘ and 15^∘; the blue dots, corresponding to the eastern flank, have Δϕ between 20^∘ and 50^∘; and the red dots, corresponding to the western flank, have Δϕ between -20^∘ and -50^∘.
The right panel in Figure <ref> shows the shock parameters as a function of the SC-CME deflection angle. The distributions of s and V_ shock are almost symmetrical with respect to 0^∘, but θ_ BN decreases from 70^∘ to 30^∘ as Δϕ increases from -50^∘ to 50^∘. The injection efficiency is calculated from these shock parameters using Equation (<ref>). For the same shock compression ratio and seed spectral index, the injection efficiency decreases as the shock obliquity angle increases. Therefore, the distribution of injection efficiency is asymmetric and peaks around Δϕ = 20^∘. This asymmetry is due to the longitudinal distribution of shock obliquity, indicating that a quasi-parallel shock is more favorable for seed particle injection.
Figure <ref> shows the proton peak intensity of ESP events at the 21 virtual observers versus the shock speed for three energy channels: 0.2 MeV/n, 1.1 MeV/n and 8.9 MeV/n. Since the particle intensity is largely affected by the shock speed <cit.>, the East-West asymmetry of the peak intensity is shown more clearly as a function of shock speed than of the SC-CME deflection angle. The blue and red lines correspond to Δϕ larger and smaller than 0^∘, respectively. The colors of the dots show the shock obliquity angle, and the size of the dots indicates the injection efficiency, with larger dots representing higher injection efficiency. The peak intensity at the eastern flank (Δϕ> 0^∘) is clearly larger than that at the western flank (Δϕ< 0^∘) for the energies shown. Setting aside the overall magnitude in the three energy channels, the fluctuations of the data points are similar across energies, because the injection efficiency is energy-independent. In this case, the difference in peak intensity between the eastern and western flanks at a similar shock speed is a factor of a few. It is difficult to validate this difference with current observations due to the limited number of multiple-spacecraft events. In the recent multiple-spacecraft studies of ESP events by ebert2016ApJ...831..153E,Duenas2022, for an individual event, the difference in peak intensity of ∼ 0.5 MeV/n ions at two spacecraft is mostly smaller than a factor of 10, and the shock speeds differ between the two spacecraft. We therefore expect the difference in peak intensity between East-West flanks with similar shock speeds to be modest. We note that the slope is larger in the 8.9 MeV/n channel, since shocks with low speed and low compression ratio are less efficient at accelerating particles to high energies.
To further investigate the East-West asymmetry of ESP events and to better compare with observational results, we consider five cases with different CME eruption speeds: 1300 km/s, 1600 km/s, 1900 km/s, 2200 km/s, and 2500 km/s. In the work of Gopalswamy2010, the average speed of CMEs with shocks is about 999 km/s and the fraction of partial/full halo CMEs (width ≥ 120 degrees) is 88%. Of these CMEs, the average speed of those with radio-loud shocks is 1237 km/s and the fraction of partial/full halo CMEs (width ≥ 120 degrees) is 96%; these radio-loud shocks are presumably the sources of large SEP events. Furthermore, Duenas2022 found that the angular width of ESP events has a clear threshold at a near-Sun CME speed of 1300 km/s, above which the ESP events show a significant longitudinal dependence.
Therefore, we also choose CME eruption speeds larger than 1300 km/s in our simulations, which is necessary to examine ESP events over a wide longitudinal extent. We further fix the CME width at 120^∘ in all cases to obtain adequate ESP events at both the eastern and western flanks of the shock. In general, the CME width does not strongly change the shock properties between the eastern and western flanks. Consequently, we focus on the relationship between CME speeds and the peak intensity of ESP events.
We divide the 21 virtual observers into the three groups shown in Figure <ref> and average the peak intensities and shock speeds within each group. We choose three energy channels: 0.2 MeV/n, 1.1 MeV/n and 8.9 MeV/n. The average peak intensities for the five cases versus the average shock speeds are shown in Figure <ref>. In each panel, green, blue and red symbols represent the observers at the shock nose, the eastern flank and the western flank, respectively, and the symbol labels indicate the different CME eruption speeds. This result allows us to compare the East-West asymmetry of the peak intensity with statistical results from observations. We utilize the same expression as in Duenas2022 to fit the relation between the shock speed (V) and the peak proton intensity (I):
log_10(I) = C_1log_10(V) + C_0,
where C_1 is the slope and C_0 is the y-intercept of the log-log fit. The fitting parameters and the coefficient of determination R^2 are labelled in the upper left of each panel. For all three energies, the peak intensity is highest at the shock nose (Δϕ∈ [-15^∘,15^∘]), and is always higher at the eastern flank (Δϕ∈ [20^∘,50^∘]) than at the western flank (Δϕ∈ [-50^∘,-20^∘]). We note that the highest peak intensity observed at central events is due to the high compression ratio (>3) near the shock nose in this study. As discussed in Section <ref>, a high compression ratio leads to a high injection efficiency. Between eastern and western events, the shock compression ratio is similar, as shown in Figure <ref>; therefore, the main difference in shock properties is the shock obliquity. The slope C_1 at the western shock flank is the largest, which suggests that the injection efficiency at the western shock flank is weaker than at the shock nose or eastern flank. Furthermore, the slope difference between the eastern and western flanks becomes larger at higher energies, suggesting that the quasi-parallel geometry at the eastern flank can accelerate protons to higher energies than the quasi-perpendicular geometry at the western flank.
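The fit above amounts to an ordinary least-squares regression in log10 space; the short sketch below is ours (with illustrative inputs) and returns C_1, C_0 and R^2.

```python
# Log-log least-squares fit of peak intensity against shock speed.
import numpy as np

def fit_peak_intensity(v_shock, peak_intensity):
    """OLS fit of log10(I) = C1 * log10(V) + C0; returns (C1, C0, R^2)."""
    x, y = np.log10(v_shock), np.log10(peak_intensity)
    C1, C0 = np.polyfit(x, y, 1)
    resid = y - (C1 * x + C0)
    r2 = 1 - resid.var() / y.var()
    return C1, C0, r2
```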
We now examine the East-West asymmetry for heavy ions.
At the shock front, ions resonate with the ambient or proton-excited waves. At the same pitch angle and the same energy/nucleon, ions with a lower charge-to-mass (Q/A) ratio resonate with a smaller wave number, with Q being the ion charge number and A being the ion mass in units of the proton mass. The iPATH model considers heavy ion acceleration through interaction with the wave turbulence generated by the streaming protons <cit.>. Li+2005a showed that the maximum particle energy/nucleon can be related to the rigidity dependence of the diffusion coefficient and suggested a cut-off energy/nucleon ∼ (Q/A)^2 at a parallel shock. Figure <ref> shows the peak intensity at 0.2 MeV/n, 1.1 MeV/n and 8.9 MeV/n for Helium (He), Oxygen (O) and Iron (Fe) as a function of shock speed. The format of this figure is the same as Figure <ref>. We also use Equation (<ref>) to fit the peak intensity for the three groups of observers. The East-West asymmetry of the peak intensity exists for all three ion species. At 0.2 MeV/n and 1.1 MeV/n, the slope C_1 for He, O and Fe is almost the same for all three groups of observers, because ions are well accelerated to 1.1 MeV/n in all cases. However, the slope C_1 shows large variations at 8.9 MeV/n. For instance, for the eastern group (blue), C_1= 4.6 ± 0.7 for He; C_1=5.1 ± 0.8 for O; and C_1=6.3 ± 1.1 for Fe. The typical values of Q/A for He, O, and Fe in gradual SEP events are 2/4, 7/16 and 12/56, respectively <cit.>. As the Q/A value of ions decreases, their maximum energy also decreases, resulting in a steeper slope of the fitting line for ions with smaller Q/A.
Now we compare the model results for ions with measurements. The Solar Isotope Spectrometer (SIS; Stone1998) and the Ultra-Low Energy Isotope Spectrometer (ULEIS; mason1998ultra) on board the Advanced Composition Explorer (ACE) are used to survey ∼0.1 – 35 MeV/n energetic Helium, ∼0.1 – 76 MeV/n Oxygen, and ∼0.034 – 31 MeV/n Iron ion observations during ESP events identified between August 1997 and December 2019. The list of events used in this study is obtained from the Near-Earth interplanetary CME (ICME) catalog (hereafter referred to as List 1), compiled by Richardson2010, the list of IP shocks observed during solar cycle 23 (List 2) in Gopalswamy2010, and the GMU CME/ICME List (List 3) <cit.>. List 1 provides disturbance times that are typically associated with the arrival of a shock at Earth or an observing spacecraft, starting from May 1996. List 2 and List 3 provide the CME originating flare position on the solar surface based on events from the first list. The total number of candidate events is 297. We then apply selection criteria to identify suitable events for our study. Specifically, we choose events where the energetic particle intensity profiles exhibit a synchronized increase of at least 200% above the pre-event background within ±5 hours of the shock arrival time in most of the energy bins, and no other dominant shocks are detected within a 1-day window. Using these restrictions, we identify 85 events that have the IP shock speed and SC-flare angle available. The list of events in this work is available at <https://doi.org/10.5281/zenodo.7969720> <cit.>.
Figure <ref> displays the peak intensity at 0.2 MeV/n, 1.1 MeV/n and 8.5 MeV/n for He, O and Fe as a function of interplanetary (IP) shock speed. Since data at 8.5 MeV/n are unavailable for Fe, we use 13.3 MeV/n instead. Using Equation (<ref>), we fit the peak intensity of the western observers (Δϕ < -30^∘) and eastern observers (Δϕ > 30^∘), denoted by the red and blue lines and dots, respectively. Because of the uncertainty in the CME propagation direction in observations, we choose the criterion of ± 30^∘ to better distinguish the eastern and western flanks; this is larger than the ± 20^∘ criterion used in the model calculation. With this criterion, we have 25 events from eastern observers, 14 from western observers, and 46 from central observers. We note that the number of data points differs between panels, since in some events there is no significant enhancement of particle intensity in the high energy channels.
The central observers' peak intensities (-30^∘ < Δϕ < 30^∘) are not displayed in this plot but can be found in <ref>. Because most ESP events are observed by central observers, these events can be caused by CME eruptions spanning a wide range of scales (e.g., CME width and speed), and their peak intensities can vary by several orders of magnitude at a similar shock speed. This large variation in peak intensity makes it difficult to fit the data accurately. However, the eastern and western ESP events are usually generated by wide shocks associated with large CME eruptions, and these shocks typically have different shock geometries at the eastern and western flanks. Therefore, we limit our discussion to the possible fitting trends of the eastern and western ESP events as shown in Figure <ref>.
We conduct two-sample t-tests to examine the significance of the difference between the eastern and western ESP events. In the 1.1 MeV/n panels, the t-test shows that, at the 95% confidence level, the mean peak intensity for eastern observers is larger than that for western observers. The other panels show larger p-values, so we cannot conclusively determine the significance of the difference between the eastern and western data. Nevertheless, the low p-values suggest a possible trend in which the peak intensities differ between eastern and western observers. From the linear fits, we find that the fitted peak intensity of the eastern observers (blue lines) is generally higher than that of the western observers (red lines), and the slope C_1 is larger for the western observers at 0.2 MeV/n, 1.1 MeV/n and 8.5 MeV/n. These features are similar to our model calculations. As discussed above, the injection efficiency is higher at a quasi-parallel shock, which is generally associated with the eastern shock flank; hence, we suggest that this East-West asymmetry is a consequence of asymmetric injection efficiency. However, the uncertainties of C_1 and C_0 are relatively large, and the coefficient of determination R^2 is small for the eastern events. Additionally, the peak intensity of events with similar shock speeds varies by up to 2-3 orders of magnitude. Several factors may contribute to this large variation. First, the spectral indices of the seed population may differ between events and even vary during an individual event; a recent study by Wijsen2023JGRA..12831203W suggests that the spectral indices of the seed population can be strongly altered by velocity shears in the solar wind (e.g., corotating interaction regions), and, as discussed in <ref>, the injection efficiency depends sensitively on the spectral index of the seed population. Second, variations in the shock compression ratio and shock obliquity can also contribute, as suggested by Equation (<ref>). Third, magnetic fluctuations can alter the injection energy as well as the maximum particle energy, as indicated by Equation (<ref>), which in turn affects the peak intensity of ions. We note that the number of eastern and western events is limited and not sufficient for a comprehensive statistical analysis; observations from multiple spacecraft are necessary to increase the statistical significance of the results and to confirm the observed East-West asymmetry. Nevertheless, the current study provides valuable insight into the possible mechanisms behind the observed asymmetry and can serve as a basis for future studies.
§ CONCLUSION
Recent observations <cit.> have shown that there is an East-West asymmetry of the peak intensity in ESP events, i.e., an asymmetric longitudinal distribution of the peak intensity relative to the flare center. In this work, we use the iPATH model to examine this asymmetry of the peak intensity in ESP events. Our results show that the injection efficiency peaks to the east of the CME center, where the shock geometry is quasi-parallel. Consequently, for a similar shock speed, the quasi-parallel portion of a shock corresponds to a higher particle intensity than the quasi-perpendicular portion. We also examine the Q/A dependence of this asymmetry for Helium, Oxygen and Iron. These heavy ions display a similar East-West asymmetry at low energies (e.g., 0.2 MeV/n). However, this asymmetry varies significantly at high energies because the maximum energy/nucleon of ions depends on Q/A. Our model results are in qualitative agreement with measurements of ESP events from the ACE observations. We note that the number of ESP events at the eastern and western shock flanks is still not sufficient for a comprehensive statistical study, and we have not considered multiple-spacecraft observations of ESP events in this work. We hope that joint observations from multiple spacecraft during solar cycle 25 can provide more evidence to reveal the East-West asymmetry of ESP events.
In summary, our study suggests that injection efficiency can be the key factor leading to the East-West asymmetry of the peak intensity in ESP events, which depends sensitively on the shock obliquity. Additionally, the Q/A dependence of the maximum particle energy affects this asymmetry for heavy ions.
§ OPEN RESEARCH
ACE data are publicly available at the CDAWeb database (<https://cdaweb.gsfc.nasa.gov/index.html/>). All simulation data presented in the paper and the list of ESP events are available at Zenodo via <https://doi.org/10.5281/zenodo.7969720> <cit.>.
This work is supported in part by NASA grants 80NSSC19K0075, 80NSSC19K0831 and 80NSSC19K0079 at UAH. Work at SwRI is supported by NASA grants 80NSSC20K1815 and NNX17AI17G. Supports by ISSI and ISSI-BJ through the international teams 469 is also acknowledged. N.W. acknowledges support from the NASA program NNH17ZDA001N-LWS and from the Research Foundation – Flanders (FWO – Vlaanderen, fellowship no. 1184319N). The work at KU Leuven has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 870405 (EUHFORIA 2.0) and the ESA project “Heliospheric modelling techniques" (Contract No. 4000133080/20/NL/CRS). These results were also obtained in the framework of the projects C14/19/089 (C1 project Internal Funds KU Leuven), G.0D07.19N (FWO-Vlaanderen), SIDC Data Exploitation (ESA Prodex-12), and Belspo project B2/191/P1/SWiM. For the computations we used the infrastructure of the VSC-Flemish Supercomputer Center, funded by the Hercules foundation and the Flemish Government-department EWI.
§ INJECTION ENERGY AND INJECTION EFFICIENCY
By way of example, we use the shock parameters for the case shown in Figure <ref> to compare the different formulae for the injection speed and the associated maximum proton energy at 1 au. As discussed in Section <ref>, the injection efficiency depends on the choice of injection formula, and so does the maximum particle energy, since the amplified wave strength is proportional to the number of injected particles. We calculate the injection energy, the maximum proton energy and the injection efficiency based on Equation (<ref>) and Equation (<ref>). We consider a strong parallel shock, with s=4 and θ_ BN=0^∘, as the base case. Its injection efficiency is set to ϵ_0 =1% and we use δ=3.5. The injection efficiency at oblique shocks can then be obtained accordingly. The injection efficiency ϵ_1 associated with Equation (<ref>) at oblique shocks is given by,
ϵ_1^Eq. 5 = ϵ_0(β-3 + 3tanθ_ BN)^2-2δ.
Similarly, the injection efficiency ϵ_1 from Equation (<ref>) can be expressed as
ϵ_1^Eq. 6 = ϵ_0((β-3η)+tanθ_ BN/4-3η)^2-2δ.
Figure <ref> compares the injection energy, the maximum proton energy and the injection efficiency as a function of SC-CME deflection angle (Δϕ) using Equation (<ref>) and Equation (<ref>). For Equation (<ref>), η=1, η=1/2 and η=1/3 are chosen. Lower values of η result in higher injection energy (E_ inj) and maximum energy (E_ max) due to their strong influence on the injection efficiency, as demonstrated in the right panel. Equation (<ref>) has the lowest injection efficiency since its injection energy has a stronger dependence on shock obliquity angle. For a reasonable estimate of the injection efficiency and its effects on the amplified wave intensity, we adopt Equation (<ref>) with η=1/3 as the injection speed threshold in the iPATH model.
In addition to the shock properties, the injection efficiency and its East-West asymmetry are also influenced by the spectral index of the seed population. Figure <ref> illustrates the comparison of the maximum proton energy and injection efficiency as a function of Δϕ for different spectral indices (δ=1,1.5,2.5,3.5). The injection energy is the same for all cases using Equation (<ref>) with η=1/3. The injection efficiency decreases with an increasing spectral index, leading to a decrease in the maximum proton energy at shock flanks. The plateau of E_ max near the shock nose is due to the wave intensity reaching the Bohm limit <cit.>. Note that the injection efficiency exhibits a more significant East-West asymmetry for larger δ, while there is no asymmetry of injection efficiency when δ=1.
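For concreteness, the following sketch (ours) evaluates the two injection-efficiency expressions above as functions of θ_BN, with ϵ_0 = 1% and δ = 3.5 as in the base case. Identifying β with the compression ratio s (so that ϵ_1 = ϵ_0 at θ_BN = 0 for s = 4) is our reading of the base case, not a definition stated here.

```python
import numpy as np

eps0, delta = 0.01, 3.5   # base-case injection efficiency and seed spectral index
s = 4.0                   # compression ratio; beta = s reproduces eps_1 = eps_0
                          # at theta_BN = 0 (our inference from the base case)

def eps1_eq5(theta_bn, beta=s):
    """Injection efficiency from Equation (5) at obliquity theta_bn (radians)."""
    return eps0 * (beta - 3.0 + 3.0 * np.tan(theta_bn)) ** (2.0 - 2.0 * delta)

def eps1_eq6(theta_bn, beta=s, eta=1/3):
    """Injection efficiency from Equation (6); eta = 1/3 is the iPATH choice."""
    return eps0 * ((beta - 3.0 * eta + np.tan(theta_bn)) / (4.0 - 3.0 * eta)) ** (2.0 - 2.0 * delta)

for deg in (0, 15, 30, 45, 60):
    th = np.radians(deg)
    print(f"theta_BN = {deg:2d} deg:  Eq.(5) -> {eps1_eq5(th):.2e}   Eq.(6) -> {eps1_eq6(th):.2e}")
```

Both expressions decrease steeply with obliquity, illustrating why the quasi-parallel (eastern) flank injects particles more efficiently.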
§ SUPPLEMENTARY FIGURE ON ESP INTENSITY
Figure <ref> provides additional information on the peak intensity of ESP events at western, eastern and central observers, denoted by red, blue and green dots and lines, respectively. The central events, shown as the green dots with -30^∘ < Δϕ < 30^∘, have the largest number of data points. These events are often associated with the passage of the shock nose, which has the highest likelihood of generating ESP events. However, the peak intensities can vary by several orders of magnitude at a similar IP shock speed. As shown in Figure <ref>, most ESP events are observed by central observers, including some small events with low peak intensity. This large variability of the peak intensity may reflect the complexity and diversity of the shock parameters for central events. In comparison, eastern and western events are usually caused by large SEP events in which wide CME-driven shocks are often observed. The variability of these events is therefore smaller than that of the central events.
For instance, the sample of eastern and western events of 0.2 MeV/n ions (blue and red dots) tends to have a higher peak intensity, whereas there are many central events with a low peak intensity.
Because the central events may include many smaller events, we exclude them in this study and focus on the western and eastern events which typically have distinct differences in shock obliquity.
|
http://arxiv.org/abs/2307.01060v1
|
20230703143929
|
Hierarchical Interpolative Factorization for Self Green's Function in 3D Modified Poisson-Boltzmann Equations
|
[
"Yihui Tu",
"Zhenli Xu",
"Haizhao Yang"
] |
physics.comp-ph
|
[
"physics.comp-ph"
] |
Hierarchical Interpolative Factorization for Self Green's Function in 3D Modified Poisson-Boltzmann Equations
Yihui Tu^1, Zhenli Xu^2, Haizhao Yang^3
^1 School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
^2 School of Mathematical Sciences, MOE-LSC, CMA-Shanghai and Shanghai Center
for Applied Mathematics, Shanghai Jiao Tong University, Shanghai, 200240, China
^3 Department of Mathematics, University of Maryland, College Park, MD, 20742, USA
August 1, 2023
==============================================================================================================================================================================================================================================================================================================================================================================================
The modified Poisson-Boltzmann (MPB) equations are often used to describe equilibrium particle distribution of ionic systems. In this paper, we propose a fast algorithm to solve MPB equations with the self Green's function as the self energy in three dimensions, where the solution of the self Green's function poses a computational bottleneck due to the need to solve a high-dimensional partial differential equation.
Our algorithm combines the selected inversion with the hierarchical interpolative factorization for the self Green's function, extending our previous two-dimensional result.
This leads to an O(Nlog N) algorithm through the strategical utilization of locality and low-rank characteristics of the corresponding operators. Furthermore, the estimated O(N) complexity is obtained by applying cubic edge skeletonization at each level for thorough dimensionality reduction.
Extensive numerical experiments demonstrate the accuracy and efficiency of the proposed algorithm for problems in three dimensions.
Keywords: Selected Inversion; Hierarchical Interpolative Factorization; Linear Scaling; Self Green's Function; Modified Poisson-Boltzmann Equations.
§ INTRODUCTION
Electrostatic interaction plays an important role in many systems at the nano-/microscale, such as biomolecules, supercapacitors and charged soft matter <cit.>.
To provide a continuum description of charged systems, the Poisson-Boltzmann (PB) theory <cit.>, based on the mean-field assumption, is a typical implicit-solvent model describing the ionic distribution. It fails, however, to account for many-body effects, such as ion correlation and dielectric fluctuation, that are essential to describe the electrostatic behavior of many systems.
Various modified theories have been proposed <cit.> to account for many-body effects, together with many numerical methods <cit.>.
The Gaussian variational field theory <cit.> presents a promising approach to account for long-range Coulomb correlation, including dielectric variation <cit.>. This theory considers the self energy of a test ion as a correction to the mean-field potential energy, which is described by the self Green's function.
By taking into account the self-energy correction, the effect of dielectric inhomogeneity can be incorporated <cit.>.
The self Green's function used in the field theory satisfies the generalized Debye-Hückel (GDH) equation. The numerical solution of the GDH equation is computationally expensive due to its high spatial dimensions.
Based on a finite-difference discretization, the self Green's function corresponds to the diagonal of the inverse of the discrete elliptic differential operator of the GDH equation. The aim of our study is to calculate the self energy in the GDH equation, which accelerates the numerical solution of the MPB equations and requires an efficient algorithm to determine the diagonal elements of the matrix inverse.
A straightforward method for extracting the diagonal of the matrix inverse is to first compute the entire inverse and then extract the diagonal. This naive approach has computational complexity O(N^3), the same as that of matrix factorization. In electronic structure and electrostatic correlation calculations, considerable effort has been devoted to developing efficient methods for obtaining the diagonal of a matrix inverse. A promising direction is fast algorithms that exploit sparsity and low-rankness. The selected inversion method was proposed by Lin et al. <cit.> with O(N^3/2) computational complexity for 2D problems and O(N^2) for 3D problems, based on a hierarchical decomposition of the computational domain Ω. The method consists of two phases: constructing the hierarchical Schur complements of the interior points for the blocks of the domain in a bottom-up pass, and then extracting the diagonal entries efficiently in a top-down pass by taking advantage of the hierarchical local dependence of the inverse matrices.
To further improve the efficiency of this method, Lin et al. <cit.> exploited a supernodal left-looking LDL factorization of the matrix, which significantly reduces the prefactor of the computational complexity. Additionally, Xia et al. <cit.> applied structured multifrontal LDL factorizations to achieve O(N poly(log N)) complexity.
Recently, the hierarchical interpolative factorization (HIF) <cit.> has been proposed, combining the multifrontal method <cit.> with recursive dimensional reduction via frontal skeletonization. This approach generates an approximate generalized LU/LDL decomposition at linear or quasi-linear estimated cost. In contrast to previous methods <cit.> that use fast structured representations to work with entire fronts implicitly, the HIF reduces the fronts explicitly. As a result, the HIF significantly reduces the resources needed for 3D problems and performs well on large-scale problems.
More recently, the selected inversion with the HIF (SelInvHIF) was proposed <cit.>: the supernodal left-looking LDL factorization is replaced with the HIF, and the extraction phase is modified to approximate the diagonal of the matrix inverse within O(N) operations for 2D problems.
In this work, we further extend the SelInvHIF to three-dimensional problems, with O(N log N) complexity using face skeletonization and O(N) complexity by skeletonizing cubic faces and then edges.
For convenience, the former algorithm is still called SelInvHIF and the latter "SelInvHIF with edge skeletonization".
The computational complexity of the algorithms is established through theoretical derivation and various numerical examples. The MPB equation, introduced in a subsequent section, serves as our test problem for verifying the scaling of the algorithm in three dimensions.
The rest of the paper is organized as follows.
Section <ref> introduces the skeletonization of matrix factorization and presents the SelInvHIF algorithm in detail.
Then iterative solvers are described for MPB equations in Section <ref>.
In Section <ref>, we present various numerical results obtained using the SelInvHIF algorithm.
Finally, we make the conclusion of the paper and discuss future work in Section <ref>.
§ THE SELINVHIF ALGORITHM
Initially, we discuss the details of SelInvHIF, followed by the introduction of SelInvHIF with edge skeletonization in Section <ref>. The SelInvHIF algorithm comprises two steps. In the first step, hierarchical Schur complements are constructed for the diagonal blocks of matrix A, obtained from a uniform discretization of the differential operator on a rectangular domain Ω. In the second step, the diagonal elements of A^-1 are extracted from the constructed hierarchy of Schur complements.
Prior to the introduction of the formal description of the SelInvHIF algorithm, we give a brief overview of the skeletonization of matrix factorization.
Let us fix some basic notation and state the results needed for our algorithm. Given a matrix A, A_p q or A(I, J) denotes the submatrix of A with rows and columns restricted to the ordered index sets involved, where p, q, r, I and J denote ordered sets of indices. For simplicity, matrix A is assumed to be symmetric and nonsingular, given by
A = [[ A_pp A_qp^T ; A_qp A_qq A_rq^T; A_rq A_rr ]]
which is defined over the indices (p, q, r). In this block structure, p corresponds to the degrees of freedom (DOFs) of the interior points of a subdomain 𝒟 of Ω, q to the DOFs on the boundary ∂𝒟, and r to the DOFs in the exterior Ω / 𝒟, which is often very large. In general, the set q separates p from r.
Let G = A^-1 and G_1 = A_1^-1. G_pp is the submatrix of G corresponding to the row and column index set p, and A_1 is the Schur complement of A_pp, i.e.,
A_1=[[ A_qq-A_qpA_pp^-1A_qp^T A_rq^T; A_rq A_rr; ]].
One of the key tools in the SelInvHIF is a crucial observation from the selected inversion method <cit.>: to compute G_pp, only the block (G_1)_qq, i.e., the entries of G_1 whose indices interact with p in A, is needed, rather than the whole inverse of the Schur complement.
This implies that G_pp is determined by (G_1)_qq.
Furthermore, the diagonal block (G_1)_qq can itself be calculated from a diagonal block of the inverse of the Schur complement of a submatrix of A_1. In other words, a diagonal block of A^-1 is computed from a diagonal block of the inverse of a smaller matrix; applying this observation recursively yields an efficient algorithm for computing the diagonal of A^-1.
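The following numpy sketch (ours, with arbitrary block sizes and a random symmetric test matrix obeying the block pattern above, where A_rp = 0) verifies the observation: G_pp is recovered exactly from A_pp, A_qp and the (q,q) block of A_1^-1.

```python
import numpy as np

rng = np.random.default_rng(0)
npp, nq, nr = 4, 3, 5                 # sizes of the index sets p, q, r
n = npp + nq + nr
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
A[:npp, npp + nq:] = 0; A[npp + nq:, :npp] = 0   # q separates p from r: A_rp = 0
A += 2 * n * np.eye(n)                           # diagonally dominant, hence nonsingular

App, Aqp = A[:npp, :npp], A[npp:npp + nq, :npp]
G = np.linalg.inv(A)                             # dense reference, for checking only

# Schur complement A_1 of A_pp: the trailing (q, r) block after eliminating p.
A1 = A[npp:, npp:] - A[npp:, :npp] @ np.linalg.solve(App, A[:npp, npp:])
G1qq = np.linalg.inv(A1)[:nq, :nq]               # only the (q, q) block is used

# Selected-inversion identity: G_pp from A_pp, A_qp and (G_1)_qq alone.
K = np.linalg.solve(App, Aqp.T)                  # A_pp^{-1} A_qp^T
Gpp = np.linalg.inv(App) + K @ G1qq @ K.T
print(np.allclose(Gpp, G[:npp, :npp]))           # True
```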
The interpolative decomposition (ID) <cit.> for low-rank matrices, based on Lemma <ref> below, is the second frequently used tool in the SelInvHIF. Suppose a disjoint partition q = q̂∪q̌ with |q̂|=k is used. The sets q̂ and q̌ are referred to as the skeleton and redundant indices, respectively.
Assume A ∈ℝ^m× n has rank k ≤min(m,n) and let q be the set of all column indices of A. Then there exists a matrix T_q ∈ℝ^k× (n-k) such that A_:,q̌ = A_:,q̂ T_q.
Specifically, the redundant columns of matrix A can be represented by the skeleton columns and the associated interpolation matrix from Lemma <ref>, and the following formula holds,
A[[ I ; -T_q I; ]]=
[[ A_:,q̌ A_:,q̂; ]]
[[ I ; -T_q I; ]]
=
[[ 0 A_:,q̂; ]].
Eq.(<ref>) indicates that the sparsification of matrix A is feasible by multiplying a triangular matrix which is formed from the interpolation matrix T_q in Lemma <ref>.
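In practice the ID is computed only approximately, up to a prescribed tolerance. The small sketch below (our example, not the authors' code) illustrates Lemma <ref> numerically with SciPy's interpolative-decomposition routines:

```python
import numpy as np
import scipy.linalg.interpolative as sli

rng = np.random.default_rng(1)
# A numerically low-rank 60 x 80 matrix (rank ~ 5 plus tiny noise).
A = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 80))
A += 1e-10 * rng.standard_normal(A.shape)

k, idx, proj = sli.interp_decomp(A, 1e-8)   # rank-revealing ID at tolerance 1e-8
skel, redun = idx[:k], idx[k:]              # skeleton / redundant column indices
T_q = proj                                  # interpolation matrix of Lemma above
# Redundant columns are reproduced by the skeleton columns: A[:, redun] ~ A[:, skel] T_q
err = np.linalg.norm(A[:, redun] - A[:, skel] @ T_q) / np.linalg.norm(A)
print(k, err)                               # k close to 5, err near 1e-10
```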
The utilization of (<ref>) facilitates the elimination of redundant DOFs of a dense matrix featuring low-rank off-diagonal blocks, yielding a structured matrix of the form (<ref>). This idea is referred to as block inversion with skeletonization and is reflected in Lemma <ref>. It is worth noting that the idea of skeletonization was first introduced in the HIF method <cit.>.
Let symmetric matrix A have the following form
A =
[[ A_pp A_qp^T; A_qp A_qq ]],
where A_qp is numerically low-rank. The interpolation matrix T_p satisfies A_qp̌ = A_qp̂ T_p with p = p̂∪p̌. Without loss of generality, rewrite
A = [[ A_p̌p̌ A_p̂p̌^T A_qp̌^T; A_p̂p̌ A_p̂p̂ A_qp̂^T; A_qp̌ A_qp̂ A_qq ]]
and define
Q_p =
[[ I ; -T_p I ; I ]].
Let A̅≜ Q_p^TA Q_p. Then one has
A̅ =
[[ B_p̌p̌ B_p̂p̌^T ; B_p̂p̌ A_p̂p̂ A_qp̂^T; A_qp̂ A_qq ]],
with B_p̌p̌ = A_p̌p̌-T_p^TA_p̂p̌-A_p̂p̌^TT_p+T_p^TA_p̂p̂T_p, and B_p̂p̌ = A_p̂p̌ - A_p̂p̂T_p.
Further suppose that B_p̌p̌ is nonsingular. Let G = A^-1, G̅ = A̅^-1, G_1 = G_p̂∪ q, p̂∪ q, and A̅_1 be the Schur complement of B_p̌p̌, i.e.,
A̅_1=
[[ A_p̂p̂-B_p̂p̌B_p̌p̌^-1B_p̂p̌^T A_qp̂^T; A_qp̂ A_qq; ]],
and G̅_1 = A̅_1^-1. Then, by Eq. (<ref>) the following formulas holds,
G_p̌p̌ = G̅_p̌p̌ = B_p̌p̌^-1+[ -B_p̌p̌^-1B_p̂p̌^T 0 ]G̅_1[ -B_p̌p̌^-1B_p̂p̌^T 0 ]^T,
G_1 =
[[ T_pB_p̌p̌^-1T_p^T ; 0 ]]+
[[ T_pB_p̌p̌^-1B_p̂p̌^T+I ; I ]]G̅_1[[ B_p̂p̌B_p̌p̌^-1T_p^T+I ; I ]].
Lemma <ref> shows that computing G_p̌p̌ requires only the values of G̅_1 associated with row and column indices in p̂, rather than the whole inverse of the Schur complement. Thus, G_p̌p̌ is determined by (G̅_1)_p̂p̂, a diagonal block of A̅_1^-1, which has a smaller size than the original matrix A. Although A̅_1 may be dense, if it has low-rank off-diagonal blocks, then the same approach used in Eq. (<ref>) can be applied to compute a diagonal block of G̅_1, resulting in a recursive algorithm that efficiently computes the diagonal blocks of G.
This skeletonization technique was proposed by Ho and Ying <cit.>, based on the observation that the Schur complements have specific low-rank structures. Specifically, A_pp^-1, obtained from a local differential operator, often has low-rank off-diagonal blocks, and numerical experiments indicate that the Schur complement A_qq-A_qpA_pp^-1A_qp^T possesses the same rank structure. In the following subsection, we use Lemma <ref> to generate hierarchical Schur complements for the diagonal blocks of A.
§.§ Hierarchy of Schur complements
To achieve a hierarchical disjoint partition of the differential operator on the domain Ω, bipartitioning is performed in each dimension, resulting in leaf domains of size r_0× r_0× r_0 and L integer levels in total.
Domain Ω is discretized on a grid of size N^1/3× N^1/3× N^1/3 = r_0 2^L-1× r_0 2^L-1× r_0 2^L-1 and is associated with a matrix A of size N× N.
Furthermore, to take advantage of the low-rankness in A, L-1 fractional levels are introduced between the L integer levels.
The hierarchy of Schur complements is constructed at levels 1, 3/2, 2, 5/2, …, and L.
Let us consider the case of r_0=6 and L=3 to describe the process in detail without loss of generality.
Initially, the entire domain is regarded as the top level (Level 3) and is partitioned into eight blocks at the next level (Level 2). Each block is further partitioned into eight sub-blocks at a lower level (Level 1), resulting in a total of 2^L-1× 2^L-1× 2^L-1=64 blocks at the bottom level, as illustrated in Figure <ref>. In addition,
one fractional level is considered between two consecutive integer levels and the low-rank matrices that represent the fronts between domain blocks are reduced into skeletons by this level.
§.§.§ Bottom level
The initial index set J_0 follows the row-major ordering, while domain Ω is hierarchically partitioned at level ℓ=1 into 2^L-ℓ× 2^L-ℓ× 2^L-ℓ=4× 4× 4 disjoint blocks.
All points within each block are classified into interior and boundary points, where the former are not related to the points in other blocks, and the latter are related to the neighboring points in other blocks.
The interior points are denoted as I_1;ijk (shown in gray or light blue in Figure <ref>) and the boundary points are denoted as J_1;ijk (shown in black or blue in Figure <ref>) for each block, where i,j,k=1,2,3,4 are the indices of the blocks in each dimension.
The differential operators have a locality property, which implies that A(I_1;ijk,I_1;i'j'k') = 0 (or A(I_1;ijk,J_1;i'j'k') = 0) if (i,j,k) ≠ (i',j',k').
The interior points are removed using block inversion, which reduces the problem to the boundary points.
To achieve this, appropriate row and column permutations are applied to matrix A, defined with the index set J_0, in order to place all of the interior points in front of the boundary points.
Actually, matrix A can be permuted into a new matrix by a permutation matrix P_1 as follows,
A_1 = P_1^-1A P_1=
[[ U_1 V_1^T; V_1 W_1 ]]
with index set (I_1|J_1), U_1 = A_1(I_1,I_1), V_1 = A_1(J_1,I_1), and W_1 = A_1(J_1,J_1). Here I_1 represents the indices of all interior points, denoted as I_1 = I_1;111I_1;121...I_1;444, and J_1 represents the indices of all boundary points, denoted as J_1=J_1;111J_1;121...J_1;444.
Due to the locality property, both U_1 and V_1 are block diagonal matrices. Figure <ref> shows that interior points in different blocks are not connected. Boundary points in each block are only connected to the interior points in the same block. Furthermore, U_1 and V_1 are of the following form,
U_1 =
[[ U_1;111 ; U_1;121 ; ⋱ ; U_1;444 ]],
V_1 =
[[ V_1;111 ; V_1;121 ; ⋱ ; V_1;444 ]],
with U_1;ijk = A_1(I_1;ijk,I_1;ijk) and V_1;ijk = A_1(J_1;ijk,I_1;ijk), for i,j,k=1,2,3,4.
Using Gaussian elimination, one can obtain,
A_1^-1 =
L_1^T[[ U_1^-1 ; (W_1-V_1U_1^-1V_1^T)^-1 ]]L_1,
with
L_1 =
[[ I ; -V_1U_1^-1 I ]].
Since U_1 is a block diagonal matrix with each diagonal block of size (r_0-2)^3 × (r_0-2)^3, its inverse can be computed directly.
By using the block diagonal matrices V_1 and U_1^-1, V_1U_1^-1 is also a block diagonal matrix and can be computed independently within each block,
V_1U_1^-1 =
[[ V_1;111U_1;111^-1 ; V_1;121U_1;121^-1 ; ⋱ ; V_1;444U_1;444^-1 ]].
Similarly, the block diagonal matrix V_1U_1^-1V_1^T is expressed as
V_1U_1^-1V_1^T =
[[ V_1;111U_1;111^-1V_1;111^T ; V_1;121U_1;121^-1V_1;121^T ; ⋱ ; V_1;444U_1;444^-1V_1;444^T ]].
Combining (<ref>) and (<ref>), one has
G = P_1A_1^-1P_1^-1 = P_1L_1^T[[ U_1^-1 ; G_1 ]]L_1P_1^-1,
where G_1=(W_1-V_1U_1^-1V_1^T)^-1 is the inverse of the Schur complement of U_1. Consequently, removing the interior points from matrix A reduces the problem to a smaller one.
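As a sanity check of this one-level elimination, the following numpy sketch (ours, on a small random symmetric matrix) verifies the factorization identity above: the inverse of the two-by-two block matrix is assembled from U_1^-1 and the inverse of the Schur complement.

```python
import numpy as np

rng = np.random.default_rng(2)
ni, nb = 6, 4                                    # interior / boundary block sizes
n = ni + nb
A1 = rng.standard_normal((n, n)); A1 = (A1 + A1.T) / 2 + 2 * n * np.eye(n)
U, V, W = A1[:ni, :ni], A1[ni:, :ni], A1[ni:, ni:]

Uinv = np.linalg.inv(U)
S = W - V @ Uinv @ V.T                           # Schur complement of U_1
L1 = np.block([[np.eye(ni), np.zeros((ni, nb))],
               [-V @ Uinv,  np.eye(nb)]])
mid = np.block([[Uinv, np.zeros((ni, nb))],
                [np.zeros((nb, ni)), np.linalg.inv(S)]])
# A_1^{-1} = L_1^T blkdiag(U_1^{-1}, S^{-1}) L_1, as in the display above.
print(np.allclose(L1.T @ mid @ L1, np.linalg.inv(A1)))   # True
```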
§.§.§ Fractional level
At this level, the aim is to obtain G_1 in (<ref>) on index set J_1, which corresponds to the boundary points of the domain blocks at the first level.
Domain Ω is partitioned into 64 blocks and the total number of faces in blocks is 384 in this example with L=3 (see Figure <ref> (a)).
Each face consists of the DOFs inside the corresponding area and some of the DOFs on its boundary. A face interacts not only within its own block but also with faces of neighboring blocks. The associated matrix admits low-rank off-diagonal blocks, since the DOFs of a face interact with only a small number of neighboring blocks. Lemma <ref> is therefore applied to skeletonize the DOFs on the faces of each block: an ID is used to select the redundant and skeleton DOFs approximately in each block, and the interpolation matrix T_q is recorded as in Lemma <ref>.
Figure <ref> (a) shows the ith face, where the redundant DOFs are denoted by I_3/2;i, the skeleton DOFs by J_3/2;i, and the associated interpolation matrix by T_3/2;i.
Similar to the bottom level, an appropriate permutation matrix P_3/2 is designed to move all of the redundant points in front of the skeleton points and to reindex J_1 accordingly. The matrix
W_1-V_1U_1^-1V_1^T can then be permuted by the permutation matrix P_3/2 as follows,
A_3/2 = P_3/2^-1(W_1-V_1U_1^-1V_1^T)P_3/2 =
[[ U_3/2 V_3/2^T; V_3/2 W_3/2 ]]
with index set (I_3/2|J_3/2),
U_3/2 = A_3/2(I_3/2,I_3/2), V_3/2 = A_3/2(J_3/2,I_3/2), and the dense matrix
W_3/2 = A_3/2(J_3/2,J_3/2). Here I_3/2 represents the indices of all redundant points, denoted as I_3/2;1I_3/2;2...I_3/2;384, and J_3/2 represents the indices of all skeleton points, such that it can be denoted as J_3/2;1J_3/2;2...J_3/2;384.
Denote T_3/2 by a block diagonal matrix
T_3/2 =
[[ -T_3/2;1 ; ⋱ ; -T_3/2;384 ]],
and arrange a |J_1|× |J_1| matrix
Q_3/2 =
[[ I ; T_3/2 I; ]].
Thus, the new matrix is updated
A̅_3/2 = Q_3/2^TA_3/2Q_3/2 =
[[ U̅_3/2 V̅_3/2^T; V̅_3/2 W_3/2 ]],
where U̅_3/2 and V̅_3/2 are block diagonal matrices with
U̅_3/2(I_3/2;i,I_3/2;j) = 0, V̅_3/2(J_3/2;i,I_3/2;j) = 0, ∀ i ≠ j.
Similarly, one can obtain the following inverse by Gaussian elimination,
A̅_3/2^-1 =
L_3/2^T[[ U̅_3/2^-1 ; G_3/2 ]]L_3/2
with
L_3/2=
[[ I ; -V̅_3/2U̅_3/2^-1 I ]],
G_3/2 = (W_3/2-V̅_3/2U̅_3/2^-1V̅_3/2^T)^-1
as in Lemma <ref>. Since, -V̅_3/2U̅_3/2^-1 and V̅_3/2U̅_3/2^-1V̅_3/2^T are block diagonal matrices, they can be computed independently within each block. Thus,
G_1≈ P_3/2Q_3/2L_3/2^T[[ U̅_3/2^-1 ; G_3/2 ]]L_3/2Q_3/2^TP_3/2^-1.
Therefore, the inversion problem is reduced to a smaller matrix W_3/2-V̅_3/2U̅_3/2^-1V̅_3/2^T by eliminating the redundant DOFs as in Lemma <ref>.
§.§.§ Middle level
At Level ℓ=2, the domain Ω consists of 2^L-ℓ× 2^L-ℓ× 2^L-ℓ=2× 2× 2 blocks with interior and boundary points. As at the previous integer level, a permutation matrix P_2 is applied to reindex the points in J_3/2 into I_2 and J_2,
J_3/2P_2⟶ (I_2;11I_2;12I_2;21I_2;22|J_2;11J_2;12J_2;21J_2;22):=(I_2|J_2).
Use the same strategy as at Level 1 and denote
A_2 = P_2^-1(W_3/2-V̅_3/2U̅_3/2^-1V̅_3/2^T)P_2 =
[[ U_2 V_2^T; V_2 W_2 ]]
with
U_2 = A_2(I_2,I_2), V_2 = A_2(J_2,I_2), and W_2 = A_2(J_2,J_2).
It can be observed that matrices U_2 and V_2 possess a block diagonal structure. Thus,
G_3/2 = P_2L_2^T[[ U_2^-1 ; G_2 ]]L_2P_2^-1,
and
L_2 =
[[ I ; -V_2U_2^-1 I ]], and
G_2 = (W_2-V_2U_2^-1V_2^T)^-1.
Finally, the problem is reduced to the smaller matrix W_2-V_2U_2^-1V_2^T by removing the interior points. The remaining DOFs after elimination at level ℓ=2 are shown in Figure <ref> (b).
§.§.§ Fractional level
As at level ℓ=3/2, the aim here is to find G_2, indexed by J_2. Again, the domain Ω is divided into 8 blocks with 48 faces. Through the ID, the redundant DOFs I_5/2;i and the skeleton DOFs J_5/2;i of the ith face are distinguished, and the interpolation matrix T_5/2;i is recorded. J_2 is reindexed with a permutation matrix P_5/2 such that
J_2P_5/2⟶ (I_5/2;1I_5/2;2I_5/2;3I_5/2;48|J_5/2;1J_5/2;2J_5/2;3J_5/2;48):=(I_5/2|J_5/2).
Denote
T_5/2 =
[[ -T_5/2;1 ; ⋱ ; -T_5/2;48 ]]
and a |J_2|× |J_2| matrix
Q_5/2 =
[[ I ; T_5/2 I; ]].
Then
A̅_5/2 = Q_5/2^TP_5/2^-1(W_2-V_2U_2^-1V_2^T)P_5/2Q_5/2 =
[[ U̅_5/2 V̅_5/2^T; V̅_5/2 W_5/2 ]]
with
U̅_5/2(I_5/2;i,I_5/2;j) = 0, V̅_5/2(J_5/2;i,I_5/2;j) = 0, ∀ i ≠ j.
Therefore,
G_2≈ P_5/2Q_5/2L_5/2^T[[ U̅_5/2^-1 ; G_5/2 ]]L_5/2Q_5/2^TP_5/2^-1,
with
L_5/2 =
[[ I ; -V̅_5/2U̅_5/2^-1 I ]], and
G_5/2 = (W_5/2-V̅_5/2U̅_5/2^-1V̅_5/2^T)^-1.
Note that U̅_5/2 and V̅_5/2 are block diagonal.
The matrix inversion problem has now been reduced to W_5/2-V̅_5/2U̅_5/2^-1V̅_5/2^T. The remaining DOFs after skeletonization at level ℓ=5/2 are shown in Figure <ref> (c).
§.§.§ Top level
Partition domain Ω into 2^L-ℓ× 2^L-ℓ× 2^L-ℓ=1× 1× 1 block, i.e., no partition at this level (Figure <ref> (d)). Similarly, the index set J_5/2 is reindexed by partitioning it into the union of an interior index set I_3 and a boundary index set J_3, using a permutation matrix P_3 as follows:
J_5/2P_3⟶ (I_3|J_3).
Thus, one has
G_5/2 = P_3L_3^T[[ U_3^-1 ; G_3 ]]L_3P_3^-1,
with
L_3 =
[[ I ; -V_3U_3^-1 I ]],and
G_3 = (W_3-V_3U_3^-1V_3^T)^-1.
At the top level, G_3 is computed by direct inversion because of its small size.
§.§.§ The algorithm for the hierarchy of Schur complements
In this section, the construction of the hierarchy of Schur complements for matrix A on an N^1/3× N^1/3× N^1/3 grid is summarized. At each integer level, the points of each block are divided into interior and boundary points; the interior points interact only with points within the same block, and are therefore reindexed and eliminated. At each fractional level, face skeletonization is applied: an ID distinguishes the redundant from the skeleton points, and the redundant points, which interact only with points in the same cell, are reindexed and eliminated.
The relationships between levels are defined as follows,
G_ℓ =
A^-1, ℓ = 0;
(W_ℓ-V_ℓU_ℓ^-1V_ℓ^T)^-1, ℓ is integer;
(W_ℓ-V̅_ℓU̅_ℓ^-1V̅_ℓ^T)^-1, ℓ is fractional.
Based on (<ref>), it follows the recursive relation with integer ℓ,
G_ℓ-1≈ P_ℓ-1/2Q_ℓ-1/2L_ℓ-1/2^T[[ U̅_ℓ-1/2^-1 ; G_ℓ-1/2 ]]L_ℓ-1/2Q_ℓ-1/2^TP_ℓ-1/2^-1,
G_ℓ-1/2 = P_ℓL_ℓ^T[[ U_ℓ^-1 ; G_ℓ ]]L_ℓP_ℓ^-1.
Therefore, the hierarchy of Schur complements can be constructed starting from Level 1; the steps are summarized in Algorithm <ref>. Note that the reindexing is implicit in Algorithm <ref> through the use of the index sets I_ℓ;ijk and J_ℓ;ijk (or I_ℓ;i and J_ℓ;i) for A_ℓ.
§.§ Extracting the diagonals of the matrix inverse
After the hierarchy of Schur complements is constructed, one can proceed to extract the diagonals of the matrix G. It is important to note that computing the entire Schur complement G_ℓ is not required.
This is based on the following observations,
G_ℓ-1(I_ℓ;ijkJ_ℓ;ijk,I_ℓ;ijkJ_ℓ;ijk) is determined by G_ℓ-1/2(J_ℓ;ijk,J_ℓ;ijk),
G_ℓ-1/2(I_ℓ-1/2;iJ_ℓ-1/2;i,I_ℓ-1/2;iJ_ℓ-1/2;i) is determined by G_ℓ(J_ℓ-1/2;i,J_ℓ-1/2;i).
For the purpose of extracting relevant information, we begin with considering the top level ℓ=L=3. G_5/2 can be calculated using the following formula with given G_3,
G_5/2 = P_3[[ U_3^-1+U_3^-1V_3^TG_3V_3U_3^-1 -U_3^-1V_3^TG_3; -G_3V_3U_3^-1 G_3 ]]P_3^-1.
The submatrices enclosed in the brackets are indexed by (I_3|J_3), while G_5/2 is indexed by J_5/2 = J_5/2;1J_5/2;2… J_5/2;48 as a result of the permutation matrix P_3. To extract the diagonal elements, however, it suffices to compute the blocks G_5/2(J_5/2;i,J_5/2;i) rather than the off-diagonal blocks. Consequently, G_5/2 can be represented as:
G_5/2 =
[[ G_5/2;1 * ⋯ * ; * G_5/2;2 ⋯ * ; ⋮ ⋮ ⋱ ⋮ ; * * ⋯ G_5/2;48 ]]
with G_5/2;i = G_5/2(J_5/2;i,J_5/2;i), i=1,2,…,48, where the off-diagonal blocks marked * are not needed.
This recovers the diagonal blocks of G_5/2 needed at the previous level. Furthermore, the diagonal blocks of G_2 are obtained based on the observation in (<ref>),
G_2≈ P_5/2[[ ℋ_2 -U̅_5/2^-1V̅_5/2^TG_5/2 + ℋ_2T_5/2^T; -G_5/2V̅_5/2U̅_5/2^-1 + T_5/2ℋ_2 ℌ_2 ]]P_5/2^-1
with
ℋ_2=U̅_5/2^-1+U̅_5/2^-1V̅_5/2^TG_5/2V̅_5/2U̅_5/2^-1,
and
ℌ_2 = T_5/2ℋ_2T_5/2^T-G_5/2V̅_5/2U̅_5/2^-1T_5/2^T-T_5/2U̅_5/2^-1V̅_5/2^TG_5/2+G_5/2.
All matrices in the bracket of (<ref>) are indexed by (I_5/2|J_5/2). G_2 is indexed by
J_2 = J_2;111J_2;121J_2;211J_2;221J_2;112J_2;122J_2;212J_2;222
due to the permutation matrix P_5/2.
Recalling the construction process, T_5/2, U̅_5/2^-1, and V̅_5/2 are block diagonal matrices, so the diagonal blocks of both of the following matrices are obtained directly,
U̅_5/2^-1V̅_5/2^TG_5/2V̅_5/2U̅_5/2^-1 =
[[ U̅_5/2;1^-1V̅_5/2;1^TG_5/2;1V̅_5/2;1U̅_5/2;1^-1 ⋯ *; ⋮ ⋱ ⋮; ⋯ U̅_5/2;48^-1V̅_5/2;48^TG_5/2;48V̅_5/2;48U̅_5/2;48^-1 ]],
G_5/2V̅_5/2U̅_5/2^-1T_5/2^T =
[[ G_5/2;1V̅_5/2;1U̅_5/2;1^-1T_5/2;1^T ⋯ *; ⋮ ⋱ ⋮; ⋯ G_5/2;48V̅_5/2;48U̅_5/2;48^-1T_5/2;48^T ]].
This means that only block-block multiplications are needed to obtain the elements of the diagonal blocks of ℋ_2 and ℌ_2, which greatly reduces the computational cost. Additionally, the diagonal blocks G_2(J_2;ijk,J_2;ijk) are obtained directly.
At Level 2, one has
G_3/2 = P_2[[ U_2^-1+U_2^-1V_2^TG_2V_2U_2^-1 -U_2^-1V_2^TG_2; -G_2V_2U_2^-1 G_2 ]]P_2^-1.
Similarly, the submatrices in the brackets of (<ref>) are indexed by (I_2|J_2), and G_3/2 is indexed by J_3/2 = J_3/2;1⋯ J_3/2;384 via the permutation matrix P_2. Moreover, only G_3/2(J_3/2;i,J_3/2;i) needs to be computed in this step.
At Level 3/2, one has
G_1≈ P_3/2[[ ℋ_1 -U̅_3/2^-1V̅_3/2^TG_3/2 + ℋ_1T_3/2^T; -G_3/2V̅_3/2U̅_3/2^-1 + T_3/2ℋ_1 ℌ_1 ]]P_3/2^-1,
with
equationsectionℋ_1=U̅_3/2^-1+U̅_3/2^-1V̅_3/2^TG_3/2V̅_3/2U̅_3/2^-1,
and
equationsectionℌ_1 = T_3/2ℋ_1T_3/2^T-G_3/2V̅_3/2U̅_3/2^-1T_3/2^T-T_3/2U̅_3/2^-1V̅_3/2^TG_3/2+G_3/2.
The diagonal blocks of ℋ_1 and ℌ_1 can be computed efficiently using block-block multiplication, as at level ℓ=5/2. The submatrices in the brackets of equation (<ref>) are indexed by
(I_3/2|J_3/2). Due to the permutation matrix P_3/2, matrix G_1 is indexed by J_1 = J_1;111J_1;121⋯ J_1;444, and only the diagonal blocks G_1(J_1;ijk,J_1;ijk) are needed.
At the bottom level ℓ=1, the same procedure is applied: G_1(J_1;ijk,J_1;ijk) is obtained from Level 3/2, while G(J_0;ijk,J_0;ijk) is computed directly. Consequently, the diagonal elements of G are obtained by combining the diagonal blocks from all levels.
Finally, the diagonal elements of G are extracted recursively with quasi-linear scaling; the procedure is organized in Algorithm <ref>. As before, the reindexing is implicit in Algorithm <ref> through the use of the index sets J_ℓ;ijk (or J_ℓ;i) for G_ℓ.
§.§ The SelInvHIF with edge skeletonization
In Section <ref>, the construction step of the SelInvHIF combined interior-point elimination with face skeletonization. SelInvHIF with edge skeletonization advances this approach by additionally skeletonizing edges: extra levels are introduced to achieve a complete dimensionality reduction.
Recall that in the SelInvHIF the hierarchy of Schur complements is constructed at levels 1, 3/2, 2, 5/2, …, and L. In SelInvHIF with edge skeletonization, the construction step is instead carried out at levels 1, 4/3, 5/3, 2, 7/3, …, and L. Specifically, at each integer level the points are reindexed and the interior points are eliminated accordingly; face skeletonization is performed at level ℓ+1/3 and edge skeletonization at level ℓ+2/3. The relationships between levels are defined as in <ref>. Furthermore, one obtains the following recursive relations for integer ℓ,
G_ℓ-1≈ P_ℓ-2/3Q_ℓ-2/3L_ℓ-2/3^T[[ U̅_ℓ-2/3^-1 ; G_ℓ-2/3 ]]L_ℓ-2/3Q_ℓ-2/3^TP_ℓ-2/3^-1,
G_ℓ-2/3≈ P_ℓ-1/3Q_ℓ-1/3L_ℓ-1/3^T[[ U̅_ℓ-1/3^-1 ; G_ℓ-1/3 ]]L_ℓ-1/3Q_ℓ-1/3^TP_ℓ-1/3^-1,
G_ℓ-1/3 = P_ℓL_ℓ^T[[ U_ℓ^-1 ; G_ℓ ]]L_ℓP_ℓ^-1.
Similar to Section <ref>, the diagonal of the matrix G can be extracted based on the hierarchy of Schur complements. The following observations show that computing the entire G_ℓ is not required,
G_ℓ-1(I_ℓ;ijkJ_ℓ;ijk,I_ℓ;ijkJ_ℓ;ijk) is determined by G_ℓ-2/3(J_ℓ;ijk,J_ℓ;ijk),
G_ℓ-2/3(I_ℓ-2/3;iJ_ℓ-2/3;i,I_ℓ-2/3;iJ_ℓ-2/3;i) is determined by G_ℓ-1/3(J_ℓ-2/3;i,J_ℓ-2/3;i),
G_ℓ-1/3(I_ℓ-1/3;iJ_ℓ-1/3;i,I_ℓ-1/3;iJ_ℓ-1/3;i) is determined by G_ℓ(J_ℓ-1/3;i,J_ℓ-1/3;i).
Based on <ref>, the recovery process for G can be carried out as in Section <ref>. Hence, the construction and extraction steps of the SelInvHIF with edge skeletonization are described in Algorithms <ref> and <ref>.
§.§ Computational Complexity
In this section, the computational complexity of the SelInvHIF is analyzed. Assume that the domain consists of N = N^1/3× N^1/3× N^1/3 points and set N^1/3=2^L with ℓ_max<L. The number of blocks at level ℓ is denoted by n_B(ℓ), and the following formula holds
n_B(ℓ) =O(2^3(ℓ_max-ℓ)).
The number of points in each block (cubic face or cubic edge) is denoted by n_P(ℓ). Note that interior or redundant points from previous levels are not counted, since they have already been eliminated.
To estimate n_P(ℓ), we rely on the assumption made in <cit.> regarding the skeletonization.
The assumption states that the typical skeleton size is:
n_P(ℓ) =
O(ℓ), for edges;
O(2^ℓ), for faces.
First, we consider the construction step of the SelInvHIF, which involves the following steps of Algorithm <ref>.
At integer level ℓ, one computes U_ℓ;ijk^-1 (Line 9) for each block, multiplies the inverse with V_ℓ;ijk to obtain K_ℓ;ijk (Line 10), and updates A_ℓ+1/2(J_ℓ;ijk,J_ℓ;ijk) (Line 11).
At fractional level ℓ+1/2, T_ℓ+1/2;k is obtained via an ID for each cell (Line 16); the cost of this step is O(n_P(ℓ)^3) since each cell interacts with only O(1) cells. One then applies T_ℓ+1/2;k (Lines 19, 20, and 21), multiplies the inverse of U̅_ℓ+1/2;k (Line 25) with V̅_ℓ+1/2;k to obtain K̅_ℓ+1/2;k (Line 26), and finally updates A_ℓ+1(J_ℓ+1/2;k,J_ℓ+1/2;k) (Line 27).
Thus, the computational cost of these steps at each level is O(n_P(ℓ)^3), and the total computational complexity is
∑_ℓ=1^ℓ_maxn_B(ℓ)n_P(ℓ)^3≤ C∑_ℓ=1^ℓ_max2^3 ℓ_max-3ℓ2^3 ℓ≤ C_0 Nlog N,
where C and C_0 are constants. The total computational cost of the construction step is therefore O(Nlog N) with ℓ_max=O(L).
In addition, consider the extraction phase, whose steps are shown in Algorithm <ref>. At integer level ℓ, one calculates G_ℓ-1/2(I_ℓ;ijk,I_ℓ;ijk) (Line 3) and G_ℓ-1/2(J_ℓ;ijk,I_ℓ;ijk) (Line 4) for each block.
At fractional level ℓ-1/2, G_ℓ-1(I_ℓ-1/2;k,I_ℓ-1/2;k) (Line 10),
G_ℓ-1(J_ℓ-1/2;k,I_ℓ-1/2;k) (Line 12) and G_ℓ-1(J_ℓ-1/2;k,J_ℓ-1/2;k) (Line 14) are calculated for each cell. The computational cost of these steps at each level is O(n_P(ℓ)^3), so the complexity of the extraction phase is also O(Nlog N).
As for the computational complexity of SelInvHIF with edge skeletonization, the cost for each step at each level is also n_B(ℓ)n_P(ℓ)^3. Thus, the total computational complexity is
∑_ℓ=1^ℓ_maxn_B(ℓ)n_P(ℓ)^3≤ C∑_ℓ=1^ℓ_max2^3 ℓ_max-3ℓℓ^3≤ C_0 N,
where C and C_0 are constants. The total computational cost of the construction step is O(N) with ℓ_max=O(L).
Finally, the quasi-linear scaling of the SelInvHIF and the linear scaling of the SelInvHIF with edge skeletonization are established. Although SelInvHIF with edge skeletonization achieves O(N) complexity, some fill-in is generated after edge skeletonization, which brings additional computational cost. Consequently, the optimal complexity is reached only when N is suitably large, which makes a direct application to the MPB equation of interest challenging. Therefore, only the SelInvHIF is used in the subsequent numerical examples.
§ NUMERICAL METHOD FOR MPB EQUATIONS
In this section, an iterative solver for the MPB equation is proposed. We adopt the following governing equations for the whole space <cit.>, based on Gaussian variational field theory <cit.>,
{[ ∇·η(r) ∇ϕ-χΛ e^-Ξ c(r) / 2sinhϕ=-2 ρ_f(r),; [∇·η(r) ∇-χΛ e^-Ξ c(r) / 2coshϕ] G(r, r^')=-4 πδ(r-r^'),; ].
with the potential ϕ, the relative dielectric function η(r), the density of fixed charge ρ_f(r), and the Green's function G(r, r^'). The coupling parameter Ξ and the rescaled fugacity Λ are given for specific problems.
The function χ(r) equals 1 in the region accessible to ions and 0 elsewhere.
The correlation function c(r) reads
c(r)=lim _r→r^'[G(r, r^')-1 /η(r)|r-r^'| ].
A self-consistent iterative scheme is utilized to solve the partial differential equations (<ref>), as described in <cit.>. This scheme comprises two alternating steps: first, given c(r), the modified Poisson-Boltzmann (PB) equation (the first equation) is solved to obtain the potential ϕ with the given boundary conditions; second, given c(r) and ϕ, the GDH equation (the second equation) is solved to obtain G and a new c(r). These two steps are iterated until the solution reaches the desired convergence criterion. The iterative scheme is expressed mathematically as <cit.>:
{[ ∇·η(r) ∇ϕ^(k+1)-Λ e^-Ξ c^(k)/2sinhϕ^(k+1)=-2 ρ_f(r),; [∇·η(r) ∇-Λ e^-Ξ c^(k)/2coshϕ^(k+1)] G^(k+1)=-4 πδ(r-r^'),; c^(k+1)(r)=lim _r→r^'[G^(k+1)(r, r^')-1 /η(r)|r-r^'|], ].
where the superscript k=0,1,...,K indicates the kth iteration step.
The PB steps can be solved efficiently using standard direct solvers; the key step of the self-consistent iteration is the GDH equation, which is reformulated in each iteration. In three dimensions, a finite-difference approximation of the equation yields the following algebraic system:
A G=I
where matrix A is given by
A=h^3/4 π[D H D^T+diag{P}]
with P the vector of values of the coefficient function p(r), G the lattice Green's function, D the difference matrix of the operator ∇, and I the identity matrix. The solution of the Green's function is thus equivalent to a matrix inversion, G=A^-1. Computing the full inverse directly is expensive and unnecessary: only diag(G) is needed to obtain the self energy c(r). Thus, the SelInvHIF is employed to extract the diagonal of the inverse.
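To illustrate the structure of A, the sketch below (our construction, under simplifying assumptions: uniform dielectric η = 1, a constant sample vector P, Dirichlet boundaries, and a small 8^3 grid) assembles the seven-point operator and extracts diag(A^-1) by dense inversion, i.e., the O(N^3) step that SelInvHIF replaces.

```python
import numpy as np
import scipy.sparse as sp

m, h = 8, 1.0 / 8                       # m^3 grid, mesh width h
I1 = sp.eye(m)
d = sp.diags([1.0, -1.0], [0, -1], shape=(m + 1, m)) / h   # 1D difference, Dirichlet BC
# Discrete gradient stacking x-, y- and z-differences; D in the text is its transpose,
# so D H D^T below corresponds to Dgrad^T H Dgrad.
Dgrad = sp.vstack([sp.kron(sp.kron(d, I1), I1),
                   sp.kron(sp.kron(I1, d), I1),
                   sp.kron(sp.kron(I1, I1), d)])
H = sp.eye(Dgrad.shape[0])              # eta(r) = 1: uniform dielectric
P = np.full(m**3, 0.05)                 # sample values of p(r); problem-dependent
A = (h**3 / (4 * np.pi)) * (Dgrad.T @ H @ Dgrad + sp.diags(P))

diag_G = np.diag(np.linalg.inv(A.toarray()))   # O(N^3) reference for diag(A^{-1});
print(diag_G[:4])                              # SelInvHIF computes this vector in O(N log N)
```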
§ NUMERICAL RESULTS
To assess the effectiveness of the SelInvHIF, we present numerical results for the MPB equations in three dimensions. The scaling of the computational time is of particular interest. We set the coupling parameter Ξ=1 and the uniform fugacity parameter Λ=0.05.
For both the PB and the self-consistent iterations, the error criteria are set at 10^-8.
The initial value for the potential in the iteration is always constant in our instances, with ϕ^(0)=0. Note that the choice of tolerance in the ID step depends on the problem at hand. Both the PB and GDH stages use Dirichlet boundary conditions. The calculations are executed on a machine with an Intel Xeon 2.2GHz CPU and 2TB of memory.
All experiments are performed in Matlab with the FLAM package <cit.> for hierarchical matrices.
Before solving the MPB equation, we first present an illustrative case: evaluating the diagonal elements of the inverse of an elliptic differential operator.
The reported computation times are averages over five measurements.
Example 1: The discrete elliptic differential operator (3D). For three-dimensional problems, we first compute the diagonal of the inverse of a discrete elliptic differential operator obtained from a seven-point stencil discretization. The diagonal of the inverse matrix is computed using both the SelInvHIF and the exact approach described in <cit.>.
The diagonals of the inverse of the discrete operator computed by the two methods are denoted d_s and d_e, respectively.
Table <ref> presents the absolute L^2 error E_a=√(∑(d_s-d_e)^2/N) between the numerical results and the reference solution obtained by the exact method, for increasing matrix sizes. Additionally, Table <ref> reports the relative L^2 error E_r=d_s-d_e_2 / d_e_2, validating the accuracy of the SelInvHIF. Furthermore, the computational time of the algorithm is displayed in Table <ref>, and Figure <ref> confirms the quasi-linear scaling of the SelInvHIF.
Table <ref> illustrates the relationship between the rank of the ID step and the numerical error. This relationship demonstrates that the error can be effectively controlled as the rank of the ID step increases.
Example 2: The charge density with a delta function. In this example, we consider a discontinuous charge distribution in the region [0, L]^3 with L = 32. The fixed charge density is a face charge:
ρ_f(x)=δ(x-L / 2).
The MPB equations are solved with the SelInvHIF. Figure <ref> visualizes the distribution of the potential in this system at z=L/2 for matrix sizes N=16^3, 32^3, 48^3 and 64^3.
The potential is symmetric with respect to x=L/2 and y=L/2, reflecting the symmetry of the fixed charge, and is most pronounced at x=L/2 due to the presence of the charge.
Table <ref> shows the accuracy of the whole algorithm for the potential ϕ, compared to a reference potential computed with a sufficiently large grid size N=64^3; the observed first-order accuracy is due to the discontinuity of the derivative of the potential at x=L/2. Furthermore, Table <ref> also reports the computational time of the algorithm, verifying the quasi-linear scaling of the SelInvHIF.
Example 3: The charge density with a continuous function. In this example, we consider a continuous charge distribution in the region [0, L]^3 with L = 32. The fixed charge density is:
ρ_f(x)=sin(π x/L).
The MPB equations are then solved with the SelInvHIF, using accuracy 10^-8 in the ID step.
Figure <ref> visualizes the distribution of the potential in this system at x=L/2 for matrix sizes N=16^3, 32^3, 48^3 and 64^3, displaying the convergence.
The potential is most pronounced at x = L/2, where the charge density is largest.
Table <ref> shows the accuracy of the whole algorithm for the potential ϕ, compared to a reference potential computed with a sufficiently large grid size N=64^3, verifying the convergence of our algorithm. Furthermore, Table <ref> also reports the computational time, verifying the quasi-linear scaling of the SelInvHIF.
§ CONCLUSIONS
This paper develops the SelInvHIF, a fast algorithm for solving the MPB equations.
The SelInvHIF integrates hierarchical interpolative factorization with selected inversion to compute the diagonal of the inverse of a sparse matrix discretized from an elliptic differential operator, achieving O(Nlog N) complexity (and O(N) with edge skeletonization) in both operations and memory. The proposed algorithm was applied to three-dimensional MPB problems and demonstrated impressive performance in terms of both accuracy and efficiency.
§ ACKNOWLEDGMENT
Y. Tu and Z. Xu acknowledge the financial support from the National Natural Science Foundation of China (grant No. 12071288), Science and Technology Commission of Shanghai Municipality (grant Nos. 20JC1414100 and 21JC1403700) and Strategic Priority Research Program of Chinese Academy of Sciences (grant No. XDA25010403). H. Yang thanks the support of the US National Science Foundation under award DMS-1945029.
|
http://arxiv.org/abs/2307.02629v1
|
20230705195934
|
Compressibility measures for two-dimensional data
|
[
"Lorenzo Carfagna",
"Giovanni Manzini"
] |
cs.DS
|
[
"cs.DS",
"math.CO"
] |
Compressibility measures for two-dimensional data
Lorenzo Carfagna, Giovanni Manzini
August 1, 2023
=============================================================================================
In this paper we extend to two-dimensional data two recently introduced one-dimensional compressibility measures: the γ measure defined in terms of the smallest string attractor, and the δ measure defined in terms of the number of distinct substrings of the input string.
Concretely, we introduce the two-dimensional measures γ_2D and δ_2D as natural generalizations of γ and δ and study some of their properties. Among other things, we prove that δ_2D is monotone and can be computed in linear time, and
we show that although it is still true that δ_2D≤γ_2D, the gap between the two measures can be Ω(√(n)) for families of n× n matrices, and is therefore asymptotically larger than the gap in one dimension. Finally, we use the measures γ_2D and δ_2D to provide the first analysis of the space usage of the two-dimensional block tree introduced in [Brisaboa et al., Two-dimensional block trees, The Computer Journal, 2023].
§ INTRODUCTION
Since the recent introduction of the notion of string attractor <cit.> different measures of string repetitiveness have been proposed or revisited <cit.>. It has been shown that such measures are more appropriate than the classical statistical entropy for measuring the compressibility of highly repetitive strings. In addition, these measures have been used to devise efficient compressed indices for highly repetitive string collections <cit.> an important setting which is hard for traditional entropy-based compressed indices.
In this paper we generalize the notion of attractor to two-dimensional data, i.e., (square) matrices of symbols, and we initiate the study of the properties of the measure γ_2D(M), defined as the size of a smallest attractor for the matrix M (Definition <ref>). As in the one-dimensional case, we also introduce the measure δ_2D(M), defined in terms of the number of distinct square submatrices of M (Definition <ref>), and we study the relationship between γ_2D and δ_2D. We prove that some properties that hold for strings remain valid in the two-dimensional case: for example, computing γ_2D is NP-complete while δ_2D can be computed in linear time, and for every matrix M it is δ_2D(M) ≤γ_2D(M). However,
the gap between the two measures is larger than in the one-dimensional case, since there are families of n× n matrices with δ_2D = O(1) and γ_2D = Ω(√(n)), whereas for strings it is always γ = O(δlogn/δ).
The study of the measures γ_2D and δ_2D is motivated by the fact that for two-dimensional data there is no clear definition of the "context" of a symbol, and therefore no universally accepted notion of statistical entropy. Alternative compressibility measures based on combinatorial properties, such as γ_2D and δ_2D, are therefore worth investigating. Indeed, in Section <ref> we use the measures γ_2D and δ_2D to provide the first analysis of the space usage of the two-dimensional block tree introduced in <cit.>. In particular, we show that the space used by a two-dimensional block tree for an n× n matrix M with delta measure δ_2D is bounded by O((δ_2D + √(n))logn/δ_2D), and that this space is optimal within a multiplicative factor O(log n).
For the rest of the paper, the RAM model of computation is assumed, with word size w=Θ(log n) bits. Space is measured in words, so a bound of O(x) words corresponds to O(x log n) bits.
§ TWO-DIMENSIONAL COMPRESSIBILITY MEASURES
We consider a square matrix M ∈Σ^n× n of size n× n where each of the n^2 symbols M[i][j] of M is drawn from the alphabet Σ with |Σ|=σ. Every symbol in Σ is assumed to appear in M, otherwise Σ is restricted accordingly. A submatrix of M with topmost left cell M[i,j] is said to start at position (i,j) of M. An a × b submatrix of M starting at position (i,j) is written as M[i:i+a-1][j:j+b-1], meaning that it includes every cell with row index in the range [i,i+a-1] and column index in [j,j+b-1].
In this section two new repetitiveness measures for square matrices, called γ_2D and δ_2D, are proposed as generalisations of the measures γ and δ for strings introduced in <cit.> and <cit.>, respectively.
An attractor Γ_2D for a square matrix M∈Σ^n× n is a set of positions of M: Γ_2D⊆{1,...,n}×{1,...,n} such that any square submatrix has an occurrence crossing (including) a position p = (i,j) ∈Γ_2D. The measure γ_2D(M) is defined as the cardinality of a smallest attractor for M.
We say that a position p=(i,j)∈Γ_2D(M) covers a submatrix I of M if there exists an occurrence of I crossing p, and that a set of positions covers I if it includes a position p which covers I; when clear from the context, the argument M is omitted from Γ_2D(M).
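To make the definition concrete, here is a small brute-force checker (our sketch, using 0-based positions and intended only for tiny matrices): it verifies that every distinct square submatrix has an occurrence crossing the candidate set.

```python
import numpy as np
from itertools import product

def is_attractor_2d(M, positions):
    """Brute-force check of the definition above (0-based positions):
    every distinct k x k submatrix must have an occurrence crossing `positions`."""
    n = M.shape[0]
    for k in range(1, n + 1):
        covered = {}                                  # submatrix content -> covered so far?
        for i, j in product(range(n - k + 1), repeat=2):
            key = M[i:i + k, j:j + k].tobytes()
            crossing = any(i <= r < i + k and j <= c < j + k for (r, c) in positions)
            covered[key] = covered.get(key, False) or crossing
        if not all(covered.values()):
            return False
    return True

# A 4x4 matrix whose rows all equal "abba": positions on the first row at
# columns {0, 2} (i.e. {1, 3} 1-based) suffice, mirroring the projection
# argument of the lemma below.
M = np.array([list(b"abba")] * 4)
print(is_attractor_2d(M, {(0, 0), (0, 2)}))   # True
```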
As a first result we show that, not surprisingly, the problem of finding the size of a smallest attractor is NP-complete also in two dimensions. The NP-completeness proof is done considering the decision problem “is there an attractor of size k for the given input?”.
Given a string S ∈Σ^n, let R^S ∈Σ^n× n be the square matrix where each row is equal to the string S. Then there exists an (1-dim) attractor for S of size k if and only if there exists a (2-dim) attractor of size k for R^S.
Given S and the corresponding R^S, the following observations hold: 1) any submatrix of R^S has an occurrence starting at the same column but on the first row of R^S; 2) any two ℓ×ℓ submatrices of R^S are equal if and only if the two respective substrings of S composing their rows are equal, formally: R^S[i:i+ℓ-1][j:j+ℓ-1]=R^S[i':i'+ℓ-1][j':j'+ℓ-1] if and only if S[j,j+ℓ-1]=S[j',j'+ℓ-1].
From 1) and 2) the lemma follows: given a string attractor Γ(S) for S of size k, the set = {(1,j) : j ∈Γ(S)} of size k is a two dimensional attractor for R^S and, vice versa, a string attractor Γ for S could be obtained from a matrix attractor Γ_2D(R^S) for R^S projecting each couple by column index, formally, Γ={j : (i,j) ∈Γ_2D(R^S)} is a one dimensional attractor for S. Note that if Γ_2D(R^S) is a smallest attractor it does not include two positions on the same column, because, any distinct submatrix crossing one position has an occurrence (starting in the same column but at a different row) which crosses the other, hence in this case the projection does not generate any column index collision and |Γ|=|Γ_2D(R^S)|=k, otherwise, in case of collision, Γ could be completed with any k-|Γ| positions not in Γ to reach size k=|Γ_2D(R^S)|.
As an immediate consequence of the above lemma we have the following result.
Computing γ_2D is NP-complete.
It is easy to see that γ_2D≥σ and that γ_2D is insensitive to transpositions but, as for strings <cit.>, γ_2D is not monotone. We show this by providing a family of matrices, built from the counterexample used in <cit.> to disprove the monotonicity of γ, each containing a submatrix with strictly larger γ_2D.
γ_2D is not monotone.
Let w be the string abbba^nab with n>0, having γ(w)=3, which is minimal and attained by the set of positions Γ(w) = {2,4,n+5}. The string w· b = abbba^nabb, obtained by appending the letter b to w, has the smaller measure γ(w· b)=2, attained by Γ(w· b) = {4,n+5} <cit.>, since the prefix w[1,3]=abb, which occurs as a suffix of w· b, is already covered by position n+5 in Γ(w· b).
Consider R^w· b of size (n+7)× (n+7). From Lemma <ref> it follows that γ_2D(R^w· b)=γ(w· b)=2, but the submatrix R^w· b[1:n+6][1:n+6], which is equal to R^w, has the larger measure γ_2D(R^w)=γ(w)=3.
§.§ The measure
The measure δ(S) for a string S, formally defined in <cit.> and previously introduced in <cit.> to approximate the output size of the Lempel-Ziv parsing, is the maximum over k∈ [1,|S|] of the expression d_k(S)/k, where d_k(S) is the number of distinct substrings of length k in S. We now show how to generalize this measure to two dimensions by introducing the measure δ_2D, which is defined in a similar way, considering k × k submatrices instead of length-k substrings.
Given M∈Σ^n× n, let d_k× k(M) be the number of distinct k× k submatrices of M, then
δ_2D(M)= max{d_k× k(M)/k^2 : k ∈ [1 , n]}
The measure δ_2D preserves some good properties of δ: δ_2D is invariant under transposition, and it decreases or grows by at most 1 after a single cell edit, since each d_k× k of the updated matrix can differ by at most k^2 from the initial one, so each ratio d_k× k/k^2 changes by at most 1. Moreover, δ_2D is monotone: given a submatrix M' of M of size ℓ×ℓ with ℓ≤ n, any submatrix of M' appears somewhere in M, hence d_k× k(M')≤ d_k× k(M) for any k∈[1,ℓ]⊆ [1,n].
The next lemma shows that, as in the one-dimensional setting, δ_2D is upper bounded by γ_2D.
δ_2D(M)≤γ_2D(M) for any matrix M∈Σ^n× n
Let Γ_2D be an attractor of least size for M, i.e. |Γ_2D|=γ_2D. For any k ∈ [1,n], an attractor position p ∈Γ_2D is included in at most k^2 distinct k × k submatrices, hence at least d_k× k(M)/k^2 distinct positions are needed in Γ_2D to cover all k× k submatrices of M. Formally, |Γ_2D| ≥ d_k× k(M)/k^2 holds for any k∈[1,n], and in particular for the k^*∈[1,n] such that δ_2D= d_k^*× k^*(M)/(k^*)^2.
One of the main reasons for introducing δ was that it can be computed efficiently: <cit.> describes a linear-time algorithm computing δ(S) with a single visit of the suffix tree of S. We now show that an efficient algorithm for computing δ_2D can be derived in a similar way using the Isuffix tree introduced in <cit.>, which can be built in O(n^2) time, i.e. in time linear in the size of the input. A somewhat simpler algorithm can be obtained using the Lsuffix tree <cit.>, but its construction takes O(n^2log n) time.
The Isuffix tree IST(A) of a matrix A∈Σ^n× m generalises the Suffix Tree to matrices: IST(A) is a compacted trie representing all square submatrices of A.
The Isuffix tree adopts a linear representation of a square matrix C∈Σ^q× q: let IΣ = ⋃_i=1^∞Σ^i; each string in IΣ is considered an atomic Icharacter. The unique Istring associated to the matrix C is I_C∈IΣ^2q-1, where I_C[2i+1] with i∈ [0,q) is the (i+1)^th column-type Icharacter C[1:i+1][i+1], and I_C[2i] with i∈ [1,q) is the i^th row-type Icharacter C[i+1][1:i]. See Figure <ref> for an example.
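Since the figure is not reproduced here, the following Python sketch (ours, 0-based indexing) makes the linearisation concrete; the 2× 2 sample matrix C is chosen so that its Istring matches the 3^rd-Iprefix example a· a· ba discussed next:

def istring(C):
    # Icharacters of a q x q matrix: odd (1-based) positions are column-type
    # C[1:i+1][i+1], even positions are row-type C[i+1][1:i]
    q = len(C)
    chars = []
    for i in range(q):
        if i > 0:
            chars.append("".join(C[i][:i]))                    # row-type
        chars.append("".join(C[r][i] for r in range(i + 1)))   # column-type
    return chars

C = [list("ab"), list("aa")]
print(istring(C))   # -> ['a', 'a', 'ba']; its 3rd Iprefix encodes C[1:2][1:2]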
The k^th Iprefix of C is defined as the concatenation of the first k Icharacters I_C[1]· I_C[2] ·…· I_C[k]=I_C[1,k] of I_C. Note that an Iprefix ending in an odd position k is the Istring of the ℓ×ℓ square submatrix with ℓ=⌈ k/2 ⌉ starting at C's top-left corner, that is, C[1:ℓ][1:ℓ]. For the example in Figure <ref>, the 3^rd Iprefix of C is a a ba which corresponds to the submatrix C[1:2][1:2].
Given A∈Σ^n× n, for 1 ≤ i,j ≤ n, the Isuffix I_A_ij of A is defined as the Istring of the largest square submatrix A_ij of A with upper-left corner at position (i,j). From the above definitions it is clear that the Istring of any square submatrix of A is an Iprefix (ending at an odd position) of some Isuffix I_A_ij. To ensure that no Isuffix I_A_ij is an Iprefix of another Isuffix, A is completed with an additional bottom row and right column containing 2n+1 distinct new symbols $_1, …, $_2n+1. For simplicity, in the following A denotes the input matrix already enlarged with the $_i symbols. See Figure <ref> for an example.
The Isuffix tree IST(A) is a compacted trie over the alphabet IΣ representing all the n^2 distinct Isuffixes I_A_ij of A with, among others, the following properties <cit.>: 1) each edge is labeled with a non-empty Isubstring I_A_ij[ℓ_1,ℓ_2] of an Isuffix I_A_ij; this label is represented in constant space as the quadruple ⟨ i,j,ℓ_1,ℓ_2 ⟩, and the Isubstrings on any two sibling edges start with different Icharacters; 2) each internal node has at least two children and there are exactly n^2 leaves representing all the Isuffixes of A: letting L(u) be the Istring obtained by concatenating the Isubstrings on the path from the root to a node u, for any leaf l_ij the Istring L(l_ij) is equal to the linear representation I_A_ij of the unique suffix A_ij; 3) the Isuffix tree satisfies the common prefix constraint: square submatrices of A with a common Iprefix share the same initial path in the tree; 4) the Isuffix tree satisfies the completeness constraint, since all square submatrices of A are represented in IST(A) as an Iprefix of some Isuffix of A.
δ_2D can be calculated in optimal time and space O(n^2)
To compute δ_2D of A∈Σ^n× n, we build the array d[1:n] which stores at position k the number of distinct k× k submatrices of A then we obtain δ_2D as max_k d[k]/k^2. Initially the Isuffix Tree IST(A) of A is built in time O(n^2) <cit.>, then IST(A) is visited in depth first order.
Let u be a node such that the path from the root to u contains |L(u)| Icharacters, and let e be an edge outgoing from u labeled with q_e=⟨ i,j,ℓ_1,ℓ_2 ⟩ where ℓ_1=|L(u)|+1. The Istring of a distinct square submatrix is obtained whenever appending an Iprefix of I_A_ij[ℓ_1,ℓ_2] to L(u) yields an Istring of odd length. Because traversing e may yield new square submatrices, d[·] must be updated accordingly. Let s=⌈ (ℓ_1-1)/2⌉+1 and t=⌈ℓ_2/2⌉. Every d[k] with k∈ [s,t] must be increased by one: to do this in constant time we set d[s]=d[s]+1 and d[t+1]=d[t+1]-1, with the convention that each value stored in an entry d[i] is implicitly propagated to positions i+1, i+2, …, n: the +1 is thus propagated from s up to t, and the propagation is canceled by the -1 added at position t+1.
At the end of the Isuffix tree visit, for each k∈ [1,n-1] we set d[k+1]=d[k+1]+d[k] so that d[k] contains the number of distinct k× k matrices encountered during the visit and we can compute δ_2D as max_k d[k]/k^2.
Note that when a leaf l_ij is reached via an edge e with label q_e=⟨ i,j,ℓ_1,ℓ_2 ⟩, the Iprefixes of I_A_ij[ℓ_1,ℓ_2] that have an Icharacter including some $_x symbol must not be counted. The range of well-formed Iprefixes can be determined in constant time, since it suffices to access one symbol in each of the last two trailing Icharacters of I_A_ij[ℓ_1,ℓ_2] to check whether they contain any $_x. Since the Isuffix tree can be constructed and visited in O(n^2) time, the overall time and space complexity of the above algorithm is O(n^2).
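The heart of the counting is the constant-time difference-array update performed on each edge. The sketch below (ours) isolates that arithmetic; the edge spans in the demo are invented placeholders standing in for a real Isuffix-tree traversal:

def process_edge(d, l1, l2):
    # an edge covering Icharacters l1..l2 yields one new distinct k x k
    # submatrix for every odd Istring length m in [l1, l2], i.e. k in [s, t]
    s = l1 // 2 + 1          # = ceil((l1 - 1) / 2) + 1
    t = (l2 + 1) // 2        # = ceil(l2 / 2)
    d[s] += 1
    d[t + 1] -= 1

def finish(d, n):
    # prefix-sum the difference array, then take the maximum of d[k] / k^2
    for k in range(1, n):
        d[k + 1] += d[k]
    return max(d[k] / k ** 2 for k in range(1, n + 1))

n = 3
d = [0] * (n + 2)
for l1, l2 in [(1, 1), (2, 5), (1, 3)]:   # hypothetical DFS edge spans
    process_edge(d, l1, l2)
print(finish(d, n))   # -> 2.0 on these made-up spans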
We now study how large the gap between the two measures δ_2D and γ_2D can be, recalling that by Lemma <ref> it is δ_2D≤γ_2D. In <cit.> Kociumaka et al. establish a separation result between the measures δ and γ by showing a family of strings with δ=O(1) and γ = Ω(log n). This bound is tight, since they also prove that γ = O(δlog(n/δ)).
The next theorem proves that the gap between the two measures in two dimensions is much bigger: δ_2D can be (asymptotically) smaller than γ_2D up to a √(n) factor.
There exists a family of n× n matrices with δ_2D=O(1) and γ_2D=Ω(√(n))
Consider the matrix M of size n× n whose first row is the string S composed of √(n)/2 consecutive blocks of size 2√(n) each. The i^th block S_i, for i=1,…,√(n)/2, is the string 1^i 0^(2√(n)-i), so S_i contains (from left to right) i initial ones and the remaining positions are zeros. The remaining rows of the matrix are all equal to #^n. Note that for any size k all distinct submatrices start in the first row or are equal to #^(k× k). Let δ_k be d_k× k/k^2, so that δ_2D can be rewritten as max{δ_k | k ∈ [1 , n]}. We compute δ_k for each possible k. For k=1, we have δ_1 = |Σ| = 3.
For k ≥√(n) it is δ_k=O(1), since k^2 ≥ n and there are at most (n-k+1)+1≤ n distinct k × k matrices.
Now consider δ_k with k∈ [2,…,√(n)). All distinct k × k submatrices (excluding #^(k× k)) are those having as first row a distinct substring of length k of S. All such substrings belong to the language 0^a1^b0^c with a∈[0,…,k], b∈ [0,…,k-a], c ∈ [0,…,k-a-b] such that a+b+c=k: indeed, no substring of length k<√(n) can contain two non-adjacent (and non-empty) groups of ones, since there is a run of at least √(n)>k consecutive zeros between any two such groups in S.
For fixed k, to count the strings in 0^a1^b0^c it is enough to count the possible choices for the starting and ending positions of the middle 1^b block, which are O(k^2). Hence for k∈ [2,…,√(n)) we have δ_k = O(k^2)/k^2 = O(1). This proves that δ_2D = O(1).
To estimate γ_2D, consider the i^th block on the first row: S_i= 1^i 0^(2√(n)-i). Each S_i with i=1,…,√(n)/2 occurs exactly once: the sequence 1^i occurs only inside blocks S_j with j≥ i, which begin with at least i ones, but inside S_j with j>i the sequence 1^i is followed by 2√(n)-j < 2√(n)-i zeros, so a copy of S_i would intersect the (j+1)^th block, where no leading zeros are present. As a consequence each submatrix M_i of size 2√(n)× 2√(n) having S_i as first row occurs exactly once as well. Since each M_i does not overlap any other M_j with j≠ i, at least √(n)/2 positions are needed in Γ_2D to cover them. This proves that γ_2D=Ω(√(n)).
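The construction is easy to experiment with. The sketch below (ours) builds the matrix for n=16 and evaluates δ_2D by brute force; at this toy size the maximum is attained at k=1, where δ_1=|Σ|=3:

from itertools import product

def delta_2d(M):
    n = len(M)
    return max(
        len({tuple(tuple(M[r][j:j + k]) for r in range(i, i + k))
             for i, j in product(range(n - k + 1), repeat=2)}) / k ** 2
        for k in range(1, n + 1))

def separation_matrix(n):
    # first row: sqrt(n)/2 blocks 1^i 0^(2 sqrt(n) - i); all other rows '#'
    r = int(n ** 0.5)
    S = "".join("1" * i + "0" * (2 * r - i) for i in range(1, r // 2 + 1))
    assert len(S) == n
    return [list(S)] + [["#"] * n for _ in range(n - 1)]

print(delta_2d(separation_matrix(16)))   # -> 3.0, small as the proof predicts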
Given a set S, the worst-case entropy <cit.> of S, defined as ⌈log_2 |S| ⌉, is the minimum number of bits needed to encode all the elements of S. In the following lemma, we extend the construction of Lemma <ref> to define a family ℱ of matrices with constant δ_2D and worst-case entropy Ω(√(n)log n).
There exists a family of square matrices over a constant-size alphabet Σ with common measure δ_2D=O(1) and worst-case entropy Ω(√(n)log n).
Consider again the matrix M of Lemma <ref>. Each of the (√(n)/2)! matrices obtained by permuting the √(n)/2 blocks S_i on the first row of M still has δ_2D=O(1). On the other hand, every encoding algorithm able to distinguish among these matrices needs at least log((√(n)/2)!)=Θ(√(n)log n) bits.
§ SPACE BOUNDS FOR TWO-DIMENSIONAL BLOCK TREES
Brisaboa et al. <cit.> generalized the Block Tree concept <cit.> to two-dimensional data, providing a compressed representation for discrete repetitive matrices that offers direct access to any compressed submatrix in logarithmic time. Given a matrix M∈Σ^n× n and an integer parameter k>1, assuming for simplicity that n is a power of k, i.e. n=k^α, M is split into k^2 non-overlapping submatrices, called blocks, each of size (n/k) × (n/k) = k^α-1× k^α-1. Each of these blocks corresponds to a node at level ℓ=1 in the 2D-BT, and the root of the tree at level ℓ=0 represents the whole matrix M. A tree is obtained by splitting (some of) the blocks at level ℓ, which have size (n/k^ℓ) × (n/k^ℓ), into k^2 non-overlapping blocks of size (n/k^ℓ+1) × (n/k^ℓ+1).
At any level ℓ, the nodes whose corresponding submatrix intersects the first occurrence, in row-major order, of some (n/k^ℓ) × (n/k^ℓ) submatrix (possibly themselves) are internal nodes, referred to in the following as marked; all other nodes are the level-ℓ leaves of the 2D-BT, referred to in the following as unmarked.
Only marked nodes are recursively split and expanded at level ℓ+1; an unmarked node corresponding to a block X instead points to the marked nodes, on the same level, corresponding to the level-ℓ blocks overlapping the first (in row-major order) occurrence O of X, and stores the relative offset ⟨ O_x,O_y⟩ of O with respect to the top-left corner of the first such block (see Figure <ref>).
The splitting process ends when explicitly storing blocks is more convenient than storing pointers to marked blocks. The resulting tree-shaped data structure has height h=O(log_k n). In the following the block related to node u in the tree is named B_u, and a block B_u is said to be marked (unmarked) if the corresponding node u is marked (unmarked).
Note that if X is unmarked, then the (up to) four blocks intersecting the first occurrence O of X are marked by construction. Calling D the (2n/k^ℓ) × (2n/k^ℓ) submatrix formed by these four blocks, we observe that D is itself a first occurrence (otherwise we would have another occurrence of X preceding O), and therefore the up to four blocks at level ℓ-1 containing D are marked. Repeating this argument shows that an unmarked node points to marked nodes that always exist in the same level, since none of their ancestors has been pruned at a previous level. Note that our marking scheme is slightly different from the one in <cit.>, in which, if a submatrix is pruned at some level, its content is seen as all 0s in the subsequent levels. That approach removes the issue of possibly pointing to pruned nodes, but makes it difficult to estimate the number of marked nodes in terms of the matrix content, which is our next objective.
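A single level of the marking scheme can be prototyped directly from its definition; in the Python sketch below (ours), first occurrences are collected in row-major order and every block they intersect is marked:

from itertools import product

def marked_blocks(M, b):
    # blocks (block-row, block-col) of size b x b intersecting the first
    # row-major occurrence of some b x b submatrix of M
    n = len(M)
    first = {}
    for i, j in product(range(n - b + 1), repeat=2):   # row-major scan
        key = tuple(tuple(M[r][j:j + b]) for r in range(i, i + b))
        first.setdefault(key, (i, j))
    marked = set()
    for i, j in first.values():
        for r in range(i // b, (i + b - 1) // b + 1):
            for c in range(j // b, (j + b - 1) // b + 1):
                marked.add((r, c))
    return marked

M = [list("aaaa") for _ in range(4)]
print(marked_blocks(M, 2))   # -> {(0, 0)}: repetitive data prunes 3 of 4 blocks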
It has already been proved <cit.> that one-dimensional Block Trees are worst-case optimal in terms of δ in the following sense: a BT on a string S∈Σ^n uses O(δlog(nlogσ/(δlog n))) space and there exist string families requiring that amount of space to be stored. No space analysis of the 2D-BT was given in <cit.>. In the following we show that a 2D-BT built on a matrix M∈Σ^n× n occupies O((δ_2D + √(δ_2D n))log(n/δ_2D)) space.
The number of marked nodes in any level of a 2D block tree is O(δ_2D + √(δ_2D n)).
Consider a generic tree level, and assume the blocks at this level have size k^ℓ× k^ℓ. In the following, the term block denotes a k^ℓ× k^ℓ submatrix of M whose upper-left corner is an entry of the form M[1+λ k^ℓ,1+μ k^ℓ] with λ, μ integers. For any distinct k^ℓ× k^ℓ submatrix of M, let O be the first occurrence of that submatrix in row-major order. O intersects m ∈{1,2,4} blocks B_1,… ,B_m, which are therefore marked. Let D be a 2k^ℓ× 2k^ℓ submatrix including all the m blocks B_1,… ,B_m, and therefore containing O; we call D a superblock. If m=4, D is unique; otherwise 4-m more blocks are chosen arbitrarily to reach the desired size. Let u be the number of superblocks constructed in this way; the number of marked blocks at this level is at most 4u, hence we proceed by bounding u. We partition the superblocks into 3 types: 1) those on a corner of M, i.e. including one of the entries M[1,1], M[1,n], M[n,1] or M[n,n]; these are at most four; 2) those not on a corner but including an entry in the first/last row/column; 3) those not including any entry in the first/last row/column. Let u_i be the number of superblocks of type i: clearly u = u_1+u_2+u_3=O(u_2+u_3).
Given a superblock D of the third type, we observe that D is included in k^2ℓ distinct 3k^ℓ× 3k^ℓ submatrices, one starting at each position of the k^ℓ× k^ℓ block touching the top-left corner of D (see Figure <ref>). Summing over all type-3 superblocks, we obtain a total of u_3 k^2ℓ submatrices of size 3k^ℓ× 3k^ℓ starting at distinct positions inside M. These submatrices are distinct: each of them contains, by construction, the first occurrence of some block; if two of them were equal, we would have two first occurrences of the same block starting at different positions, which is impossible.
Since, by the definition of δ_2D, the number of distinct submatrices of size 3k^ℓ×3k^ℓ is at most δ_2D(3k^ℓ)^2, we have
u_3 k^2ℓ≤ 9δ_2D k^2ℓ ⟹ u_3 ≤ 9δ_2D
Consider now a superblock D' of the second type bordering the upper edge of M (the other 3 cases are treated similarly; see superblock D” in Figure <ref>). Any 3k^ℓ× 3k^ℓ matrix which starts in the same row as D', but in any of the k^ℓ columns preceding D', is distinct by the same argument presented before (see again Figure <ref>). Reasoning as above we find u_2 k^ℓ distinct 3k^ℓ× 3k^ℓ matrices, which implies u_2 ≤ 9δ_2D k^ℓ. Since it is also u_2≤ n/k^ℓ, we have u_2≤min(9δ_2D k^ℓ,n/k^ℓ)=O(√(δ_2D n)). We conclude that the number of marked blocks at any level is O(u) = O(u_1+u_2+u_3)=O(δ_2D + √(δ_2D n)).
The 2D-BT takes O((δ_2D + √(δ_2D n))log(n/δ_2D)) space. This space is optimal within a multiplicative factor O(log n).
The 2D-BT as described at the beginning of the section has height log_k n. This height can be reduced by choosing a different size for the blocks at level ℓ=1. Assuming for simplicity n=√(δ_2D)k^α, M is initially divided into blocks of size k^α× k^α. In this way the height of the tree becomes O(log_k (n/δ_2D)); using the bound of Lemma <ref>, we get an overall number of marked nodes O((δ_2D + √(δ_2D n))log_k (n/δ_2D)). Each marked node produces at most k^2 unmarked nodes on the next level, hence the tree has at most O(k^2(δ_2D + √(δ_2D n))log_k (n/δ_2D)) nodes, which is O((δ_2D + √(δ_2D n))log(n/δ_2D)) for k=O(1).
To prove the worst-case quasi-optimality, let ℱ be the family of matrices having δ_2D=O(1) from Lemma <ref>: for any coder C:ℱ→{0,1}^* representing all the matrices in ℱ, there exists a matrix W∈ℱ such that |C(W)|=Ω(√(n)log n) bits, while the 2D-BT takes O(√(n)log^2 n) bits of space for any matrix in ℱ.
The following result shows that the bound of Lemma <ref> cannot be substantially improved, at least when δ_2D = O(1). Since the proof of Lemma <ref> shows that the number of marked blocks in the interior of the matrix is bounded by O(δ_2D), we consider a family of matrices that have a hard-to-compress first row.
There exists an infinite family of matrices M∈Σ^n× n with δ_2D=O(1) such that the 2D-BT of M has Ω(√(n)) marked nodes on a single level.
Let M∈Σ^n× n be the matrix of Lemma <ref> with n = k^2α, so that n is both a power of k and a perfect square. We have already proven that δ_2D(M)=O(1). Consider the 2D-BT built on M: for block sizes larger than 4√(n)× 4√(n), each block on the upper edge of M entirely includes in its first row at least one of the strings S_i of the form 1^i 0^(2√(n)-i) composing S. Since each S_i occurs exactly once, any such block is a first occurrence and hence marked. In particular, at level ℓ = α - ⌈log_k 4 ⌉, counting levels from the root (ℓ=0) to the leaves, all Θ(√(n)) blocks on M's upper side are marked and the lemma follows.
In <cit.> the authors introduced a variant of the one-dimensional block tree, called Γ-tree, in which, given a not necessarily minimum string attractor Γ, the marked nodes at each level are those close to an attractor position. The Γ-tree is then enriched with additional information that makes it a compressed full-text index using O(γlog(n/γ)) space, where γ = |Γ| is the size of the string attractor. Following the ideas of <cit.>, we now show how to modify the construction of the 2D-BT assuming we have available a, not necessarily minimum, 2D-attractor Γ_2D.
To simplify the explanation we assume again that n=k^α for some α>0. Given a matrix M∈Σ^n× n and an attractor Γ_2D={(i,j)_1,…,(i,j)_γ} for M, the splitting process is unchanged, but at level ℓ we mark every node u corresponding to a n/k^ℓ× n/k^ℓ block B_u which includes a position p∈Γ_2D, together with all (the at most 8) nodes of the blocks adjacent to B_u. The remaining nodes are unmarked, and each of them stores a pointer ptr_B to the marked block B on the same level ℓ where an occurrence O of its corresponding submatrix that spans a position p∈Γ_2D begins, as well as the relative offset of O within B. The claimed occurrence O crossing p∈Γ_2D exists, otherwise Γ_2D would not be an attractor for M, and all the (at most) 4 blocks intersecting O are guaranteed to be marked, as they contain p or are adjacent to a block containing p. We also point out that an unmarked block B' cannot overlap the pointed occurrence O containing p∈Γ_2D: if this happened, B' would be adjacent to a block B with p∈ B, hence B' would be marked as well.
If n is not a power of k, the blocks on the right and bottom edges of M are not square, but no special treatment is needed, as all the essential properties above remain valid. Consider a rectangular block B of size a× b on the edge: if B is marked, B is recursively split into smaller blocks (some of them rectangular); if B is unmarked, the square matrix B' of size c× c with c=max(a,b) that includes B occurs somewhere else crossing an attractor position while spanning at most four marked blocks, and then B occurs there as well. Note that B, contrary to B', may not cross any attractor position, but it is guaranteed to point to marked blocks only, so no unmarked block ever points to another unmarked one.
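The attractor-based marking rule is even simpler to prototype. In the sketch below (ours), at a level with b× b blocks each attractor position marks its own block and the (up to 8) adjacent ones, so at most 9γ blocks are marked per level:

def gamma_tree_marks(n, b, attractor):
    # blocks (block-row, block-col) marked by the attractor-based scheme
    g = (n + b - 1) // b                      # blocks per side
    marked = set()
    for (i, j) in attractor:
        bi, bj = i // b, j // b
        for r in range(max(0, bi - 1), min(g, bi + 2)):
            for c in range(max(0, bj - 1), min(g, bj + 2)):
                marked.add((r, c))
    return marked

print(len(gamma_tree_marks(16, 4, {(0, 5), (9, 9)})))   # -> 13 <= 9 * 2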
Given M∈Σ^n× n and an attractor Γ_2D(M)={(i,j)_1,…,(i,j)_γ} of size γ, the 2D-BT built using Γ_2D takes O(γlog(n/γ)) space.
Each position p∈Γ_2D can mark at most 9 distinct blocks per level: the block B including p and the (up to) eight blocks adjacent to B. Hence the number of marked blocks per level is ≤ 9γ. Assuming again n=√(γ)k^α and dividing M initially into blocks of size k^α× k^α, we get a shallower tree of height O(log_k (n/γ)) with O(γ) nodes on the second level (ℓ=1) and an overall number of marked nodes O(γlog_k (n/γ)). Since any marked node produces at most k^2 unmarked nodes on the next level, for any k=O(1) the 2D-BT built using any attractor of size γ takes O(γlog(n/γ)) space.
Access to a single symbol M[i][j] proceeds much as in the one-dimensional Γ-tree. Assume node u at level ℓ is reached with a local offset ⟨ O_x,O_y⟩. If u is a marked node, the child c of u in which the searched cell falls is determined, and the coordinates ⟨ O_x,O_y⟩ are translated into local coordinates on c, where the search proceeds at the next level. If instead node u is unmarked, the marked node v on the same level is reached via the pointer ptr_v stored in u, the actual offset inside the block B_v is determined using the offset ⟨ O_x',O_y'⟩ stored in u, and the access procedure continues on the marked node v. The descending process halts when a marked block at the deepest level is reached and the corresponding explicit symbol is retrieved. The access procedure costs O(log n), as we visit at most 2 nodes on the same level before descending to the next one.
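The access path can be illustrated with a deliberately naive, flat prototype (ours): it stores per-level information for all blocks instead of keeping only the children of marked nodes, but it performs exactly the marked/unmarked hops described above (here with k=2 and n a power of 2):

from itertools import product

class Toy2DBT:
    def __init__(self, M):
        self.M, self.n = M, len(M)
        self.levels = []                     # (b, marked, firstocc) per level
        b = self.n // 2
        while b >= 1:
            first = {}
            for i, j in product(range(self.n - b + 1), repeat=2):
                first.setdefault(self._sub(i, j, b), (i, j))
            # every block remembers the first occurrence of its content
            firstocc = {(bi, bj): first[self._sub(bi * b, bj * b, b)]
                        for bi, bj in product(range(self.n // b), repeat=2)}
            marked = set()                   # blocks touching a 1st occurrence
            for oi, oj in first.values():
                for r in range(oi // b, (oi + b - 1) // b + 1):
                    for c in range(oj // b, (oj + b - 1) // b + 1):
                        marked.add((r, c))
            self.levels.append((b, marked, firstocc))
            b //= 2

    def _sub(self, i, j, b):                 # content of a b x b window
        return tuple(tuple(self.M[r][j:j + b]) for r in range(i, i + b))

    def access(self, x, y):
        # per level: at most one hop from an unmarked block to the first
        # occurrence of its content, then descend; O(#levels) steps in total
        for b, marked, firstocc in self.levels:
            bi, bj = x // b, y // b
            if (bi, bj) not in marked:
                oi, oj = firstocc[bi, bj]
                x, y = oi + x % b, oj + y % b
        return self.M[x][y]                  # symbol of a marked 1 x 1 block

M = [list("abab"), list("baba"), list("abab"), list("baba")]
print(Toy2DBT(M).access(3, 2))               # -> 'b' (= M[3][2])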
§ APPENDIX
One of the main reasons for introducing δ is that it can be computed efficiently: <cit.> describes a linear algorithm to compute δ(S) with a single visit of the suffix tree of S. We now show that an efficient algorithm for computing δ_2D can be derived in a similar way using the Lsuffix tree introduced in <cit.>, which generalises the suffix tree to square matrices.
Given a matrix A∈Σ^n× n, <cit.> defines the notions of prefix and suffix of a matrix as follows: the j^th prefix of A is A[1:j][1:j], the square j× j submatrix of A starting at A's top-left corner; dually, the j^th suffix of A is A[j:n][j:n], the square (n-j+1)× (n-j+1) submatrix ending at A's bottom-right corner.
Let A_d be the unique square submatrix of A having as main diagonal the d^th diagonal of A: that diagonal consists of the elements A[i][j] with i-j=d (A's main diagonal has index d=0; diagonals above (below) it have decreasing (increasing) indices). The important fact is that any square submatrix of A is a prefix of some suffix of a certain A_d. A linear representation (Lstring) of a square matrix A[1:n][1:n] is defined as follows: A is divided into n L-shaped characters, where the i^th Lcharacter is A[i][1:i-1]· A[1:i][i], i.e. the string of length 2i-1 obtained by writing the first i-1 elements of the i^th row followed by the first i elements of the i^th column. The one-dimensional Lstring L_a uniquely identifying the matrix A is obtained by concatenating the n L-shaped characters of A. A chunk generalises the notion of substring to two dimensions and is defined as a sequence of one or more consecutive L-shaped characters. Given an Lstring L_a, a chunk is indicated in substring fashion as L_a[p,q], the concatenation of the q-p+1 Lcharacters at positions {p,p+1,…,q-1,q} in L_a.
A chunk starting with an Lcharacter in Σ is called an Lstring and represents a matrix; otherwise it represents an L-shaped piece centered on some diagonal d of A. The Lsuffix tree LT_A of A is a compacted trie representing all the suffixes of the matrices A_d with 0≤ |d| < n, where every A_d is completed with a bottom row and a rightmost column of $_i characters, each $_i being a new symbol; this ensures that no suffix of any A_d is a prefix of another. LT_A satisfies, among others, the following properties: 1) each edge outgoing from a node is labeled with a chunk, and the chunks on any two sibling edges start with different Lcharacters of the same length; 2) the concatenation of the chunks labeling the edges on the path from the root to a leaf gives exactly the j^th suffix of a matrix A_d, and that leaf is labeled with (j,d). As a consequence, concatenating the chunks labeling the edges on any path starting from the root (possibly taking only an initial sequence of Lcharacters from the last chunk) yields a distinct prefix of a suffix of some A_d, hence a distinct square submatrix of A. A chunk on an edge is represented in constant space as a quadruple ⟨ p,q,j,d ⟩, meaning that the chunk is equal to the substring of Lcharacters L_suf_j(L_a_d)[p,q], where L_suf_j(L_a_d) is the Lstring of the j^th suffix of A_d.
δ_2D can be calculated in O(n^2log n) time and O(n^2) space
To compute δ_2D, we build the array d[·] which stores at position k the number of distinct k× k submatrices of A, i.e. d[k]=d_k× k for any k∈[1,n]; the measure δ_2D is then easily computed as max_k d[k]/k^2. Initially the Lsuffix tree LT_A of A is built in O(n^2log n) time <cit.>, then LT_A is visited in depth-first order.
Sitting at a node x of LT_A, assume that the square matrix S_x obtained by concatenating the chunks on the path from the root to x has size ℓ×ℓ, and let e be an outgoing edge labeled with the chunk c=⟨ p,q,j,d ⟩. A distinct matrix of increasing size is obtained by appending, one after the other, the Lcharacters of c to S_x; to account for this, every d[k] with k∈ [ℓ+1,ℓ+q-p] has to be increased. Note that if the node reached from x through e is a leaf, the last Lcharacter of the chunk c contains only $_i symbols and must not be counted, so only the positions in [ℓ+1,ℓ+q-p-1] are increased. When the node x is reached, the range increase on d is simply done by setting d[ℓ+1]=d[ℓ+1]+1 and d[ℓ+q-p+1]=d[ℓ+q-p+1]-1. When all nodes have been visited, for each k∈ [1,n-1] we set d[k+1]=d[k]+d[k+1]; this produces the correct range-increase behaviour, as the +1 is propagated to all positions from ℓ+1 up to ℓ+q-p and the propagation is then canceled by the -1 added at the next position. The Lsuffix tree <cit.> has O(n^2) nodes and edges, hence it is visited in O(n^2) time and takes O(n^2) space. The overall complexity is O(n^2log n), dominated by the Lsuffix tree construction.
In the full version of the paper we show that the cost of computing δ_2D can be reduced to O(n^2) by replacing the Lsuffix tree of <cit.> with the Isuffix tree of <cit.>, in which Lcharacters are replaced by row-type and column-type Icharacters. The Isuffix tree is slightly more complex, since not every node represents a square submatrix, but it can still be used to compute δ_2D and it can be constructed in O(n^2) time.
In the proof we omit the 2D subscript to ease the reading.
Consider the matrix M of size n× n whose first row is the string S composed of √(n) consecutive blocks of size √(n) each. The i^th block S_i with i=1,…,√(n) is the string 1^i 0^(√(n)-i), so S_i contains (from left to right) i initial ones and the remaining positions are zeros. The remaining rows of the matrix are all equal to #^n. Note that for any size k all distinct submatrices start in the first row or are equal to #^(k× k). Let δ_k be d_k× k/k^2, so that δ can be rewritten as max{δ_k | k ∈ [1 , n]}. We compute δ_k for each possible k. For k=1, we have δ_1 = |Σ| = 3.
For k ≥√(n) it is δ_k=O(1), since k^2 ≥ n and there are at most (n-k+1)+1 distinct k × k matrices.
Now consider δ_k with k∈ [2,…,√(n)). All distinct k × k submatrices (excluding #^(k× k)) are those having as first row a distinct substring of length k of S. All such substrings are of the form 1^a0^b1^c or 0^a1^b0^c with a,b,c ∈ [0,…,k] such that a+b+c=k; to see this, note that any substring of the form 0^a1^b0^c1^d or 1^a0^b1^c0^d, having two non-adjacent and non-empty groups of zeros and ones, has length greater than √(n): in 0^a1^b0^c1^d the middle part 1^b0^c must be a full block of size √(n), and in the case 1^a0^b1^c0^d we have a≥ 1; if the leftmost 1 in 1^a starts in block i, then b=√(n)-i and c=i+1 (c is the number of initial ones in the next block) and d≥ 1, so a+b+c+d>√(n).
To count the substrings of the first kind 1^a0^b1^c (the same holds for the other form), it is enough to count the possible choices for the starting and ending positions of the middle 0^b block, which are k^2 (note that the case 1^k, where no zero is present, is included in the counting of 0^a1^b0^c). So for k∈ [2,…,√(n)) we have δ_k = (2k^2+1)/k^2 = O(1), which proves that δ = O(1).
Now consider the i^th block on the first row: S_i= 1^i 0^(√(n)-i). Each S_i with i=1,…,√(n) occurs exactly once, since the sequence 1^i occurs only inside blocks S_j with j≥ i, which begin with at least i ones, but for j>i the sequence 1^i is followed by fewer than √(n)-i zeros. As a consequence each submatrix M_i of size √(n)×√(n) having S_i as first row occurs exactly once as well. Since each M_i does not overlap any other M_j with j≠ i, at least √(n) positions are needed in Γ to cover them; this proves that γ=Ω(√(n)).
First we build the Isuffix tree IST(A); then IST(A) is augmented as follows. For each leaf l_ij such that (i,j) is a block starting position, i.e. both i and j are of the form ak^ℓ+1 with a≥ 0, we store a backward pointer to the node u representing the k^ℓ× k^ℓ submatrix prefixing A_ij. This is done for each block size k^ℓ× k^ℓ with ℓ∈ [log_k n]. Note that a k^ℓ× k^ℓ block may be represented by an edge u→ v instead of a node u; in that case the backward pointer is set to v. In total ∑_i=1^log_k n k^2i=O(n^2) additional pointers are stored, hence the space occupation of IST(A) is unchanged.
To set the additional pointers, log_k n bitmaps of size n^2 each, initially all zeros, are stored. Let b_ℓ be the ℓ^th bitmap: b_ℓ[p] is set to 1 if the suffix A_ij related to the p^th leaf l_ij starts at a position (i,j) with both i and j of the form ak^ℓ+1 with a≥ 0, and b_ℓ[p]=0 otherwise. This initialisation costs O(n^2) time overall: we iterate over the leaves and, for the current leaf l_ij, test for increasing ℓ whether i and j are of the form ak^ℓ+1; this tests at most one extra size per leaf before moving to the next one, so the total number of bitmap accesses is proportional to the total number of blocks, which is O(n^2). Rank and Select data structures answering their queries in O(1) time are built on each bitmap b_ℓ, for a total space occupancy of O(n^2 log n)+o(n^2log n) bits, asymptotically equal to the space occupied by the Isuffix tree itself; the time required for this step is O(n^2log n). Once the bitmaps are initialised, IST(A) is visited: let u be the current node representing a k^ℓ× k^ℓ submatrix; the leaves descending from u are those in a contiguous range [s,e] (the ranges can be determined in advance with a visit of the tree and stored). For the leaves in this range starting at a block position we have to set a backward pointer to u: each such leaf index is retrieved via b_ℓ.select_1(s,e,m) with m∈ [1,b_ℓ.rank_1(s,e)]. The cost is again proportional to the number of blocks, i.e. O(n^2).
After the pre-processing, the 2D-BT is constructed level by level starting from the root. Let ℓ be the level whose blocks have size k^ℓ× k^ℓ. We iterate over the block starting positions: let (i,j) be the current one; the leaf l_ij is retrieved in O(1) time using an inverted index mapping each position to its leaf, and from there the backward pointer is traversed to the node u corresponding to the matrix B of size k^ℓ× k^ℓ prefixing A_ij. A range-minimum query on the range [s,e] of leaves descending from u returns the suffix prefixed by the first occurrence B' of B, and the at most 4 blocks intersecting B' are marked (the intersecting blocks are retrieved in O(1) time once the starting position of B' is known). After this first marking visit, one more visit is done to set the pointers of the unmarked nodes: let A_i'j' be the suffix returned by the range-minimum query after traversing the backward edge from the suffix A_ij; here (i',j')≠(i,j), and in fact the two positions are at distance at least k^ℓ, otherwise B starting at position (i,j) would intersect B' and be marked. The offset to store in the unmarked node is that of A_i'j' inside the unique block including position (i',j'), whose starting point can be computed in O(1).
In total we visit the tree a constant number of times and, thanks to the backward pointers and the RMQ data structure, the marking process and the offset computation (in the case of an unmarked node) for a single block take O(1) time. Hence the time spent to build the 2D-BT is the number of blocks, plus the cost of the visits, plus the construction of the rank/select and RMQ data structures, which is O(n^2 log n) overall.